• FMT99@lemmy.world · 2 months ago

    Why would you ask a bot to generate a stereotypical image and then be surprised it generates a stereotypical image? If you give it a simplistic prompt, it will come up with a simplistic response.

    • 0x0@programming.dev · 2 months ago

      So the LLM answers what’s relevant according to stereotypes instead of what’s relevant… in reality?

      • Grimy@lemmy.world · 2 months ago

        It just means there’s a bias in the data that is probably being amplified during training.

        It answers what’s relevant according to its training.
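
        A toy sketch of what "bias amplified during training" can mean (hypothetical numbers, not any specific model): if the data associates an answer with one group 70% of the time, a model that always picks the most likely answer returns that group 100% of the time.

        ```python
        # Toy illustration: a 70/30 skew in training data becomes a 100/0 skew
        # when the model greedily picks the most frequent association.
        from collections import Counter
        import random

        random.seed(0)

        # Hypothetical training data: 70% of examples pair the prompt with "man".
        training_data = ["man"] * 70 + ["woman"] * 30
        counts = Counter(training_data)
        total = sum(counts.values())

        def generate_greedy():
            # Always return the most frequent answer seen in training.
            return counts.most_common(1)[0][0]

        def generate_sampled():
            # Sample in proportion to the training counts instead.
            return random.choices(list(counts), weights=list(counts.values()))[0]

        greedy_outputs = Counter(generate_greedy() for _ in range(1000))
        sampled_outputs = Counter(generate_sampled() for _ in range(1000))

        print("training bias:", {k: v / total for k, v in counts.items()})  # 0.7 / 0.3
        print("greedy outputs:", greedy_outputs)    # all "man": the bias is amplified
        print("sampled outputs:", sampled_outputs)  # roughly mirrors the 70/30 split
        ```

        Either way, the output only reflects what was in the training data, which is the point of the comment above.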