When German journalist Martin Bernklau typed his name and location into Microsoft’s Copilot to see how his articles would be picked up by the chatbot, the answers horrified him. Copilot’s results asserted that Bernklau was an escapee from a psychiatric institution, a convicted child abuser, and a conman preying on widowers. For years, Bernklau had served as a courts reporter, and the AI chatbot had falsely blamed him for the crimes whose trials he had covered.

The accusations against Bernklau weren’t true, of course, and are examples of generative AI’s “hallucinations.” These are inaccurate or nonsensical responses to a prompt provided by the user, and they’re alarmingly common. Anyone attempting to use AI should always proceed with great caution, because information from such systems needs validation and verification by humans before it can be trusted.

But why did Copilot hallucinate these terrible and false accusations?

  • vrighter@discuss.tchncs.de · 8 hours ago

    Yeah, that implies that the other network(s) can tell right from wrong. Which they can’t. Because if they could, the problem wouldn’t need solving.

    • Rivalarrival@lemmy.today · 7 hours ago

      What other networks?

      It currently recognizes when it is told it is wrong: it is told to apologize to its conversation partner and to provide a different response. It doesn’t need another network to tell it right from wrong. It needs access to the previous sessions where humans gave it that information.

      • vrighter@discuss.tchncs.de · 5 hours ago

        here’s that same conversation with a human:

        “why is X?” “because y!” “you’re wrong” “then why the hell did you ask me if you already know the answer?”

        What you’re describing will train the network to get the wrong answer and then apologize better. It won’t train it to get the right answer.
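
        To make that concrete, here’s a toy sketch in Python (purely illustrative, hypothetical names, not any real training pipeline) of what naively fine-tuning on such a transcript would end up imitating:

        ```python
        # Toy illustration: which turns become imitation targets if you fine-tune
        # on a "wrong answer -> pushback -> apology" transcript as-is.
        transcript = [
            ("user", "why is X?"),
            ("assistant", "because y!"),                         # the wrong answer
            ("user", "you're wrong"),
            ("assistant", "Sorry, you're right, my mistake."),   # the apology
        ]

        # Naive supervised fine-tuning treats every assistant turn as a target:
        targets = [text for role, text in transcript if role == "assistant"]
        print(targets)
        # ['because y!', "Sorry, you're right, my mistake."]
        # Both the wrong answer and the apology get reinforced; nothing in this
        # data supplies the right answer to learn instead.
        ```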

        • Rivalarrival@lemmy.today · 1 hour ago

          I can see why you would think that, but to see how it actually goes with a human, look at the interaction between a parent and child, or a teacher and student.

          “Johnny, what’s 2+2?”

          “5?”

          “No, Johnny, try again.”

          “Oh, it’s 4.”

          Turning Johnny into an LLM: the next time someone asks, he might not remember 4, but he does remember that “5” consistently gets him a “that’s wrong” response. So does “3”.

          But the only way he knows 5 and 3 get a negative reaction is by training on his own data, learning from his own mistakes.

          He becomes a better and better mimic, which gets him up to about a 5th-grade level of intelligence instead of a toddler’s.
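
          Roughly, the loop I’m describing looks like this (a toy sketch only, hypothetical names, not an actual training setup):

          ```python
          import random

          # Toy sketch of the "Johnny" loop: the learner only ever hears
          # "that's wrong" and learns by ruling answers out, one mistake at a time.
          candidate_answers = ["3", "4", "5", "22"]
          known_wrong = set()   # memory of answers that drew a negative reaction
          correct = "4"         # the teacher knows this; the learner never sees it directly

          for attempt in range(1, 10):
              options = [a for a in candidate_answers if a not in known_wrong]
              guess = random.choice(options)
              if guess == correct:
                  print(f"attempt {attempt}: {guess} -> no complaint, keep saying it")
                  break
              known_wrong.add(guess)   # all that's learned is "not that one"
              print(f"attempt {attempt}: {guess} -> 'that's wrong'")
          ```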

      • LillyPip@lemmy.ca · edited · 5 hours ago

        Have you tried doing this? I have, for nearly a year, on the more ‘advanced’ pro versions. Yes, it will apologise and try again – and it gets progressively worse over time. There’s been a marked degradation as it progresses, and all the models are worse now at maintaining context and not hallucinating than they were several months ago.

        LLMs aren’t the kind of AI that can evaluate themselves and improve like you’re suggesting. Their logic just doesn’t work like that. A true AI will come from an entirely different type of model, not from LLMs.

        e: time. Wow, where did this year go?