Hermansson logged in to Google and began looking up results for the IQs of different nations. When he typed in “Pakistan IQ,” rather than getting a typical list of links, Hermansson was presented with Google’s AI-powered Overviews tool, which, confusingly to him, was on by default. It gave him a definitive answer of 80.

When he typed in “Sierra Leone IQ,” Google’s AI tool was even more specific: 45.07. The result for “Kenya IQ” was equally exact: 75.2.

Hmm, these numbers seem very low. I wonder how these scores were determined.

  • ignirtoq@fedia.io

    Hmm, these numbers seem very low. I wonder how these scores were determined.

    They weren’t, because LLMs don’t have reasoning ability, at least not in the way you as a human do. They are generative models, so the short answer is that the model most likely made the numbers up, though there’s a chance it pulled them directly from some training data that’s likely completely unrelated to the user’s prompt.

    What they generate is supposed to have multidimensional correlations similar to those in the input data, so there are complex relationships between what the question asked and the output it gave, but the process looks nothing like the steps you would go through to answer the same question.
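
    To make that concrete, here’s a toy Python sketch of weighted next-token sampling. The distribution, tokens, and probabilities are entirely invented for illustration; this is a cartoon of generative decoding in general, not a claim about how Google’s actual system works.

    ```python
    import random

    # Hypothetical toy data: a "language model" is just a conditional
    # distribution over next tokens given a context. These numbers are
    # invented for the demo, not taken from any real model.
    toy_distribution = {
        ("Sierra", "Leone", "IQ", "is"): [("45.07", 0.4), ("low", 0.35), ("disputed", 0.25)],
    }

    def sample_next_token(context, temperature=1.0):
        """Pick a next token by weighted sampling. Note that no arithmetic,
        lookup, or fact-checking happens anywhere in this process."""
        candidates = toy_distribution[tuple(context)]
        tokens = [tok for tok, _ in candidates]
        weights = [p ** (1.0 / temperature) for _, p in candidates]
        return random.choices(tokens, weights=weights, k=1)[0]

    print(sample_next_token(["Sierra", "Leone", "IQ", "is"]))
    # May well print "45.07": a plausible-looking continuation remixed from
    # training text, not a measured or computed result.
    ```

    The point of the sketch: the model only ever asks “what token tends to follow this text,” never “what is the answer to this question.”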

    • slopjockey@awful.systemsOP

      so the short answer is the model most likely made the numbers up

      Right crime, wrong perp. The Google overview “correctly” sourced the IQ scores from one of Arthur Jensen’s “studies,” in which he reports the IQ of every country in the world and fully makes up the numbers for over a third of them.

      • bitofhope@awful.systems

        I expected the same, except with the names being Richard Herrnstein and Charles Murray, and/or Richard Lynn and Tatu Vanhanen.

  • BlueMonday1984@awful.systems

    "garbage in, garbage out" my beloathed

    Not the first time this has happened (Google’s own AI overviews have misinterpreted u/fucksmith, eaten rocky onions, and hallucinated cats on the moon before), but this is probably the worst such incident.

    Anyways, sidenote time:

    Right now, there’s no legal precedent determining whether or not “AI overviews” like Google’s are protected under Section 230, but between shit like this and the recent lawsuit against character.ai, I suspect there’s gonna be plenty of effort to deny them Section 230 protection.

    If that happens, I expect it will put an immediate end to public-facing autoplag like this, as such products would immediately become legal time bombs waiting to go off. I suspect it would also kill any new attempts at public-facing AI for the foreseeable future, for similar reasons.

    As for AI as a concept, which I’ve discussed previously, I expect this incident will help further a public notion of “artificial intelligence” being an oxymoronic concept, and of intelligence being something that either cannot be replicated by artificial means, or something which should not be replicated by artificial means.

      • grue@lemmy.world

        Okay, but it’s still got nothing to do with the dishonest rhetorical technique called “JAQing off” (a.k.a. “Just Asking Questions,” a.k.a. “sealioning”).

        • kitnaht@lemmy.world

          It’s kind of a … symptom … of the community we’re in. I wouldn’t read into it too deeply.

    • YourNetworkIsHaunted@awful.systems

      I think the usual output from the AI Overview (or at least the goal) is to give a long and ostensibly Fair and Balanced summary. So in this case it would be expected to throw out something like “some say that people from Australia are extra dumb because of these studies, but others contend that those studies were badly performed.” It answers the question in more words to represent both sides so that it can pretend not to be partisan.

      • grue@lemmy.world

        Let me be more clear about this: an LLM trying to answer a question (successfully or otherwise) is doing basically the opposite of a human asking questions (disingenuously, as in “JAQing off,” or otherwise).

        I wasn’t trying to solicit comments trying to explain what the LLM was doing; my point was simply that OP is confused and used a term incorrectly in the title.

    • khalid_salad@awful.systems

      It’s a reference to the fact that the kind of person who would try to justify this sort of race science is also the kind of person who is “just asking questions.” Combined with the tech industry’s tepid “it’s just a tool, it’s not inherently evil” bullshit, I think OP’s point is obvious to anyone who isn’t a pedant deliberately acting in bad faith.