• tal@lemmy.today · 10 months ago

    As an Artificial Intelligence proponent, I want to see the field succeed and go on to do great things. That is precisely why the current exaggerated publicity and investment around “AI” concern me. I use quotation marks there because what is often referred to as AI today is not at all what the term once described. The recent surge of interest in AI owing to Large Language Models (LLMs) like ChatGPT has put this vaguely defined term at the forefront of dialogue on technology. But LLMs are not meaningfully intelligent (we will get into that), yet it has become common parlance to refer to these chatbots as AI [1][2].

    Pretty sure that this has been happening for as long as AI and similar things like machine learning have been a thing: overstated promises, people consistently presenting research or products or investments using the sexiest terms they can manage. A new term comes out (e.g. “Artificial General Intelligence”) to differentiate more-sophisticated AI, and it gets latched onto and dragged down into the muck too.

    I think that the fix is to come up with terms attached to concrete technical capabilities, where there’s no fuzziness for people to exploit when promoting things that are less sophisticated than they’d like them to appear.

    • GenderNeutralBro@lemmy.sdf.org · 10 months ago

      AGI is not a new term. It’s been in use since the 90s and the concept has been around for much longer.

      I agree that we should use more specific terms whenever possible. I call LLMs “LLMs” or “language models”. Not that it’s inaccurate to call them AI, but it’s not useful either. AI is an extraordinarily broad term. Pac-Man had AI. And there’s a large portion of the population who thinks it means something much, much more lofty and specific than it ever really has. At this point, the term should probably be abandoned. Any attempt to reclaim it is bound to fail.
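
      To see how low that bar is, consider that a classic arcade ghost’s chase behavior boils down to a few comparisons. Here is a minimal sketch in Python of Blinky-style targeting; the names and grid details are illustrative, not the actual arcade code, which works in tile units and has more states:

      ```python
      # Sketch of Pac-Man-style ghost "AI": in chase mode, pick whichever
      # legal move brings the ghost closest to Pac-Man's current tile.
      def chase_step(ghost, pacman, legal_moves):
          def dist_sq(move):
              gx, gy = ghost[0] + move[0], ghost[1] + move[1]
              return (gx - pacman[0]) ** 2 + (gy - pacman[1]) ** 2
          return min(legal_moves, key=dist_sq)

      # Ghost at (5, 5), Pac-Man at (5, 9): the ghost steps down toward him.
      print(chase_step((5, 5), (5, 9), [(0, 1), (0, -1), (1, 0), (-1, 0)]))
      ```

      Rule-based behavior like this has been sold as “AI” in games for decades.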

      I see this as yet another example of a technical term being bastardized by mainstream press who do not understand the field. It happens all the time with tech. I remember when “virus” actually meant something; the industry eventually abandoned the term because it was bastardized to the point of uselessness. Now we just say “malware,” and if we need to refer to viruses specifically… well, we mostly just don’t.

      This is a linguistic problem more than a technical problem.

      • Pete Hahnloser@beehaw.org · 10 months ago

        I also go to great lengths to say LLMs vs. AI.

        But, I also spent most of my career in the “mainstream press,” and reporters can be surprisingly blasé about what technology means if that isn’t their beat. I’ve had to spike a story or two about new police tech that includes zero quotes from anyone outside the PD and their vendor. I’ve held an order of magnitude more so they could be fixed ahead of publication.

        And this was 15-20 years ago, when newsrooms employed people with more than three years of experience. I heavily curate my news diet on an ongoing basis, as outlets can go down the shitter in a matter of weeks with buyouts.

        What we get today from many supposedly reliable outlets is not helpful to society.

      • tal@kbin.social · 10 months ago

        > AGI is not a new term. It’s been in use since the 90s and the concept has been around for much longer.

        It’s not new today, but it post-dates “AI” and hit the same problem then.

        • jansk@beehaw.org (OP) · 10 months ago

          And before AI we had “Thinking Machines”.

          Perhaps we should go back to that. OpenAI et al. can brand themselves “Think-Tech”.

    • eveninghere@beehaw.org · 10 months ago

      What’s funny is that we complain about how the term “AI” is used, but nobody can actually define intelligence.

      • vexikron@lemmy.zip · 10 months ago

        https://en.m.wikipedia.org/wiki/Intelligence

        > Intelligence has been defined in many ways: the capacity for abstraction, logic, understanding, self-awareness, learning, emotional knowledge, reasoning, planning, creativity, critical thinking, and problem-solving.

        LLMs are pretty capable of abstraction and understanding.

        Though they obviously use logic in the sense that they are constructed from it, they are not really capable of actual logical analysis, beyond emulating it.

        They can’t really do any of the other attributes of intelligence at all; at best they emulate them, somewhere between decently and poorly.

        • eveninghere@beehaw.org · 10 months ago

          The problem with these definitions is that they are verbal. Some could argue ChatGPT is capable of understanding, while others could argue the opposite. I don’t even believe it is capable of abstraction.

          The Turing test was novel in that it let us test the intelligence of AIs without actually defining intelligence. And it’s still useful, because researchers probably can’t agree on a rigorous definition of intelligence.
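
          Sketched as a protocol, the test needs no definition at all; the judge only compares behavior. A toy sketch in Python, where the judge, human, and machine are illustrative stand-in functions, not any real benchmark:

          ```python
          import random

          # Toy sketch of Turing's imitation game. The judge chats with two
          # unlabeled parties and guesses which one is the machine; nothing
          # here defines "intelligence", only behavior is compared.
          def imitation_game(judge, human, machine, questions):
              pair = [("human", human), ("machine", machine)]
              random.shuffle(pair)  # hide who is behind label A and label B
              transcripts = {
                  label: [(q, answer(q)) for q in questions]
                  for label, (_, answer) in zip("AB", pair)
              }
              guess = judge(transcripts)  # judge returns "A" or "B"
              kinds = {label: kind for label, (kind, _) in zip("AB", pair)}
              return kinds[guess] == "machine"  # did the judge spot it?

          # Stand-in participants: a judge that cannot tell the transcripts
          # apart is right only half the time, the outcome Turing described.
          questions = ["What is 7 x 8?", "Write a haiku about rain."]
          human = lambda q: "human reply to: " + q
          machine = lambda q: "model reply to: " + q
          judge = lambda transcripts: random.choice(["A", "B"])
          print(imitation_game(judge, human, machine, questions))
          ```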