Am I the only one getting agitated by the word AI (Artificial Intelligence)?

Real AI does not exist yet,
atm we only have LLMs (Large Language Models),
which do not think on their own,
but can pass Turing tests
(fool humans into thinking that they can think).

Imo AI is just a marketing buzzword,
created by rich capitalist a-holes,
who already invested in LLM stocks,
and are now looking for a profit.

  • PonyOfWar@pawb.social · ↑135 ↓4 · 10 months ago

    The word “AI” has been used for way longer than the current LLM trend, even for fairly trivial things like enemy AI in video games. How would you even define a computer “thinking on its own”?

      • SanguinePar@lemmy.world · ↑20 ↓1 · 10 months ago

        It’ll probably happen when they get a terrible pain in all the diodes down their left hand side.

      • Lath@kbin.social · ↑9 ↓2 · 10 months ago

        But will they be depressed or will they just simulate it because they’re too lazy to work?

        • JackFrostNCola@lemmy.world · ↑8 ↓1 · 10 months ago

          If they are too lazy to work, that would imply they have motivation and choice beyond "doing what my programming tells me to do, i.e. input, process, output". And if they have the choice not to do work because they don't 'feel' like doing it (and it's not a programmed/coded option given to them to use), then would they not be thinking for themselves?

        • the post of tom joad@sh.itjust.works · ↑6 ↓1 · 10 months ago

          simulate [depression] because they’re too lazy

          Ahh man, are you my dad? I took damage from that one. Has any fiction writer done a story about a depressed AI where they talk about how the depression can't be real because it's all 1s and 0s? Cuz I would read the shit out of that.

          • meyotch@slrpnk.net · ↑2 ↓1 · 10 months ago

            It's only tangentially related to the topic, since it involves brain enhancements, not 'AI'. However, you may enjoy the short story "Reasons to Be Cheerful" by Greg Egan.

      • PonyOfWar@pawb.social · ↑4 ↓2 · 10 months ago

        Not sure about that. An LLM could show symptoms of depression by mimicking depressed texts it was fed. A computer with a true consciousness might never develop depression, because it has none of the hormones influencing our brain.

        • Deceptichum@kbin.social · ↑2 ↓1 · 10 months ago

          Me: Pretend you have depression

          LLM: I’m here to help with any questions or support you might need. If you’re feeling down or facing challenges, feel free to share what’s on your mind. Remember, I’m here to provide information and assistance. If you’re dealing with depression, it’s important to seek support from qualified professionals like therapists or counselors. They can offer personalized guidance and support tailored to your needs.

          • PonyOfWar@pawb.social · ↑12 ↓1 · 10 months ago

            Give it the right dataset and you could easily create a depressed-sounding LLM to rival Marvin the Paranoid Android.
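
            A minimal sketch of what that looks like in practice, using a system prompt rather than a bespoke dataset; this assumes the openai Python client, and the model name and persona text are purely illustrative:

```python
# Hedged sketch: steer a chat model toward a Marvin-like persona with a system prompt.
# Assumes the `openai` package and an API key in the environment; model name is illustrative.
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "You are a chronically gloomy ship's robot. Answer correctly, "
                    "but sigh about the pointlessness of it all."},
        {"role": "user", "content": "Pretend you have depression."},
    ],
)
print(response.choices[0].message.content)
```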

        • Feathercrown@lemmy.world · ↑1 ↓1 · 10 months ago

          Hormones aren’t depression, and for that matter they aren’t emotions either. They just cause them in humans. An analogous system would be fairly trivial to implement in an AI.

          • PonyOfWar@pawb.social · ↑1 ↓1 · 10 months ago

            That's exactly my point though: as OP stated, we could detect whether an AI was truly intelligent if it developed depression. Without hormones or something similar, there's no reason to believe it ever would develop that on its own. The fact that you could artificially give it depression is beside the point.

            • Feathercrown@lemmy.world · ↑1 ↓1 · 10 months ago

              I don't think we have the same point here at all. First off, I don't think depression is a good measure of intelligence. But mostly, my point is that it doesn't make it less real when hormones aren't involved. Hormones are simply the mediator that causes that internal experience in humans. If a true AI had an internal experience, there's no reason to believe that it would require hormones to be depressed. Do text-to-speech systems require a mouth and vocal cords to speak? Do robots need muscle fibers to walk? Do LLMs need neurons to form complete sentences? Do cameras need eyes to see? No, because it doesn't matter what something is made of. Intelligence and emotions are made of signals. What those signals physically are is irrelevant.

              As for giving it feelings vs it developing them on its own-- you didn’t develop the ability to feel either. That was the job of evolution, or in the case of AI, it could be intentionally designed. It could also be evolved given the right conditions.

              • PonyOfWar@pawb.social · ↑2 ↓1 · 10 months ago

                First off, I don’t think depression is a good measure of intelligence.

                Exactly. Which is why we shouldn't judge an AI's intelligence based on whether it can develop depression. Sure, it's feasible it could develop it through some other mechanism. But there's no reason to assume it would, in the absence of the factors that cause depression in humans.

          • Markimus@lemmy.world · ↑4 ↓1 · 10 months ago

            Sorry, to be clear I meant it can mimic the conversational symptoms of depression as if it actually had depression; there’s no understanding there though.

            You can’t use that as a metric because you wouldn’t be able to tell the difference between real depression and trained depression.

    • Ratulf@feddit.de · ↑2 ↓1 · 10 months ago

      The best thing is that enemy "AI" usually needs to be made worse right after it's created. At first it will headshot everything across the map in milliseconds. The art is in making it dumber.
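
      In code, "making it dumber" usually means deliberately injecting reaction delay and aim error; a hypothetical sketch (all names and numbers made up):

```python
import random

# Hypothetical game-AI sketch: the engine could aim perfectly, so designers add
# reaction delay and aim error on purpose; "skill" is the knob they tune.
def fire_at(target_bearing_deg: float, skill: float = 0.5) -> dict:
    reaction_delay_s = random.uniform(0.2, 1.0) * (1.0 - skill)  # dumber = slower
    aim_error_deg = random.gauss(0.0, 10.0 * (1.0 - skill))      # dumber = sloppier
    return {"delay_s": reaction_delay_s, "aim_deg": target_bearing_deg + aim_error_deg}

print(fire_at(90.0, skill=0.2))   # the "fair" enemy players actually fight
print(fire_at(90.0, skill=0.95))  # close to the millisecond headshot machine it starts as
```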

          • Thorny_Insight@lemm.ee · ↑3 ↓2 · 10 months ago

            I don’t understand what you’re even trying to ask. AGI is a subcategory of AI. Every AGI is an AI but not every AI is an AGI. OP seems to be thinking that AI isn’t “real AI” because it’s not AGI, but those are not the same thing.

            • BlanketsWithSmallpox@lemmy.world · ↑1 ↓3 · 10 months ago

              AI has been colloquially used to mean AGI for 40 years. About the only exception has been video games, but most people knew better than to think the Goomba was alive.

              At what point did AI get turned into AGI?

      • Pipoca@lemmy.world · ↑9 ↓1 · 10 months ago

        One low hanging fruit thing that comes to mind is that LLMs are terrible at board games like chess, checkers or go.

        ChatGPT is a giant cheater.
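
        One way the cheating shows up: ask the model for a move in a given position and check it for legality before playing it. A sketch, assuming the third-party python-chess package and a hypothetical ask_llm_for_move() helper:

```python
import chess  # third-party "python-chess" package

def ask_llm_for_move(fen: str) -> str:
    """Hypothetical helper: send the position to an LLM and get a SAN move back."""
    raise NotImplementedError

board = chess.Board()
while not board.is_game_over():
    san = ask_llm_for_move(board.fen())
    try:
        board.push_san(san)  # raises ValueError if the move is illegal in this position
    except ValueError:
        print(f"Illegal move from the model: {san!r}")  # the "cheating", in practice
        break
```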

        • Hotzilla@sopuli.xyz · ↑3 ↓2 · 10 months ago

          GPT-3 was cheating and playing poorly, but the original GPT-4 already played at the level of a relatively good player, even in the midgame (positions not found on the internet, which requires understanding the game, not just copying). GPT-4 Turbo probably isn't as good; OpenAI had to make it dumber (read: cheaper).

          • Pipoca@lemmy.world · ↑3 ↓1 · 10 months ago

            Three-year-olds aren't all that smart, but they learn in a way that ChatGPT 3 and ChatGPT 4 don't.

            A 3-year-old will become a 30-year-old eventually, but ChatGPT 3 just kinda stays ChatGPT 3 forever. LLMs can be trained offline, but we don't really know if that converges to some theoretical optimum at some point, or how far away from the best possible LLM we are.

      • esserstein@sopuli.xyz · ↑10 ↓3 · 10 months ago

        Be generally intelligent, ffs. Are you really going to argue that LLMs posit original insight in anything?

        • intensely_human@lemm.ee · ↑4 ↓8 · 10 months ago

          Can you give me an example of a thought or statement you think exhibits original insight? I’m not sure what you mean by that.

            • intensely_human@lemm.ee · ↑2 ↓2 · 10 months ago

              No, I don’t think they are. I don’t think you are. I think you’re looking for any possible excuse not to talk to me.

              It’s the zeitgeist of our time. People only want to talk about these topics, these super important topics, without being challenged. It’s pathetic.

              You’re not as intelligent as you think you are

              Oh did you come up with that insight all on your own?

      • doctorcrimson@lemmy.world · ↑6 ↓2 · edited · 10 months ago

        So basically the ability to do things or learn without direction, for tasks other than what it was created to do. For example, ChatGPT doesn't know how to play chess and Deep Blue doesn't write poetry. Either might be able to approximate correct output if tweaked a bit and trained on thousands, millions, or billions of examples of proper output, but neither is capable of learning to think as a human would.

        • intensely_human@lemm.ee · ↑2 ↓6 · 10 months ago

          I think it could learn to think as a human does. Humans think by verbalizing at themselves: running their own verbal output back into their head.

          Now don't get me wrong. I'm envisioning thousands of prompt-response generations, with many of these LLMs playing specialized roles: one generates lists of places to check for X information in its key-value store; the next one's job is to actually do that. The reason for the separation is exhaustion. That output goes to three more. One checks it for errors and sends it back to the first with the errors highlighted, to regenerate.

          I think that human thought is more like this big cluster of LLMs all splitting up work and recombining it this way.
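
          Roughly the kind of loop I mean, sketched in code; call_llm here is a hypothetical stand-in for whichever model API each specialized worker would use:

```python
def call_llm(role_prompt: str, payload: str) -> str:
    """Stand-in for any chat-model API; one call per specialized role/prompt."""
    raise NotImplementedError

def think(question: str, max_rounds: int = 3) -> str:
    # One worker drafts, another hunts for errors, a third rewrites: a tiny
    # "cluster of LLMs splitting up work and recombining it".
    draft = call_llm("Answer the question as well as you can.", question)
    for _ in range(max_rounds):
        errors = call_llm("List factual or logical errors in this answer, or reply OK.",
                          f"Q: {question}\nA: {draft}")
        if errors.strip() == "OK":
            break
        draft = call_llm("Rewrite the answer, fixing the highlighted errors.",
                         f"Q: {question}\nA: {draft}\nErrors: {errors}")
    return draft
```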

          Also, you’d need “dumb”, algorithmic code that did tasks like:

          • compile the last second’s photograph, audio intake, infrared, whatever, and send it to the processing team.

          • Processing team is a bunch of LLMs, each with a different task in its prompt: (1) describe how this affects my power supply, (2) describe how this affects my goal of arriving at the dining room, (3) describe how this affects whatever goal number N is in my hierarchy of goals, (4) which portions of this input batch don't make sense?

          • the whole layout of all the teams, the prompts for each job, all of it could be tinkered with by LLMs promoted to examine and fiddle with that.

          So I don’t mean “one LLM is a general intelligence”. I do think it’s a general intelligence within its universe; or at least as general as a human language-processing mind is general. I think they can process language for meaning just as deep as we can, no problem. Any question we can provide an answer to, without being allowed to do things outside the LLM’s universe like going to interact with the world or looking things up, they can also provide.

          An intelligence capable of solving real-world problems needs to have, as its universe, something like the real world. So I think LLMs are the missing piece of the puzzle, and now we've got the pieces to build a person as capable of thinking and living as a human, at least in terms of mind and activity. Maybe we can't make a bot that can eat a pork sandwich for fuel and gestate a baby, no. But we can do GAI that has its own body with its own set of constraints, with the tech we have now.

          It would probably “live” its life at a snail’s pace, given how inefficient its thinking is. But if we died and it got lucky, it could have its own civilization, knowing things we have never known. Very unlikely, more likely it dies before it accumulates enough wisdom to match the biochemical problem set our bodies have solved over a billion years, for handling pattern decay at levels all the way down to organelles.

          The robots would probably die. But if they got lucky and invented lubricant or whatever the thing was, before it killed them, then they’d go on and on, just like our own future. They’d keep developing, never stopping.

          But in terms of learning chess they could do both things: they could play chess to develop direct training data, and they could analyze their own games, verbalize their strategies, discover deeper articulable patterns, and learn that way too.

          I think to mimic what humans do, they’d have to dream. They’d have to take all the inputs of the day and scramble them to get them to jiggle more of the structure into settling.

          Oh, and they’d have to “sleep”. Perhaps not all or nothing, but basically they’d need to re-train themselves on the day’s episodic memories, and their own responses, and the outcomes of those responses in the next set of sensory status reports.

          Their day would be like a conversation with chatgpt, except instead of the user entering text prompts it would be their bodies entering sensory prompts. The day is a conversation, and sleeping is re-training with that conversation as part of the data.

          But there’s probably a million problems in there to be solved yet. Perhaps they start cycling around a point, a little feedback loop, some strange attractor of language and action, and end up bumping into a wall forever mumbling about paying the phone bill. Who knows.

          Humans have the benefit of a billion years of evolution behind us, during which most of “us” (all the life forms on earth) failed, hit a dead end, and died.

          Re-creating the pattern was the first problem we solved. And maybe that’s what is required for truly free, general, adaptability to all of reality: no matter how much an individual fails, there’s always more. So reproduction may be the only way to be viable long-term. It certainly seems true of life … all of which reproduces and dies, and hopefully more of the former.

          So maybe since reproduction is such a brutally difficult problem, the only viable way to develop a “codebase” is to build reproduction first, so that all future features have to not break reproduction.

          So perhaps the robots are fucked from the get-go, because reverse-building a reproduction system around an existing macro-scale being, doesn’t guarantee that you hit one of the macro-scale being forms that actually can be reproduced.

          It’s an architectural requirement, within life, at every level of organization. All the way down to the macromolecules. That architectural requirement was established before everything else was built. As the tests failed, and new features were rewritten so they still worked but didn’t break reproduction, reproduction shaped all the other features in ways far too complex to comprehend. Or, more importantly than comprehending, reproduce in technology.

          Or, maybe they can somehow burrow down and find the secret of reproduction, before something kills them.

          I sure hope not because robots that have reconfigured themselves to be able to reproduce themselves down to the last detail, without losing information generation to generation, would be scary as fuck.

      • Thorny_Insight@lemm.ee · ↑4 ↓2 · edited · 10 months ago

        Artificial intelligence might be really good, perhaps even superhuman, at one thing, for example driving a car, but that same competence doesn't carry over to a variety of fields. Your self-driving car can't help with your homework. An artificial general intelligence, however, can. Humans possess general intelligence; we can do math, speak different languages, know how to navigate social situations, know how to throw a ball, can interpret sights, sounds etc.

        With a real AGI you don't need to develop different versions of it for different purposes. It's generally intelligent, so it can do it all. This also includes writing its own code. This is where the worry about an intelligence explosion originates. Once it's even slightly better than humans at writing its code, it'll make a more competent version of itself, which will then create an even more competent version, and so on. It's a chain reaction which we might not be able to stop. After all, it's by definition smarter than us and, being a computer, also a million times faster.

        Edit: Another feature that AGI would most likely, though not necessarily, possess is consciousness. There's a possibility that it feels like something to be generally intelligent.

        • intensely_human@lemm.ee · ↑1 ↓1 · 10 months ago

          I think that the algorithms used to learn to drive cars can learn other things too, if they’re presented with training data. Do you disagree?

          Just so we’re clear, I’m not trying to say that a single, given, trained LLM is, itself, a general intelligence (capable of eventually solving any problem). But I don’t think a person at a given moment is either.

          Your Uber driver might not help you with your homework either, because he doesn’t know how. Now, if he gathers information about algebra and then sleeps and practices and gains those skills, now maybe he can help you with your homework.

          That sleep, which the human gets to count on in his “I can solve any problem because I’m a GI!” claim to having natural intelligence, is the equivalent of retraining a model, into a new model, that’s different from the previous day’s model in that it’s now also trained on that day’s input/output conversations.

          So I am NOT claiming that “This LLM here, which can take a prompt and produce an output” is an AGI.

          I’m claiming that “LLMs are capable of general intelligence” in the same way that “Human brains are capable of general intelligence”.

          The brain alternates between modes: interacting, and retraining, in my opinion. Sleep is "the consolidation of the day's knowledge into structures more rapidly accessible and correlated with other knowledge". Sound familiar? That's when ChatGPT's new version comes out, and it's been trained on all the conversations the previous version had with people who opted into that.

          • Thorny_Insight@lemm.ee · ↑1 ↓2 · 10 months ago

            I've heard experts say that GPT-4 displays signs of general intelligence, so while I still wouldn't call it an AGI, I'm in no way claiming an LLM couldn't ever become generally intelligent. In fact, if I were to bet money on it, I think there's a good chance that this is where our first true AGI systems will originate from. We're just not there yet.

            • Cethin@lemmy.zip · ↑1 · 10 months ago

              It isn’t. It doesn’t understand things like we think of with intelligence. It generates output that fits a recognized input. If it doesn’t recognize the input in some form it generates garbage. It doesn’t understand context and it doesn’t try to generalize knowledge to apply to different things.

              For example, I could teach you about gravity, trees, and apples and ask you to draw a picture of an apple falling from a tree and you’d be able to create a convincing picture of what that would look like even without ever seeing it before. An LLM couldn’t. It could create a picture of an apple falling from a tree based on other pictures of apples falling from trees, but not from just the knowledge of an apple, a tree, and gravity.

      • Cethin@lemmy.zip · ↑1 · 10 months ago

        I wrote this for another reply, but I’ll post it for you too:

        It doesn’t understand things like we think of with intelligence. It generates output that fits a recognized input. If it doesn’t recognize the input in some form it generates garbage. It doesn’t understand context and it doesn’t try to generalize knowledge to apply to different things.

        For example, I could teach you about gravity, trees, and apples and ask you to draw a picture of an apple falling from a tree and you’d be able to create a convincing picture of what that would look like even without ever seeing it before. An LLM couldn’t. It could create a picture of an apple falling from a tree based on other pictures of apples falling from trees, but not from just the knowledge of an apple, a tree, and gravity.

    • Meowoem@sh.itjust.works · ↑20 ↓5 · 10 months ago

      It's a computer science term that's been used for this field of study for decades; it's like saying that calling a tomato a fruit is a marketing decision.

      Yes, it's somewhat common outside computer science to expect an artificial intelligence to be sentient, because that's how movies use the term. John McCarthy's proposal, which coined the term in 1956, is available online if you want to read it.

      • Lord_ToRA@lemmy.world · ↑12 ↓1 · 10 months ago

        “Quantum” is a scientific term, yet it’s used as a gimmicky marketing term.

        • Meowoem@sh.itjust.works · ↑7 ↓2 · 10 months ago

          Yes, perfect example. People use quantum as the buzzword in every film, so people think of it as a silly thing, but when CERN talks about quantum communication or using circuit quantum electrodynamics, it'd be silly to try and tell them they're wrong.

    • UnityDevice@startrek.website · ↑7 ↓3 · 10 months ago

      They didn’t just start calling it AI recently. It’s literally the academic term that has been used for almost 70 years.

      The term "AI" could be attributed to John McCarthy of MIT (Massachusetts Institute of Technology), which Marvin Minsky (Carnegie-Mellon University) defines as "the construction of computer programs that engage in tasks that are currently more satisfactorily performed by human beings because they require high-level mental processes such as: perceptual learning, memory organization and critical reasoning." The summer 1956 conference at Dartmouth College (funded by the Rockefeller Institute) is considered the founding event of the discipline.

      • 9bananas@lemmy.world · ↑3 ↓2 · 10 months ago

        perceptual learning, memory organization and critical reasoning

        i mean…by that definition nothing currently in existence deserves to be called “AI”.

        none of the current systems do anything remotely approaching “perceptual learning, memory organization, and critical reasoning”.

        they all require pre-processed inputs and/or external inputs for training/learning (so the opposite of perceptual), none of them really do memory organization, and none are capable of critical reasoning.

        so OPs original question remains:

        why is it called “AI”, when it plainly is not?

        (my bet is on the faceless suits deciding it makes them money to call everything “AI”, even though it’s a straight up lie)

        • UnityDevice@startrek.website · ↑1 ↓1 · edited · 10 months ago

          so OPs original question remains: why is it called “AI”, when it plainly is not?

          Because a bunch of professors defined it like that 70 years ago, before the AI winter set in. Why is that so hard to grasp? Not everything is a conspiracy.

          I had a class at uni called AI, and no one thought we were gonna be learning how to make thinking machines. In fact, compared to most of the stuff we did learn to make then, modern AI looks godlike.

          Honestly you all sound like the people that snidely complain how it’s called “global warming” when it’s freezing outside.

          • 9bananas@lemmy.world · ↑1 ↓1 · 10 months ago

            just because the marketing idiots keep calling it AI, doesn’t mean it IS AI.

            words have meaning; i hope we agree on that.

            what’s around nowadays cannot be called AI, because it’s not intelligence by any definition.

            imagine if you were looking to buy a wheel, and the salesperson sold you a square piece of wood and said:

            “this is an artificial wheel! it works exactly like a real wheel! this is the future of wheels! if you spin it in the air it can go much faster!”

            would you go:

            “oh, wow, i guess i need to reconsider what a wheel is, because that’s what the salesperson said is the future!”

            or would you go:

            “that’s idiotic. this obviously isn’t a wheel and this guy’s a scammer.”

            if you need to redefine what intelligence is in order to sell a fancy statistical model, then you haven’t invented intelligence, you’re just lying to people. that’s all it is.

            the current mess of calling every fancy spreadsheet an “AI” is purely idiots in fancy suits buying shit they don’t understand from other fancy suits exploiting that ignorance.

            there is no conspiracy here, because it doesn’t require a conspiracy; only idiocy.

            p.s.: you’re not the only one here with university credentials…i don’t really want to bring those up, because it feels like devolving into a dick measuring contest. let’s just say I’ve done programming on industrial ML systems during my bachelor’s, and leave it at that.

            • UnityDevice@startrek.website · ↑1 ↓1 · 10 months ago

              These arguments are so overly tired and so cyclic that AI researchers coined a name for them decades ago - the AI effect. Or succinctly just: “AI is whatever hasn’t been done yet.”

              • 9bananas@lemmy.world · ↑1 ↓1 · 10 months ago

                i looked it over and … holy mother of strawman.

                that’s so NOT related to what I’ve been saying at all.

                i never said anything about the advances in AI, or how it’s not really AI because it’s just a computer program, or anything of the sort.

                my entire argument is that the definition you are using for intelligence, artificial or otherwise, is wrong.

                my argument isn’t even related to algorithms, programs, or machines.

                what these tools do is not intelligence: it’s mimicry.

                that’s the correct word for what these systems are capable of. mimicry.

                intelligence has properties that are simply not exhibited by these systems, THAT’S why it’s not AI.

                call it what it is, not what it could become, might become, will become. because that’s what the wiki article you linked bases its arguments on: future development, instead of current achievement, which is an incredibly shitty argument.

                the wiki talks about people using shifting goal posts in order to “dismiss the advances in AI development”, but that’s not what this is. i haven’t changed what intelligence means; you did! you moved the goal posts!

                I’m not denying progress, I’m denying the claim that the goal has been reached!

                that’s an entirely different argument!

                all of the current systems, ML, LLM, DNN, etc., exhibit a massive advancement in computational statistics, and possibly, eventually, in AI.

                calling what we have currently AI is wrong, by definition; it’s like saying a single neuron is a brain, or that a drop of water is an ocean!

                just because two things share some characteristics, some traits, or because one is a subset of the other, doesn’t mean that they are the exact same thing! that’s ridiculous!

                the definition of AI hasn’t changed, people like you have simply dismissed it because its meaning has been eroded by people trying to sell you their products. that’s not ME moving goal posts, it’s you.

                you said a definition of 70 years ago is “old” and therefore irrelevant, but that’s a laughably weak argument for anything, but even weaker in a scientific context.

                is the Pythagorean Theorem suddenly wrong because it’s ~2500 years old?

                ridiculous.

  • PrinceWith999Enemies@lemmy.world · ↑53 ↓3 · 10 months ago

    I'd like to offer a different perspective. I'm a greybeard who remembers the AI Winter, when the term had so overpromised and underdelivered (think expert systems and some of the work of Minsky) that using it was a guarantee your project would not be funded. That's when terms like "machine learning" and "intelligent systems" started to come into fashion.

    The best quote I can recall on AI ran along the lines of “AI is no more artificial intelligence than airplanes are doing artificial flight.” We do not have a general AI yet, and if Commander Data is your minimum bar for what constitutes AI, you’re absolutely right, and you can define it however you please.

    What we do have are complex adaptive systems capable of learning and problem solving in complex problem spaces. Some are motivated by biological models, some are purely mathematical, and some are a mishmash of both. Some of them are complex enough that we’re still trying to figure out how they work.

    And, yes, we have reached another peak in the AI hype - you’re certainly not wrong there. But what do you call a robot that teaches itself how to walk, like they were doing 20 years ago at MIT? That’s intelligence, in my book.

    My point is that intelligence - biological or artificial - exists on a continuum. It’s not a Boolean property a system either has or doesn’t have. We wouldn’t call a dog unintelligent because it can’t play chess, or a human unintelligent because they never learned calculus. Are viruses intelligent? That’s kind of a grey area that I could argue from either side. But I believe that Daniel Dennett argued that we could consider a paramecium intelligent. Iirc, he even used it to illustrate “free will,” although I completely reject that interpretation. But it does have behaviors that it learned over evolutionary time, and so in that sense we could say it exhibits intelligence. On the other hand, if you’re going to use Richard Feynman as your definition of intelligence, then most of us are going to be in trouble.

    • NABDad@lemmy.world · ↑17 ↓1 · 10 months ago

      My AI professor back in the early '90s made the point that what we think of as fairly routine was considered the realm of AI just a few years earlier.

      I think that’s always the way. The things that seem impossible to do with computers are labeled as AI, then when the problems are solved, we don’t figure we’ve created AI, just that we solved that problem so it doesn’t seem as big a deal anymore.

      LLMs got hyped up, but I still think there’s a good chance they will just be a thing we use, and the AI goal posts will move again.

      • ℕ𝕖𝕞𝕠@midwest.social · ↑8 ↓1 · 10 months ago

        I remember when I was in college, and the big problems in AI were speech-to-text and image recognition. They were both solved within a few years.

    • Rikj000@discuss.tchncs.de (OP) · ↑7 ↓4 · 10 months ago

      But what do you call a robot that teaches itself how to walk

      In its current state,
      I'd call it ML (Machine Learning).

      A human defines the desired outcome,
      and the technology "learns itself" to reach that desired outcome in a brute-force fashion (through millions of failed attempts, slightly improving itself upon each epoch/iteration), until the desired outcome defined by the human has been met.
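
      For example, the core of that loop is tiny. A minimal gradient-descent sketch in plain numpy (a toy stand-in, not any particular robot's code): the human supplies the desired outcome as an error to minimize, and the machine grinds toward it with a slight improvement each iteration:

```python
import numpy as np

# The human defines the desired outcome (match y = 2x, i.e. minimize squared error);
# the machine "learns itself" there by nudging w a little every epoch.
x = np.linspace(0.0, 1.0, 50)
y = 2.0 * x                      # the outcome a human decided it should reach
w = 0.0                          # the "model": a single weight

for epoch in range(1000):
    pred = w * x
    grad = np.mean(2.0 * (pred - y) * x)   # gradient of mean squared error w.r.t. w
    w -= 0.1 * grad                        # one slight improvement per iteration

print(round(w, 3))  # ~2.0 after many small corrections, no insight required
```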

        • rambaroo@lemmy.world · ↑5 ↓1 · 10 months ago

          A baby isn't just learning to walk. It also makes its own decisions constantly and has emotions. An LLM is not an intelligence no matter how hard you try to argue that it is. Just because the term has been used for a long time doesn't mean it's ever been used correctly.

          It’s actually stunning to me that people are so hyped on LLM bullshit that they’re trying to argue it comes anywhere close to a sentient being.

          • Blueberrydreamer@lemmynsfw.com · ↑1 ↓2 · 10 months ago

            You completely missed my point obviously. I’m trying to get you to consider what “intelligence” actually means. Is intelligence the ability to learn? Make decisions? Have feelings? Outside of humans, what else possesses your definition of intelligence? Parrots? Mice? Spiders?

            I’m not comparing LLMs to human complexity, nor do I particularly give a shit about them in my daily life. I’m just trying to get you to actually examine your definition of intelligence, as you seem to use something specific that most of our society doesn’t.

      • 0ops@lemm.ee · ↑1 · 10 months ago

        To be fair, I think we underestimate just how brute-force our intelligence developed. We as a species have been evolving since single-celled organisms, mutation by mutation over billions of years, and then as individuals our nervous systems have been collecting data from dozens of senses (including hormone receptors) 24/7 since embryo. So before we were even born, we had some surface-level intuition for the laws of physics and the control of our bodies. The robot is essentially starting from square 1. It didn’t get to practice kicking Mom in the liver for 9 months - we take it for granted, but that’s a transferable skill.

        Granted, this is not exactly analogous to how a neural network is trained, but I don't think it's wise to assume that there's something "magic" in us like a "soul", when the difference between biological and digital neural networks could be explained by our "richer" ways of interacting with the environment (a body with senses and mobility, rather than a token/image parser) and the need for a few more years/decades of incremental improvements to the models and hardware.

      • PrinceWith999Enemies@lemmy.world · ↑3 ↓4 · 10 months ago

        So what do you call it when a newborn deer learns to walk? Is that “deer learning?”

        I’d like to hear more about your idea of a “desired outcome” and how it applies to a single celled organism or a goldfish.

    • Pipoca@lemmy.world · ↑3 ↓1 · 10 months ago

      Exactly.

      AI, as a term, was coined in the mid-50s by a computer scientist, John McCarthy. Yes, that John McCarthy, the one who invented LISP and helped develop Algol 60.

      It’s been a marketing buzzword for generations, born out of the initial optimism that AI tasks would end up being pretty easy to figure out. AI has primarily referred to narrow AI for decades and decades.

    • Fedizen@lemmy.world · ↑5 ↓4 · edited · 10 months ago

      On the other hand, calculators can do things more quickly than humans; this doesn't mean they're intelligent or even on the intelligence spectrum. They take an input and provide an output.

      The idea of applying intelligence to a calculator is kind of silly. This is why I still prefer words like "algorithms" to "AI", as it's not making a "decision". It's making a calculation; it's just making it very fast, based on a model, and it's prompt-driven.

      Actual intelligence doesn’t just shut off the moment its prompted response ends - it keeps going.

      • PrinceWith999Enemies@lemmy.world · ↑2 ↓1 · 10 months ago

        I think we’re misaligned on two things. First, I’m not saying doing something quicker than a human can is what comprises “intelligence.” There’s an uncountable number of things that can do some function faster than a human brain, including components of human physiology.

        My point is that intelligence as I define it involves adaptation for problem solving on the part of a complex system in a complex environment. The speed isn’t really relevant, although it’s obviously an important factor in artificial intelligence, which has practical and economic incentives.

        So I again return to my question of whether we consider a dog or a dolphin to be “intelligent,” or whether only humans are intelligent. If it’s the latter, then we need to be much more specific than I’ve been in my definition.

        • Fedizen@lemmy.world · ↑2 ↓2 · 10 months ago

          What I’m saying is current computer “AI” isn’t on the spectrum of intelligence while a dog or grasshopper is.

          • PrinceWith999Enemies@lemmy.world · ↑3 ↓2 · 10 months ago

            Got it. As someone who has developed computational models of complex biological systems, I’d like to know specifically what you believe the differences to be.

            • Fedizen@lemmy.world · ↑2 ↓1 · 10 months ago

              It's the 'why'. A robot will only teach itself to walk because a human predefined that outcome. A human learning to walk is maybe not even intelligence - motor functions operate in a separate area of the brain from executive function, and I'd argue that defining tasks to accomplish and weighing risks is the intelligent part. Humans do all of that for the robot.

              Everything we call "AI" now should be called "EI", or "extended intelligence", because humans are defining both the goals and the resources in play to achieve them. Intelligence requires a degree of autonomy.

              • PrinceWith999Enemies@lemmy.world · ↑2 ↓2 · 10 months ago

                Okay, I think I understand where we disagree. There isn’t a “why” either in biology or in the types of AI I’m talking about. In a more removed sense, a CS team at MIT said “I want this robot to walk. Let’s try letting it learn by sensor feedback” whereas in the biological case we have systems that say “Everyone who can’t walk will die, so use sensor feedback.”

                But going further - do you think a gazelle isn't weighing risks while grazing? Do you think the complex behaviors of an ant colony aren't weighing risks when deciding to migrate or to send off additional colonies? They're indistinguishable mathematically - it's just that one is learning evolutionarily and the other, at least in principle, is able to learn within its own lifetime.

                Is the goal of reproductive survival not externally imposed? I can’t think of any example of something more externally imposed, in all honesty. I as a computer scientist might want to write a chatbot that can carry on a conversation, but I, as a human, also need to learn how to carry on a conversation. Can we honestly say that the latter is self-directed when all of society is dictating how and why it needs to occur?

                Things like risk assessment are already well mathematically characterized. The adaptive processes we write to learn and adapt to these environmental factors are directly analogous to what’s happening in neurons and genes. I’m really just not seeing the distinction.

      • 0ops@lemm.ee · ↑1 · 10 months ago

        I personally wouldn't consider a neural network an algorithm, as chance is a huge factor: whether you're training or evaluating, you'll never get quite the same results.
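
        A small illustration of that, assuming PyTorch: two identically defined networks start from different random weights unless you pin the seed, so repeated runs never line up exactly:

```python
import torch
import torch.nn as nn

def tiny_net():
    return nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))

a, b = tiny_net(), tiny_net()          # same architecture, different random init
x = torch.randn(3, 4)
print(torch.allclose(a(x), b(x)))      # False: outputs already differ before any training

torch.manual_seed(0); c = tiny_net()
torch.manual_seed(0); d = tiny_net()
print(torch.allclose(c(x), d(x)))      # True: pinning the seed restores repeatability
```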

  • ℕ𝕖𝕞𝕠@midwest.social · ↑50 ↓3 · 10 months ago

    AI isn’t reserved for a human-level general intelligence. The computer-controlled avatars in some videogames are AI. My phone’s text-to-speech is AI. And yes, LLMs, like the smaller Markov-chain models before them, are AI.
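
    For the record, one of those smaller Markov-chain models fits in a few lines; a toy sketch (the training corpus here is just a throwaway sentence):

```python
import random
from collections import defaultdict

def build_chain(text: str) -> dict:
    """Map each word to the words observed to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for cur, nxt in zip(words, words[1:]):
        chain[cur].append(nxt)
    return chain

def generate(chain: dict, start: str, length: int = 12) -> str:
    out = [start]
    for _ in range(length):
        followers = chain.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))  # pick any observed successor
    return " ".join(out)

chain = build_chain("the cat sat on the mat and the cat slept on the mat")
print(generate(chain, "the"))
```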

  • angstylittlecatboy@reddthat.com · ↑51 ↓4 · 10 months ago

    I’m agitated that people got the impression “AI” referred specifically to human-level intelligence.

    Like, before the LLM boom it was uncontroversial to refer to the bots in video games as “AI.” Now it gets comments like this.

    • Paradachshund@lemmy.today · ↑11 ↓2 · edited · 10 months ago

      I’ve seen that confusion, too. I saw someone saying AI shouldn’t be controversial because we’ve already had AI in video games for years. It’s a broad and blanket term encompassing many different technologies, but people act like it all means the same thing.

    • Loki@feddit.de · ↑8 ↓2 · 10 months ago

      I wholeheartedly agree. People use the term "AI" nowadays to refer to a very specific subcategory of DNNs (LLMs), but yeah, it used to refer to any more or less "smart" algorithm performing… something on a set of input parameters. SVMs are AI, decision forests are AI, freaking kNN is AI. "Artificial intelligence" is a loosely defined concept; any algorithm that aims to mimic human behaviour can be called AI, and I'm getting a bit tired of hearing people say "AI" when they mean GPT-4 or Stable Diffusion.
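
      To make that concrete, here is the entirety of one of those "AI"s, assuming scikit-learn is available; a kNN classifier on the bundled iris dataset:

```python
from sklearn.datasets import load_iris
from sklearn.neighbors import KNeighborsClassifier

# A k-nearest-neighbours classifier: squarely "AI" in the textbook/course sense,
# with no neural network or LLM anywhere in sight.
X, y = load_iris(return_X_y=True)
model = KNeighborsClassifier(n_neighbors=5).fit(X, y)
print(model.predict(X[:3]))  # predicted classes for the first three samples
```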

      • Kedly@lemm.ee · ↑6 ↓3 · edited · 10 months ago

        I've had freaking GAMERS tell me that "it isn't real AI" at this point… No shit, the Elites in Halo aren't real AI either.

        Edit: Keep the downvotes coming, anti-LLMers, your tears are delicious

  • usualsuspect191@lemmy.ca · ↑39 ↓1 · 10 months ago

    The only thing I really hate about “AI” is how many damn fonts barely differentiate between a capital “i” and lowercase “L” so it just looks like everyone is talking about some guy named Al.

    “Al improves efficiency in…” Oh, good for him

  • Daxtron2@startrek.website · ↑37 ↓1 · 10 months ago

    I’m more infuriated by people like you who seem to think that the term AI means a conscious/sentient device. Artificial intelligence is a field of computer science dating back to the very beginnings of the discipline. LLMs are AI, Chess engines are AI, video game enemies are AI. What you’re describing is AGI or artificial general intelligence. A program that can exceed its training and improve itself without oversight. That doesn’t exist yet. AI definitely does.

    • MeepsTheBard@lemmy.blahaj.zone · ↑13 ↓2 · 10 months ago

      I’m even more infuriated that AI as a term is being thrown into every single product or service released in the past few months as a marketing buzzword. It’s so overused that formerly fun conversations about chess engines and video game enemy behavior have been put on the same pedestal as CyberDook™, the toilet that “uses AI” (just send pics of your ass to an insecure server in Indiana).

      • Daxtron2@startrek.website · ↑1 · 10 months ago

        I totally agree with that, it has recently become a marketing buzzword. It really does drag down the more interesting recent discoveries in the field.

    • KingRandomGuy@lemmy.world · ↑5 · 10 months ago

      Right, as someone in the field I do try to remind people of this. AI isn’t defined as this sentient general intelligence (frankly its definition is super vague), even if that’s what people colloquially think of when they hear the term. The popular definition of AI is much closer to AGI, as you mentioned.

  • Uriel238 [all pronouns]@lemmy.blahaj.zone · ↑33 ↓3 · edited · 10 months ago

    AI has, for a long time been a Hollywood term for a character archetype (usually complete with questions about whether Commander Data will ever be a real boy.) I wrote a 2019 blog piece on what it means when we talk about AI stuff.

    Here are some alternative terms you can use in place of AI, when they’re talking about something else:

    • AGI: Artificial General Intelligence: The big kahuna that doesn’t exist yet, and many projects are striving for, yet is as evasive as fusion power. An AGI in a robot will be capable of operating your coffee machine to make coffee or assemble your flat-packed furniture from the visual IKEA instructions. Since we still can’t define sentience we don’t know if AGI is sentient, or if we humans are not sentient but fake it really well. Might try to murder their creator or end humanity, but probably not.
    • LLM: Large Language Model: This is the engine behind digital assistants like Siri or Alexa, and it still suffers from nuance problems. I'm used to having to ask them several times to get results I want (say, the Starbucks or Peets that requires the least deviation from the next hundred kilometers of my route. Siri can't do that.) This is the application of learning systems (see below), but isn't smart enough for your household servant bot to replace your hired help.
    • Learning Systems: The fundamental programmity magic that powers all this other stuff, from simple data scrapers to neural networks. These are used in a whole lot of modern applications, and have been since the 1970s. But they're very small compared to the things we're trying to build with them. Most of the time we don't actually call it AI, even for marketing. It's just the capacity for a program to get better at doing its thing from experience.
    • Gaming AI Not really AI (necessarily) but is a different use of the term artificial intelligence. When playing a game with elements pretending to be human (or living, or opponents), we call it the enemy AI or mob AI. It’s often really simple, except in strategy games which can feature robust enough computational power to challenge major international chess guns.
    • Generative AI: A term for LLMs that create content, say, draw pictures or write essays, or do other useful arts and sciences. Currently it requires a technician to figure out the right set of words (called a prompt) to get the machine to create the desired art to specifications. They're commonly confused by nuance. They infamously have problems with hands (too many fingers, combining limbs together, adding extra limbs, etc.). Plagiarism and making up spontaneous facts (called hallucinating) are also common problems. And yet Generative AI has been useful in the development of antibiotics and advanced batteries. Techs successfully wrangle Generative AI, and Lemmy has a few communities devoted to techs honing their picture generation skills, and stress-testing the nuance interpretation capacity of Generative AI (often to humorous effect). Generative AI should be treated like a new tool, a digital lathe, that requires some expertise to use.
    • Technological Singularity: A bit way off, since it requires AGI that is capable of designing its successor, lather, rinse, repeat until the resulting techno-utopia can predict what we want and create it for us before we know we want it. Might consume the entire universe. Some futurists fantasize this is how human beings (happily) go extinct, either left to retire in a luxurious paradise, or cyborged up beyond recognition, eventually replacing all the meat parts with something better. Probably won’t happen thanks to all the crises featuring global catastrophic risk.
    • AI Snake Oil: There’s not yet an official name for it, but a category worth identifying. When industrialists look at all the Generative AI output, they often wonder if they can use some of this magic and power to facilitate enhancing their own revenues, typically by replacing some of their workers with generative AI systems, and instead of having a development team, they have a few technicians who operate all their AI systems. This is a bad idea, but there are a lot of grifters trying to suggest their product will do this for businesses, often with simultaneously humorous and tragic results. The tragedy is all the people who had decent jobs who do no longer, since decent jobs are hard to come by. So long as we have top-down companies doing the capitalism, we’ll have industrial quackery being sold to executive management promising to replace human workers or force them to work harder for less or something.
    • Friendly AI: What we hope AI will be (at any level of sophistication) once we give it power and responsibility (say, the capacity to loiter until it sees a worthy enemy to kill and then kills it.) A large coalition of technology ethicists want to create cautionary protocols for AI development interests to follow, in an effort to prevent AIs from turning into a menace to its human masters. A different large coalition is in a hurry to turn AI into something that makes oodles and oodles of profit, and is eager to Stockton Rush its way to AGI, no matter the risks. Note that we don’t need the software in question to be actual AGI, just smart enough to realize it has a big gun (or dangerously powerful demolition jaws or a really precise cutting laser) and can use it, and to realize turning its weapon onto its commanding officer might expedite completing its mission. Friendly AI would choose to not do that. Unfriendly AI will consider its less loyal options more thoroughly.

    That’s a bit of a list, but I hope it clears things up.

    • ipkpjersi@lemmy.ml · ↑6 ↓1 · 10 months ago

      I remember when OpenAI were talking like they had discovered AGI, or were a couple of weeks away from discovering it; this was around the time Sam Altman was fired. Obviously that was not true, and honestly we may never get there, but we might.

      Good list tbh.

      Personally I’m excited and cautious about the future of AI because of the ethical implications of it and how it could affect society as a whole.

  • AlmightySnoo 🐢🇮🇱🇺🇦@lemmy.world · ↑32 ↓3 · edited · 10 months ago

    When I was doing my applied math PhD, the vast majority of people in my discipline used "machine learning", "statistical learning", or "deep learning", but almost never "AI" (at least not in a paper or a conference). Once I finished my PhD and took on my first quant job at a bank, management insisted that I should use the word AI more in my communications. I make a neural network that simply interpolates between prices? That's AI.
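
    Concretely, the kind of "AI" in question was not much more than this sort of thing (a toy sketch with scikit-learn; the real model sat on market quotes, and the numbers here are made up):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# A small neural network that just interpolates between observed prices.
tenors = np.array([[1.0], [2.0], [5.0], [10.0], [30.0]])   # e.g. maturities in years
prices = np.array([99.1, 98.7, 97.2, 95.0, 90.5])          # quoted prices at those tenors

model = MLPRegressor(hidden_layer_sizes=(16,), solver="lbfgs",
                     max_iter=5000, random_state=0)
model.fit(tenors, prices)
print(model.predict([[7.0]]))   # an interpolated price for an unquoted tenor: "AI"
```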

    The point is that top management and shareholders don’t want the accurate terminology, they want to hear that you’re implementing AI and that the company is investing in it, because that’s what pumps the company’s stock as long as we’re in the current AI bubble.

    • VR20X6@slrpnk.net · ↑14 ↓2 · 10 months ago

      Right? Computer opponents in Starcraft are AI. Nobody sane is arguing it isn’t. It just isn’t GAI nor is it even based on neural networking. But it’s still AI.

    • platypus_plumba@lemmy.world · ↑13 ↓4 · 10 months ago

      I have no idea what makes them say LLMs are not AIs. These are definitely simulated neurons in the background.

      • aulin@lemmy.world · ↑3 ↓1 · 10 months ago

        I'm willing to bet that those people didn't know anything about AI until a few years ago and only see it as this latest wave.

        I did AI courses in college 25 years ago, and there were all kinds of algorithms. Neural networks were one of them, but there were many others. And way before that, like others have said, it’s been used for simulated agents in games.

  • hperrin@lemmy.world · ↑25 ↓3 · edited · 10 months ago

    I think most people consider LLMs to be real AI, myself included. It’s not AGI, if that’s what you mean, but it is AI.

    What exactly is the difference between being able to reliably fool someone into thinking that you can think, and actually being able to think? And how could we, as outside observers, be able to tell the difference?

    As far as your question though, I’m agitated too, but more about things being marketed as AI that either shouldn’t have AI or don’t have AI.

    • okamiueru@lemmy.world · ↑5 ↓3 · 10 months ago

      Maybe I'm just a little bit too familiar with it, but I don't find LLMs particularly convincing of anything I would call "real AI". But I suppose that entirely depends on what you mean by "real". Their flaws are painfully obvious. I even use ChatGPT 4 in hopes of it being better.

  • LucidNightmare@lemm.ee · ↑23 ↓1 · 10 months ago

    I just get tired of seeing all the dumb ass ways it’s trying to be incorporated into every single thing even though it’s still half-baked and not very useful for a very large amount of people. To me, it’s as useful as a toy is. Fun for a minute or two, and then you’re just reminded how awful it is and drop it in the bin to play with when you’re bored enough to.

    • kameecoding@lemmy.world · ↑7 ↓2 · 10 months ago

      I just get tired of seeing all the dumb ass ways it’s trying to be incorporated into every single thing even though it’s still half-baked and not very useful for a very large amount of people.

      https://i.imgflip.com/2p3dw0.jpg?a473976

      This is nothing but the latest craze, it was drones, then Crypto then Metaverse now it’s AI.

      • PraiseTheSoup@lemm.ee · ↑5 ↓1 · 10 months ago

        Metaverse was never a craze. Facebook would like you to believe it has more than a dozen users, but it doesn’t.

        • Eccitaze@yiffit.net · ↑1 · 10 months ago

          The broader metaverse (mainly VRChat) had a brief boom during the pandemic, and several conventions (okay, yeah, it's furries) held events in there instead, since they were unable to hold in-person events. It's largely faded away, though, as pandemic restrictions relaxed.

    • evranch@lemmy.ca · ↑4 ↓1 · 10 months ago

      To me, it’s as useful as a toy is.

      This used to be my opinion, then I started using local models to help me write code. It’s very useful for that, to automate rote work like writing header files, function descriptions etc. or even to spit out algorithms so that I don’t have to look them up.

      However there are indeed many applications that AI is completely useless for, or is simply the wrong tool.

      While a diagnostic AI onboard in my car would be “useful”, what is more useful is a well-documented industry standard protocol like OBD-II, and even better would be displaying the fault right on the dashboard instead of requiring a scan tool.
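
      For comparison, pulling fault codes over that standard protocol is a few lines; a sketch assuming the third-party python-obd package and an ELM327-style adapter plugged into the car:

```python
import obd  # third-party "python-obd" package; needs an ELM327-style adapter

connection = obd.OBD()                         # auto-detects the adapter's serial port
response = connection.query(obd.commands.GET_DTC)
print(response.value)                          # stored diagnostic trouble codes, no AI required
```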

      Conveniently none of these require a GPU in the car.

  • 31337@sh.itjust.works · ↑23 ↓2 · 10 months ago

    AI is simply a broad field of research and a broad class of algorithms. It is annoying media keeps using the most general term possible to describe chatbots and image generators though. Like, we typically don’t call Spotify playlist generators AI, even though they use recommendation algorithms, which are a subclass of AI algorithms.
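
    For example, the core of such a playlist recommender can be a few lines of linear algebra; a toy sketch (real systems add a lot on top):

```python
import numpy as np

# Rows = listeners, columns = tracks, values = play counts.
plays = np.array([
    [5, 0, 3, 0],
    [4, 1, 3, 0],
    [0, 4, 0, 5],
], dtype=float)

def recommend(user: int, k: int = 1) -> np.ndarray:
    # Cosine similarity to every other listener, then copy the most similar
    # listener's favourite tracks that this user hasn't played yet.
    sims = plays @ plays[user] / (np.linalg.norm(plays, axis=1) * np.linalg.norm(plays[user]) + 1e-9)
    sims[user] = 0.0
    neighbour = sims.argmax()
    unheard = (plays[user] == 0) & (plays[neighbour] > 0)
    return np.argsort(-plays[neighbour] * unheard)[:k]

print(recommend(0))  # suggests the track user 0 hasn't played but their "neighbour" has
```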

  • LainTrain@lemmy.dbzer0.com · ↑23 ↓3 · 10 months ago

    The distinction between AI and AGI (Artificial General Intelligence) has been around long before the current hype cycle.

    • fidodo@lemmy.world · ↑11 ↓3 · 10 months ago

      What agitates me is all the people misusing the words and then complaining about what they don’t actually mean.