• oakey66@lemmy.world · ↑9 · 10 hours ago

    I work for a consulting company and they’re truly going off the deep end pushing consultants to sell this miracle solution. They are now doing weekly product demos and all of them are absolutely useless hype grifts. It’s maddening.

  • WalnutLum@lemmy.ml · ↑14 ↓2 · 14 hours ago

    I still think it’s better to refer to LLMs as “stochastic lexical indexes” than AI

  • Technus@lemmy.zip · ↑99 ↓13 · 1 day ago

    These models are nothing more than glorified autocomplete algorithms parroting the responses to questions that already existed in their input.

    They’re completely incapable of critical thought or even basic reasoning. They only seem smart because people tend to ask the same stupid questions over and over.

    If they receive an input that doesn’t have a strong correlation to their training, they just output whatever bullshit comes close, whether it’s true or not. Which makes them truly dangerous.

    And I highly doubt that’ll ever be fixed because the brainrotten corporate middle-manager types that insist on implementing this shit won’t ever want their “state of the art AI chatbot” to answer a customer’s question with “sorry, I don’t know.”

    I can’t wait for this stupid AI craze to eat its own tail.
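
    A toy way to see the “glorified autocomplete” framing is a word-level Markov chain, which can only ever recombine transitions it has already seen. This is a deliberate caricature (real transformers are far more capable than a bigram table), but it makes the failure mode on unseen input concrete:

```python
import random
from collections import defaultdict

# Toy word-level "autocomplete": the model is just a table of which
# words followed which in the training text.
def train(text):
    model = defaultdict(list)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def generate(model, start, length=5, seed=0):
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        options = model.get(out[-1])
        if not options:  # context never seen in training: nothing to say
            break
        out.append(rng.choice(options))
    return " ".join(out)

model = train("the cat sat on the mat the dog sat on the rug")
print(generate(model, "the"))    # recombines only transitions it has seen
print(generate(model, "zebra"))  # unseen input: prints just "zebra"
```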

    • Terrasque@infosec.pub · ↑11 ↓3 · 13 hours ago

      I generally agree with your comment, but not on this part:

      > parroting the responses to questions that already existed in their input.

      They’re quite capable of following instructions over data where neither the instruction nor the data was anywhere in the training data.

      > They’re completely incapable of critical thought or even basic reasoning.

      Critical thought, generally no. Basic reasoning, they’re somewhat capable of. And chain of thought amplifies what little is there.
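
      For what it’s worth, “chain of thought” here just means asking the model to emit intermediate steps before the final answer. A minimal sketch of how such a prompt is assembled (the helper and wording are illustrative, not any particular vendor’s API):

```python
# Illustrative chain-of-thought prompt construction; build_prompt is a
# hypothetical helper, not a real library call.
def build_prompt(question, chain_of_thought=False):
    if chain_of_thought:
        return (f"Q: {question}\n"
                "Think step by step, then give the final answer "
                "on its own line prefixed with 'Final:'.")
    return f"Q: {question}\nAnswer:"

question = "If I have 3 apples and eat 1, how many are left?"
direct = build_prompt(question)
cot = build_prompt(question, chain_of_thought=True)
print(cot)
```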

    • Neshura@bookwormstory.social · ↑21 ↓3 · edited · 23 hours ago

      Last I checked (which was a while ago), “AI” still can’t pass the most basic of tasks, such as “show me a blank image”/“show me a pure white image”. The LLM will output the most intense fever dream possible, but never a simple rectangle filled with #fff-coded pixels. I’m willing to debate the potential of AI again once they manage to do that without those “benchmarks” getting special attention in the training data.
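
      For contrast, the task itself is trivial in ordinary code: a “pure white image” is just every pixel set to #ffffff. Here is a sketch that writes one out in the binary PPM format, which is simple enough to need no image library:

```python
# Write a 64x64 pure white image as a binary PPM (P6) file.
WIDTH, HEIGHT = 64, 64
WHITE = (0xFF, 0xFF, 0xFF)           # i.e. #fff, full-intensity RGB

pixels = [WHITE] * (WIDTH * HEIGHT)  # every pixel identical

with open("blank.ppm", "wb") as f:
    f.write(b"P6 %d %d 255\n" % (WIDTH, HEIGHT))    # PPM header
    f.write(bytes(v for px in pixels for v in px))  # raw RGB data
```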

      • rottingleaf@lemmy.world · ↑4 ↓1 · 14 hours ago

        Because it’s not AI; it’s a sophisticated pattern-separation, recognition, lossy-compression and extrapolation system.

        Artificial intelligence, like any intelligence, has goals and priorities. It has positive and negative reinforcements from real inputs.

        Real AI will be possible when a system can want something and decide something, with that decision based on entropy rather than extrapolation.

        • GBU_28@lemm.ee · ↑15 · 22 hours ago

          I will say the next attempt was interesting, but even less of a good try.

      • Technus@lemmy.zip · ↑18 ↓1 · 23 hours ago

        Problem is, AI companies think they could solve all the current problems with LLMs if they just had more data, so they buy or scrape it from everywhere they can.

        That’s why you hear every day about yet more and more social media companies penning deals with OpenAI. That, and greed, is why Reddit started charging out the ass for API access and killed off third-party apps, because those same APIs could also be used to easily scrape data for LLMs. Why give that data away for free when you can charge a premium for it? Forcing more users onto the official, ad-monetized apps was just a bonus.

        • rottingleaf@lemmy.world · ↑3 · edited · 14 hours ago

          Yep. In cryptography there was a moment when cryptographers realized that the key must be secret and the message should be secret, but the rest of the system need not be, for the social purpose of refining said system. EDIT: And that these must be separate entities.

          These guys basically use lots of data instead of algorithms. Like buying something with oil money instead of money made on construction.

          I just want to see the moment when it all bursts. I’ll be so gleeful. I’ll go and buy an IPA and laugh in every place on the Internet where I see this discussed.

      • gr3q@lemmy.ml · ↑3 · edited · 16 hours ago

        I tested ChatGPT; it needed some nagging, but it could do it. It needed the size, “blank”, and “white” keywords.

        Obviously a lot harder than it should be, but not impossible.

    • rottingleaf@lemmy.world · ↑3 ↓1 · 15 hours ago

      Synthesis versus generation. Yes.

      > And I highly doubt that’ll ever be fixed because the brainrotten corporate middle-manager types that insist on implementing this shit won’t ever want their “state of the art AI chatbot” to answer a customer’s question with “sorry, I don’t know.”

      It’s a tower of Babel IRL.

    • ContrarianTrail@lemm.ee · ↑6 ↓4 · 17 hours ago

      The current AI discussion I’m reading online has eerie similarities to the debate about legalizing cannabis 15 years ago. One side praises it as a solution to all of society’s problems, while the other sees it as the devil’s lettuce. Unsurprisingly, both sides were wrong, and the same will probably apply to AI. It’ll likely turn out that the more dispassionate people in the middle, who are neither strongly for nor against it, will be the ones who had the most accurate view on it.

  • Lettuce eat lettuce@lemmy.ml · ↑39 ↓1 · 1 day ago

    Of course they don’t, logical reasoning isn’t just guessing a word or phrase that comes next.

    As much as some of these tech bros want human thinking and creativity to be reducible to mere pattern recognition, it isn’t, and it never will be.

    But the corpos and Capitalists don’t care, because their whole worldview is based on the idea that humans are only as valuable as the profit they generate for a company.

    They don’t see any value in poetry, or philosophy, or literature, or historical analysis, or visual arts unless it can be patented, trademarked, copyrighted, and sold to consumers at a good markup.

    As if the only difference between Van Gogh’s art and an LLM is the size of the sample data and the efficiency of an algorithm.

    • rottingleaf@lemmy.world · ↑2 · 14 hours ago

      I’m just thinking: 12 years ago there was a lot of talk of politicians and big corpo chiefs being replaceable with a shell script, both as a joke and as an argument for something requiring change.

      One could say the implication was that these people are not needed: engineers can build their replacements.

      In some sense, AI is politicians and big bosses trying to build a replacement for engineers, using the means available to them.

      Maybe they noticed, got pissed, and are trying to enact revenge. Sort of a turf war between domains.

    • leisesprecher@feddit.org · ↑17 ↓2 · 1 day ago

      You don’t have to get all philosophical, since the value of art is almost by definition debatable.

      These models can’t do basic logic. They already fail at this. And that’s actually relevant to corpos if you can suddenly convince a chatbot to reduce your bill by 60% because bears don’t eat mangos or some other nonsensical statement.

      • Lettuce eat lettuce@lemmy.ml · ↑6 ↓2 · 1 day ago

        It’s all connected: the reasons it can’t do basic logical reasoning are the same reasons it can’t replace human art.

        It’s because neither of those activities are mere pattern recognition and statistical inference, which is all LLMs will ever be.

        • 9488fcea02a9@sh.itjust.works · ↑3 ↓1 · edited · 15 hours ago

          LLMs and image-generating models are completely different things. Outputting an image doesn’t require or benefit from reason and logic (other than making the model “understand” the prompt). Drawing a three-headed monkey isn’t “logical” and doesn’t follow “reason”, but that’s OK, because art isn’t about making photorealistic images.

          AI images could totally be useful as a tool in art. “But a computer made it! It’s not art!” It’s the same tired argument we heard about electronic music before.

          But the fediverse seems to have such a hate boner for ANYTHING associated with AI (don’t get me wrong, there is lots to hate, mostly the tech-bro grifting…) that people are unable to see that these can be useful complements to human creativity.

          Here’s another example: when an image contains AI-generated elements, or a video game contains some AI assets, people fly into a rage and want to dismiss the ENTIRE work and throw it all out. Human art doesn’t require 100% human hands to make it. Go look at any famous painting by a Renaissance master. Did you know a lot of these guys had whole workshops of lackeys filling in background details for them? Are we going to throw out all the Raphael and Rembrandt paintings because they had assistance from other uncredited people?

          Same with AI. Why can’t an artist spend MORE time on important details and let AI draw some happy little trees in the background?

          • Lettuce eat lettuce@lemmy.ml · ↑2 · 3 hours ago

            I think you’re reading too deep into what I was saying. Perhaps I wasn’t being clear, my bad if so.

            I’m not against AI tools to assist people’s work. Using them for grammar/spellcheck, code completion and automated testing, artwork help for filling in repetitive background details/textures, automatically removing background details in pictures like dumpsters or people photo bombing, etc.

            What I am against is the grifting, the near religious devotion by tech bros to AI replacing humans in all areas of life, and the fact that the groups and companies controlling almost all of the development of this tech are multi-billion/trillion dollar corpos that don’t make all aspects of their tech open source and are 100% motivated by profit.

            • 9488fcea02a9@sh.itjust.works · ↑2 · edited · 3 hours ago

              Sorry, my comment wasn’t really directed at you specifically, just at the fediverse’s general hate for all things AI.

              Yours was the first that mentioned “art”, which triggered me, lol.

              I think we are actually both on the same page. You have a reasonable view of the whole AI thing; it’s rare on Lemmy/Mastodon.

              • Lettuce eat lettuce@lemmy.ml · ↑1 · 2 hours ago

                Thanks for your response. Yeah, I think the issue isn’t the technology, it’s who controls and owns it.

                I doubt it would be anywhere near as controversial if it were all fully open source and run by public organizations and communities that were interested in bettering the human experience and reducing mundane work vs maximizing profitability.

  • tal@lemmy.today · ↑15 ↓2 · edited · 1 day ago

    > Apple’s study proves that LLM-based AI models are flawed because they cannot reason

    This really isn’t a good title, I think. It was understood that LLM-based models don’t reason, not on their own.

    A better one would be that researchers at Apple proposed a metric that better accounts for reasoning capability, a better sort of “score” for an AI’s capability.

    • Aatube@kbin.melroy.org · ↑7 ↓2 · 1 day ago

      Water isn’t wet, water wets things, and watered things are wet by the wet but the water ain’t wet as it simply causes wet and thus water isn’t truly wet as water is pure water and pure water isn’t wet and water is not wet and water isn’t wet it’s not wet it’s not wet it’s not dry it’s not wet and it’s not wet it is wet it’s wet and you can see it is wet but it doesn’t look like it it’s dry it’s just wet and it’s wet so I just need it and it’s wet it’s not like it’s dry it’s wet it’s wet so it’s not dry but it’s wet it’s not wet so it’s wet it’s not dry and it’s not dry it’s wet and I just want you know how it was just to be careful that I just don’t know what to say I don’t know what you can tell him I just don’t

      • LostXOR@fedia.io · ↑2 · 21 hours ago

          An alternative argument: Water generally makes things “wet” due to it forming hydrogen bonds with said things. Water also readily forms hydrogen bonds with itself. Therefore, water is wet.

  • DarkCloud@lemmy.world · ↑9 ↓5 · 1 day ago

    Do we know how human brains reason? Not really… Do we have an abundance of long chains of reasoning we can use as training data?

    …no.

    So we don’t have the training data to get language models to talk through their reasoning, especially not in novel or personable ways.

    But also, even if we did, that wouldn’t produce ‘thought’ any more than a book about thought can produce thought.

    Thinking is relational. It requires an internal self-awareness. We can’t discuss that in text so thoroughly that a book suddenly becomes conscious.

    This is the idea that “sentience can’t come from semantics”. More is needed than that.

    • A_A@lemmy.world · ↑6 ↓2 · 1 day ago

      I like your comment here; just one reflection:

      > Thinking is relational, it requires an internal self awareness.

      I think it’s like the chicken and the egg: they both come together … one could try to argue that self-awareness comes from thinking, in the fashion of “I think, therefore I am”.