• Echo Dot · 7 months ago

    I disagree with the “always” bit. At some point in the future, AI is actually going to reach the point where we can basically just leave it to it and not have to worry.

    But I do agree that we are not there yet. And that we need to stop pretending that we are.

    Having said that, my company uses AI for a lot of business-critical tasks and we haven’t gone bankrupt yet. Of course, that’s not quite the same as saying that a human wouldn’t have done it better. Perhaps we’re spending more money than we need to because of the AI – who knows?

    • dual_sport_dork 🐧🗡️@lemmy.world · 7 months ago

      …Nnnnno, actually always.

      The models that are in use now (and the subject of the article) are not actual AIs. There is no thinking going on in there. They are statistical language models that are literally incapable of producing anything that was not originally part of their training input data, reassembled and strung together in different ways. These LLMs can’t actually generate new content, they can’t think up anything novel, and of course they can’t actually think at all. They are completely at the mercy of whatever garbage is fed into them, and are by definition not capable of actually “understanding” their output because they are not capable of understanding at all. Because these processes are statistical models, the output is also always dependent to some extent on an internal dice roll, and the possibility of rolling snake eyes is there no matter how clever or well-tuned the algorithm is.
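
      To make the “dice roll” concrete, here is a minimal sketch (plain Python with made-up token probabilities, not any real model’s API) of the sampling step a generative language model performs at each token: even a continuation the model considers very unlikely still gets picked occasionally.

      ```python
      import random

      # Hypothetical next-token distribution for a prompt like “The capital of France is …”.
      # The numbers are invented for illustration; a real model scores thousands of tokens.
      next_token_probs = {
          "Paris": 0.90,   # the continuation we want
          "Lyon":  0.07,   # plausible but wrong
          "Mars":  0.03,   # nonsense, yet still carries nonzero probability
      }

      def sample_token(probs: dict[str, float]) -> str:
          """The dice roll: pick a token in proportion to its probability."""
          tokens = list(probs)
          weights = [probs[t] for t in tokens]
          return random.choices(tokens, weights=weights, k=1)[0]

      # Over many generations the unlikely tokens do show up now and then.
      counts = {t: 0 for t in next_token_probs}
      for _ in range(1000):
          counts[sample_token(next_token_probs)] += 1
      print(counts)   # e.g. {'Paris': 905, 'Lyon': 66, 'Mars': 29}
      ```

      Decoding tricks like lowering the temperature or top-k filtering shrink the weights of the unlikely tokens, but as long as sampling is used at all, the dice roll never fully goes away.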

      This is not to say humans are infallible, either, but at least we are conceptually capable of understanding when, and more importantly how, we got something wrong when called on it. We are also capable of researching sources and weighing the validity of different sources and/or claims, which an LLM is not – not without human intervention, anyway, which loops back to my original point about doing the work yourself in the first place. An LLM cannot determine whether a published sequence of words is bogus. It can of course string together a new combination of words in a syntactically valid manner that can be read and will make sense, but the truth of the constructed text cannot actually be determined programmatically. So in any application where accuracy is necessary, it is downright required to thoroughly review 100% of the machine output to verify that it is factual and correct. For anyone capable of doing that without smoke coming out of their own ears, it is then trivial to take the next step and just reproduce what the machine did for you. Yes, you may as well have just done it yourself. The only real advantage the machine has is that it can type faster than you and it never needs more coffee.

      The only way to cast off these limitations would be to develop an entirely new, real AI model that is genuinely capable of understanding the meaning of both its input and output, and legitimately capable of drawing new conclusions from its own output while also taking into account additional external data when presented with it. It would also need to be able to show its work, so to speak, to demonstrate how it arrived at its conclusions and back up their factual validity. This requires throwing away the current LLMs completely – they are a technological dead end. They’re neat, and capable of fooling some of the people some of the time, but on a mathematical level they’re never capable of achieving internally provable, consistent truth.

      • Balder@lemmy.world · 7 months ago

        I think people don’t yet grasp that LLMs don’t produce any novel output. If they could, then considering the amount of knowledge they hold, they’d be making incredible new connections and insights that humanity never made before. Instead, they can only explain things that were already well documented.