Literally just mainlining marketing material straight into whatever’s left of their rotting brains.

  • Nevoic@lemm.ee · 1 year ago

    I don’t know where everyone is getting these in-depth understandings of how and when sentience arises. To me, it seems plausible that simply increasing processing power for a sufficiently general algorithm produces sentience. I don’t believe in a soul, or that organic matter has special properties that allow sentience to arise.

    I could maybe get behind the idea that LLMs can’t be sentient, but you generalized to all algorithms, as if human thought were somehow qualitatively different from a sufficiently advanced algorithm.

    Even if we find the limit of LLMs and figure out that sentience can’t arise there (I don’t know how this would be proven, but let’s say it was), you’d still somehow have to prove that algorithms in general can’t produce sentience, and that only the magical fairy dust in our souls produces sentience.

    That’s not something that I’ve bought into yet.

    • sooper_dooper_roofer [none/use name]@hexbear.net · edited · 1 year ago

      To me, it seems plausible that simply increasing processing power for a sufficiently general algorithm produces sentience.

      How is that plausible? The human brain has more processing power than a snake’s, which has more than a bacterium’s (equivalent of a) brain, yet those two simpler things still experience consciousness/sentience. Bacteria will look out for their own interests; will chatGPT do that? No, chatGPT is a perfect slave, just like every computer program ever written.

      chatGPT : freshman-year “hello world” program
      human being : amoeba
      (the “:” symbol means the first is being analogized to the second)
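
      For concreteness, the freshman-year program half of that analogy is something like the trivial snippet below (a minimal sketch; Python is my choice purely for illustration):

          # A freshman-year "hello world" program: it runs, prints, and exits.
          # It pursues no interests of its own; it does exactly what it was told.
          print("Hello, world!")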

      A human is a sentience made up of trillions of unicellular consciousnesses.
      ChatGPT is a program made up of trillions of data points. But they’re still just data points, which have no sentience or consciousness.

      Both are something much greater than the sum of their parts, but in a human’s case, those parts were sentient/conscious to begin with. Amoebas will reproduce and kill and eat just like us; our lung cells, nephrons, etc. are basically tiny little specialized amoebas. ChatGPT doesn’t…do anything; it has no will.

    • Dirt_Owl [comrade/them, they/them]@hexbear.net · edited · 1 year ago

      Well, my (admittedly postgrad) work in biology gives me the impression that the brain has a lot more parts to consider than just a language-trained machine. Hell, most living creatures don’t even have language.

      It just screams marketing scam. I’m not against the idea of AI, although from an ethical standpoint I question bringing life into this world for the purpose of using it like a tool. You know, slavery. But I don’t think that’s what they’re doing. I think they’re just trying to sell the next Google AdSense.

      • Nevoic@lemm.ee · edited · 1 year ago

        Notice the distinction in my comments between an LLM and other algorithms; that’s a key point you’re ignoring. The idea other commenters have is that, for some reason, no input other than the magical fairy dust that exists within our souls could produce the output of human thought. I don’t believe this. I think a sufficiently advanced algorithm could arrive at the holistic output of human thought, and it doesn’t have to be an LLM.