Literally just mainlining marketing material straight into whatever’s left of their rotting brains.

    • VILenin [he/him]@hexbear.netOPM
      1 year ago

      Have I lost it

      Well no, owls are smart. But yes, in terms of idiocy, very few go lower than “Silicon Valley techbro”

    • Nevoic@lemm.ee
      1 year ago

      I don’t know where everyone is getting these in-depth understandings of how and when sentience arises. To me, it seems plausible that simply increasing processing power for a sufficiently general algorithm produces sentience. I don’t believe in a soul, or that organic matter has special properties that allow sentience to arise.

      I could maybe get behind the idea that LLMs can’t be sentient, but you generalized to all algorithms. As if human thought is somehow qualitatively different than a sufficiently advanced algorithm.

      Even if we find the limit to LLMs and figure out that sentience can’t arise (I don’t know how this would be proven, but let’s say it was), you’d still somehow have to prove that algorithms can’t produce sentience, and that only the magical fairy dust in our souls produce sentience.

      That’s not something that I’ve bought into yet.

      • sooper_dooper_roofer [none/use name]@hexbear.net
        1 year ago

        To me, it seems plausible that simply increasing processing power for a sufficiently general algorithm produces sentience.

        How is that plausible? The human brain has more processing power than a snake’s, which has more than a bacterium’s (equivalent of a) brain. All of those things are still experiencing consciousness/sentience. Bacteria will look out for their own interests; will chatGPT do that? No, chatGPT is a perfect slave, just like every computer program ever written

        chatGPT : freshman-year “hello world” program
        human being : amoeba
        (the : symbol means the first is being analogized to the second)

        a human is a sentience made up of trillions of unicellular consciousnesses.
        chatGPT is a program made up of trillions of data points. But they’re still just data points, which have no sentience or consciousness.

        Both are something much greater than the sum of their parts, but in a human’s case, those parts were sentient/conscious to begin with. Amoebas will reproduce and kill and eat just like us, our lung cells and nephrons and etc are basically little tiny specialized amoebas. ChatGPT doesn’t…do anything, it has no will

      • Dirt_Owl [comrade/them, they/them]@hexbear.net
        1 year ago

        Well, my (admittedly postgrad) work with biology gives me the impression that the brain has a lot more parts to consider than just a language-trained machine. Hell, most living creatures don’t even have language.

        It just screams of a marketing scam. I’m not against the idea of AI. Although from an ethical standpoint I question bringing life into this world for the purpose of using it like a tool. You know, slavery. But I don’t think this is what they’re doing. I think they’re just trying to sell the next Google AdSense

        • Nevoic@lemm.ee
          1 year ago

          Notice the distinction in my comments between an LLM and other algorithms, that’s a key point that you’re ignoring. The idea that other commenters have is that for some reason there is no input that could produce the output of human thought other than the magical fairy dust that exists within our souls. I don’t believe this. I think a sufficiently advanced input could arrive at the holistic output of human thought. This doesn’t have to be LLMs.

  • AmarkuntheGatherer@lemmygrad.ml
    1 year ago

    The half-serious jokes about sentient AI made by dumb animals on reddit are no closer to the mark than an attempt to piss on the sun. AI can’t be advancing at a pace greater than we think, unless we think it’s not advancing at all. There is no goddamn AI. It’s a language model that uses a stochastic calculation to print out the next word each time. It barely holds on to a few variables at a time; it’s got no grasp on anything, no comprehension, let alone a promise of sentience.
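    The “stochastic calculation to print out the next word” loop described here can be sketched with a toy bigram table. Everything below (the vocabulary, the probabilities) is made up purely for illustration; a real LLM replaces the lookup table with a neural network scoring over a huge vocabulary, but the generation loop has the same shape:

```python
import random

# Toy next-word predictor: a conditional probability table plus a weighted
# random draw. The probabilities here are invented for illustration only.
BIGRAM_PROBS = {
    "the": {"cat": 0.5, "dog": 0.3, "market": 0.2},
    "cat": {"sat": 0.6, "ran": 0.4},
    "dog": {"sat": 0.5, "ran": 0.5},
    "sat": {"down": 1.0},
    "ran": {"away": 1.0},
}

def next_word(word, rng):
    """Sample the next word from the conditional distribution for `word`."""
    dist = BIGRAM_PROBS.get(word)
    if dist is None:
        return None  # no known continuation; stop generating
    words = list(dist)
    weights = [dist[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

def generate(start, max_len=10, seed=0):
    """Repeatedly sample the next word and append it -- the whole 'AI'."""
    rng = random.Random(seed)
    out = [start]
    while len(out) < max_len:
        nxt = next_word(out[-1], rng)
        if nxt is None:
            break
        out.append(nxt)
    return out

print(" ".join(generate("the")))
```

    Nothing in that loop carries goals, memory, or comprehension beyond the last word emitted, which is the commenter’s point.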

    There are plenty of things and people that get to me, but few are as good at it as idiot tech bros, with their delusions and their extremely warped perspective.

  • janny [they/them]@hexbear.net
    1 year ago

    Complete nonsense.

    As we all know, many idol-worshiping peoples have encountered gnomes and through worshiping and offering tribute to these gnomes, these gnomes become the hosts for powerful dark gods who reward their followers generously but are known to be fickle and demanding.

    Silicon Valley is infamous for its bizarre polycules and their Ottoman-harem-esque power struggles. Somehow, Sam Altman offended Aella_Girl’s polycule, which happened to control the board of OpenAI.

    Aella_girl is openly in sexual congress with a series of garden gnomes, which she likely worships and has married, as it is known that gnomes usually demand a wife or your firstborn child.

    Proof: https://cashmeremag.com/reddit-gonewild-aella-gnome-cam-53817/

    She likely used the powers of this dark god to remove Sam Altman from the board, but likely failed to meet its escalating demands or otherwise disappointed this entity, and as a result failed to remove him. It is known that when one disappoints a gnome or stops worshiping it, one’s fortunes fall into a rapid decline, so if that happens to her then we know what likely happened. Either that, or Sam Altman is also in contact with a dark entity of some sort.

  • Monk3brain3 [any, he/him]@hexbear.net
    1 year ago

    Whenever the tech industry needs a boost, some new bullshit comes up: crypto, self-driving, and now AI, which is literally called AI for marketing purposes but is basically an advanced algorithm.

  • MerryChristmas [any]@hexbear.net
    1 year ago

    He may be a sucker, but at least he is engaging with the topic. The sheer lack of curiosity toward so-called “artificial intelligence” here on hexbear is just as frustrating as any of the bazinga takes on reddit. No material analysis, no good-faith discussion, no strategy to liberate these tools in service of the proletariat – just the occasional dunk post and an endless stream of the same snide remarks from the usuals.

    The hexbear party line toward LLMs and similar technologies is straight up reactionary. If we don’t look for ways to utilize, subvert and counter these technologies while they’re still in their infancy then these dorks are going to be the only ones who know how to use them. And if we don’t interact with the underlying philosophical questions concerning sentience and consciousness, those same dorks will also have control of the narrative.

    Are we just content to hand over a new means of production and information warfare to the technophile neo-feudalists of Silicon Valley with zero resistance? Yes, apparently, and it is so much more disappointing than seeing the target demographic of a marketing stunt buy into that marketing stunt.

    • Wheaties [she/her]@hexbear.net
      1 year ago

      Are we just content to hand over a new means of production and information warfare to the technophile neo-feudalists of Silicon Valley with zero resistance? Yes, apparently, and it is so much more disappointing than seeing the target demographic of a marketing stunt buy into that marketing stunt.

      As it stands, the capitalists already have the old means of information warfare – this tech represents an acceleration of existing trends, not the creation of something new. What do you want from this, exactly? Large language models that do predictive text, but with filters installed by communists rather than the PR arm of a company? That won’t be nearly as convincing as just talking and organizing with people in real life.

      Besides, if it turns out there really is a transformational threat, that it represents some weird new means of production, it’s still just a programme on a server. Computers are very, very fragile. I’m just not too worried about it.

    • GreenTeaRedFlag [any]@hexbear.net
      1 year ago

      It’s a glorified speak-n-spell, with not one benefit to the working class. A constant, unrelenting push for the democratization of education will do infinitely more for the working class than learning how best to have a machine write a story. Should this be worked on and researched? Absolutely. Should it be kept within the confines of people who understand thoroughly what it is and what it can and cannot do? Yes. We shouldn’t be using this, for the same reason you don’t use a gag dictionary for a research project. Grow up

      • oregoncom [he/him]@hexbear.net
        1 year ago

        It has potential for making propaganda. Automated astroturfing more sophisticated than what we currently see being done on Reddit.

        • GreenTeaRedFlag [any]@hexbear.net
          1 year ago

          Astroturfing only works when your views tie into the mainstream narrative. Besides, there’s no competing with the people who have access to the best computers, the most coders, and backdoors into every platform. The smarter move is to back up the workers who are having their jobs threatened over this.

    • VILenin [he/him]@hexbear.netOPM
      1 year ago

      Oh my god it’s this post again.

      No, LLMs are not “AI”. No, mocking these people is not “reactionary”. No, cloaking your personal stance in leftist language doesn’t make it any more correct. No, they are not on the verge of developing superhuman AI.

      And if we don’t interact with the underlying philosophical questions concerning sentience and consciousness, those same dorks will also have control of the narrative.

      Have you read like, anything at all in this thread? There is no way you can possibly say no one here is “interacting with the underlying philosophical questions” in good faith. There’s plenty of discussion, you just disagree with it.

      Are we just content to hand over a new means of production and information warfare to the technophile neo-feudalists of Silicon Valley with zero resistance? Yes, apparently, and it is so much more disappointing than seeing the target demographic of a marketing stunt buy into that marketing stunt.

      What the fuck are you talking about? We’re “handing it over to them” because we don’t take their word at face value? Like nobody here has been extremely opposed to the usage of “AI” to undermine working class power? This is bad faith bullshit and you know it.

  • Justice@lemmygrad.ml
    1 year ago

    I said it at the time when chatGPT came along, and I’ll say it now and keep saying it until or unless the android army is built which executes me:

    ChatGPT kinda sucks shit. AI is NOWHERE NEAR what we all (used to?) understand AI to be, i.e. fully sentient, human-equal or better, autonomous, thinking beings.

    I know the Elons and shit have tried (perhaps successfully) to change the meaning of AI to shit like chatGPT. But no, I reject that then, now, and forever. Perhaps people have some “real” argument for different types and stages of AI, and my only preemptive response to them is basically “keep your industry-specific terminology inside your specific industries.” The outside world, normal people, understand AI to be Data from Star Trek or the Terminator. Not a fucking glorified Wikipedia prompt. I think this does need to be straightforwardly stated and their statements rejected because… frankly, they’re full of shit and it’s annoying.