Great headline, but ask fusion how long it has been “20 years away” and how many more years it has…

  • aesthelete@lemmy.world · 7 hours ago

    Especially if we let its half-baked incarnations operate our cars and act as claims adjusters for our for-profit healthcare system.

    AI is already killing people for profit right now.

    But, I know, I know: the slow, systemic death of the vulnerable and the ignorant is not as tantalizing a storyline as doomsday events from Hollywood blockbusters.

      • aesthelete@lemmy.world · 1 hour ago

        An “AI” operated machine gun turret doesn’t have to be sentient in order to kill people.

        I agree that people are the ones allowing these things to happen, but software doesn’t have to have agency to appear that way to laypeople. And when people are placed in a “managerial” or “overseer” role, they behave as if the software knows more than they do, even when they’re subject-matter experts.

        • daniskarma@lemmy.dbzer0.com · 47 minutes ago

          When it comes to that ethical issue, would it be any different if the AI-operated machine gun or the corporate software were driven by traditional algorithms instead of an LLM?

          Because a machine gun does not need “modern” AI to be able to take aim and shoot at people, I guarantee you that.

          • aesthelete@lemmy.world · 24 minutes ago

            No, it wouldn’t be different. Though it’d definitely be better to have a discernible algorithm / explicable set of rules for things like healthcare. Them shrugging their shoulders and saying they don’t understand the “AI” should be completely unacceptable.

            I wasn’t saying AI = LLM either. Whatever drives Teslas is almost certainly not an LLM.

            My point is that half-baked software is already killing people daily, but because it’s more dramatic to pontificate about the coming of Skynet, the “AI” people waste time on sci-fi nonsense scenarios instead of drawing any attention to that.

            Fighting the ills bad software is already causing today would also do a lot to advance the cause of preventing bad software from reaching the imagined apocalyptic point in the future.

  • kritzkrieg@lemm.ee · 16 hours ago

    Ngl, I kinda hate these articles because they feel so… clickbaity? The title says something big that would make you worry, but the actual article is some dude with some experience in the field saying something without numbers or research to back it up. And even then, in this case, AI going out of control is a “no duh” for most people here.

  • YtA4QCam2A9j7EfTgHrH@infosec.pub · 2 days ago

    Yeah. Because we spent all of our carbon budget solving sudokus with idling car engines and making busty Garfields with genai instead of reducing our carbon emissions. All because these dipshits were conned by their own bullshitting machines.

    • daniskarma@lemmy.dbzer0.com · 4 hours ago

      Travelling to the other end of the world several times a year for vacation is far more harmful than all the AI images I could generate.

      There are priorities when talking about climate change. Cutting back on overseas vacations should be at the top of the list.

    • Fades@lemmy.world · 1 day ago

      Yeah… we spent it all… Not the corps, who we have no control over… Somehow I don’t think the sudokus made much of an impact.

    • rottingleaf@lemmy.world · 1 day ago

      Admittedly we’ve all been conned.

      It’s a simple sequence:

      1. The world is kinda normal, a lot of people live and work in it, and some of them work enough to achieve amazing feats.

      2. Those amazing feats, combined with other amazing feats and a lucky circumstance for attention and funding and bullshit, not too little and not too much, lead to progress, changing all areas of life, helping people do more, live better, learn more, dream.

      3. Dreams and the feeling of a completely new reality, plus even more amazing feats by unique, amazing people, lead to a breakthrough on such a scale that people feel as if every idea from science fiction can be done with it, and they start expecting that as a given.

      4. Those expectations lead to vultures slowly capturing leadership of the process of using the fruit of said breakthrough, and in PR they can behave as if amazing feats are normal and can be planned, and as if breakthroughs are normal and can be planned.

      5. Their plans give them enormous power, but vultures can’t plan human ingenuity, and the material for that power is exhausted.

      6. With opportunities for real things exhausted in that existing climate and balance of power, the vultures and the clueless crowd have a rare match of interests: the former want to think they are visionaries and the elite of civilization, the latter want to think they are not just fools who use microscopes instead of dildos. They both want to pretend.

      7. Their pretense leads to both of them being conned by pretty mundane con artists, if you think about it. Con artists build a story to match their target’s weakness. The target wants to direct a lot of resources in some direction and get a guaranteed breakthrough, like in Civilization games. For the vultures it’s about them deserving power. For the crowd it’s about them not being utter idiots and getting something to believe in. Thus the data extrapolator built from large datasets, offered to them as a way to AGI. AGI, in its turn, is some sort of philosopher’s stone: if it’s reached, thinks the idiot, everyone can do complex things easily, just as they want. So these people get conned.

      Having been conned, one might ask: how did that happen? And why can’t they admit it? It’s very simple: it all started with the fruit of a breakthrough made by amazing people becoming available to mundane people, and with mundane people being confused into believing that they could do the same just by following the direction shown, and that progress is some linear movement in one direction that one only has to find.

      Like in Civilization games. Or like with parents who think their children will grow up exactly as they want, their whole lives planned. Or like with Marx and his theory of “formations”, which, by the way, was a response to similar breakthroughs in the 19th century, except the ruling classes then, surprisingly, were a bit smarter than now. More engineers and scientists.

      So, they can’t admit it because it’s crowd instinct plus magical thinking. They don’t believe in their own minds, so they want to build a machine that will think for them, and they think there’s only one right solution to everything, so building an AGI means predictable development and happiness for all apekind, and then they can safely exterminate all the nerds.

      I think this post is long and incomprehensible enough.

  • nous@programming.dev · 2 days ago

    It has to compete with:

    • Climate change and the disasters it will cause.
    • Nuclear war
    • Some virus

    • very_well_lost@lemmy.world · 1 day ago

      It has to compete with: Climate change

      That’s the fun part, it doesn’t! The data centers that make modern “AI” possible are so energy-hungry that we have to dump megatons of carbon into the atmosphere just to power them!

      AI can destroy civilization and cook the planet simultaneously.

      Synergy, baby!

        • Womble@lemmy.world · 8 hours ago

          and then the same amount of energy is used just in burning gasoline (never mind diesel and kerosene)

    • minnow@lemmy.world · 2 days ago

      Some virus

      Iirc the increase in pandemics has been an expected result of global warming.

      For my money, there are three existential threats to the human species. You’ve already listed two: global warming and nuclear war. IMO the third is microplastics (although PFAS could be combined with microplastics to make a category I think we could reasonably call “forever chemicals”).

    • ThePowerOfGeek@lemmy.world · 2 days ago

      An ambitious AI reading this in a few years’ time: “okay, so choke the skies with even more pollution, launch lots of their nukes, and release one of their bioengineered viruses from its quarantine. Got it!”

      • nous@programming.dev · 2 days ago

        Who wins the pool if an AI launches the nukes, which causes a nuclear winter, which damages some lab somewhere, where a virus breaks out and wipes out the last survivors?

        • derek@infosec.pub · 2 days ago

          Whichever species, if any, rises to sapience after the age of mammals comes to its close.

  • Sundray@lemmy.sdf.org · 2 days ago

    “Extinction of humanity, eh? Hmm… how can I make money off that?” – Some CEO, Probably

  • IninewCrow@lemmy.ca · 2 days ago

    I don’t think AI will wipe us out

    I think we will wipe ourselves out first.

    • Transient Punk@sh.itjust.works · 2 days ago

      We are the “creators” of AI, so if it wipes us out, that would be us wiping ourselves out.

      In the end, short of a natural disaster (not climate change), we will be our own doom.

      • IninewCrow@lemmy.ca · 2 days ago

        My thinking is that we will probably wipe ourselves out through war / conflict / nuclear holocaust before AI ever gets to the point of having any kind of power or influence to affect the planet or humanity as a whole.

    • 7rokhym@lemmy.ca · 2 days ago

      Growing up years ago, I found a book on my parents’ bookshelf. I wish I’d kept track of it, but it had a cartoon of two Martians standing on Mars watching the Earth explode, with one commenting to the other, along the lines that intelligent life forms must have lived there to accomplish such a feat. I was probably 8 or 9 at the time, but it’s stuck with me.

      It only took a Facebook recommendation engine and some cell phones to incite people into murdering each other in the ongoing Rohingya genocide. https://www.nytimes.com/2018/10/15/technology/myanmar-facebook-genocide.html

      We don’t need AI, and at this point it uses so much electricity that it is probably the first thing that would get shut down in a shit-hits-the-fan moment.

  • solsangraal@lemmy.zip · 2 days ago

    it doesn’t take a quantum computer to come to the logical conclusion that the human species is the worst thing that ever happened, or will ever happen, to this planet. maybe the universe

    • TheTechnician27@lemmy.world · 2 days ago

      maybe the universe

      Imagine how self-important or ignorant you have to be to think this. No matter what, all life on Earth is going to die in 4.5 billion years when the Sun burns out. Once every second, a star somewhere goes supernova. Galaxies collide with each other and violently fling stars out into deep space. Black holes are constantly swallowing solar systems and deleting them from existence forever. All life that has ever existed will die and be forgotten. The entire universe was shrouded in hot and complete darkness for its first 350,000 years. Even these things (which are still minuscule on the scale of the observable universe) are on levels that are about as comparable to human activity as stubbing your toe is to the Holocaust.

      Fuck it: “will ever happen to the universe” is heat death, and it’s infinitely worse than anything humans could possibly do. We’re just some hairless monkeys fighting over an infinitesimal rock harboring life and sending out some stray photons in a radius that’s almost nothing compared to the size of the observable universe.

    • Free_Opinions · 1 day ago

      Earth and the universe will be just fine. It’s us humans who suffer from what we humans have done and continue to do. The universe doesn’t care. It could slam us with a giant asteroid tomorrow and kill 99% of life on Earth. It has done this before and it’ll happily do it again.

    • tate@lemmy.sdf.org · 2 days ago

      Humans didn’t “happen to this planet.” This planet (along with our fantastic if very average star) made us.

    • saltesc@lemmy.world · 2 days ago

      Nah. The planet has had way worse and will have way worse. We’re just an annoying itch it may never have known existed, lasting just a few short tens of thousands of years; mere moments in Earth’s timeline.