- cross-posted to:
- news@lemmy.world
Great headline, but ask fusion how long it has been "20 years away" and how many more years it still has to go…
Honestly I can’t help but think we kind of deserve it.
Especially if we let its half-baked incarnations operate our cars and act as claims adjusters for our for-profit healthcare system.
AI is already killing people for profit right now.
But, I know, I know, slow, systemic death of the vulnerable and the ignorant is not as tantalizing a storyline as doomsday events from Hollywood blockbusters.
AI is doing nothing. It's not sentient; it's not making conscious decisions.
People are doing it.
An “AI” operated machine gun turret doesn’t have to be sentient in order to kill people.
I agree that people are the ones allowing these things to happen, but software doesn't have to have agency to appear that way to laypeople, and when people are placed in a "managerial" or "overseer" role they behave as if the software knows more than they do, even when they're subject matter experts.
Would it be any different, as far as that ethical issue goes, if the AI-operated machine gun or the corporate software were driven by traditional algorithms instead of an LLM?
Because a machine gun does not need “modern” AI to be able to take aim and shoot at people, I guarantee you that.
No, it wouldn't be different. Though it would definitely be better to have a discernible algorithm / explicable set of rules for things like health care. Them shrugging their shoulders and saying they don't understand the "AI" should be completely unacceptable.
I wasn’t saying AI = LLM either. Whatever drives Teslas is almost certainly not an LLM.
My point is that half-baked software is already killing people daily, but because it's more dramatic to pontificate about the coming of Skynet, the "AI" people waste time on sci-fi nonsense scenarios instead of drawing any attention to that.
Fighting the ills bad software is already causing today would also do a lot to advance the cause of preventing bad software from reaching the imagined apocalyptic point in the future.
Ngl, I kinda hate these articles because they feel so… clickbaity? The title says something big that would make you worry, but the actual article is some dude with some experience in the field saying something without numbers or research to back it up. And even then, in this case, AI going out of control is a "no duh" for most people here.
Yeah. Because we spent all of our carbon budget solving sudokus with idling car engines and making busty Garfields with genai instead of reducing our carbon emissions. All because these dipshits were conned by their own bullshitting machines.
Travelling to the other end of the world several times a year for vacation is far more harmful than all the AI images I could ever generate.
There are priorities when talking about climate change. Cutting back on overseas vacations should be at the top of the list.
Yeah… we spent it all… not the corps, who we have no control over… Somehow I don't think the sudokus made much of an impact.
Too bad our other half went full stupid
Admittedly we’ve all been conned.
It’s a simple sequence:
0. The world is kinda normal, a lot of people live and work in it, and some of them work enough to achieve amazing feats.
1. Those amazing feats, combined with other amazing feats and a lucky circumstance of attention, funding, and bullshit (not too little and not too much), lead to progress, changing all areas of life, helping people do more, live better, learn more, dream.
2. Dreams, the feeling of a completely new reality, and even more amazing feats by unique amazing people lead to a breakthrough of such scale that people feel as if every idea from science fiction can be done with it, and they start expecting that as a given.
3. Those expectations lead to vultures slowly capturing leadership in the process of using the fruit of said breakthrough, and in their PR they can behave as if amazing feats are normal and can be planned, and breakthroughs are normal and can be planned.
4. Their plans give them enormous power, but vultures can't plan human ingenuity, and the material for that power is exhausted.
5. With opportunities for real things exhausted in the existing climate and balance of power, the vultures and the clueless crowd find a rare match of interests: the former want to think they are visionaries and the elite of civilization, the latter want to think they are not just fools who use microscopes instead of dildos; they both want to pretend.
6. Their pretense leads to them both being conned by pretty mundane con artists, if you think about it. Con artists build a story to match their target's weakness. The target wants to direct a lot of resources in some direction and get a guaranteed breakthrough, like in the Civilization games. For the vultures it's about being deserving of power. For the crowd it's about not being utter idiots and having something to believe in. Thus the data extrapolator built from large datasets, offered to them as a path to AGI. AGI, in turn, is a sort of philosopher's stone: once it's reached, thinks an idiot, everyone can do complex things easily, just as they want. So these people get conned.
So they've been conned, and one might ask: how did that happen? And why can't they admit it? It's very simple: it all started with the fruit of a breakthrough achieved by amazing people becoming available to mundane people, and with mundane people being confused into believing that they could do the same just by following the direction shown, and that progress is some linear movement in one direction, one just has to find it.
Like in the Civilization games. Or like parents who think their children will grow up exactly as they want, all of life planned. Or like Marx and his theory of "formations", which, by the way, was a response to similar breakthroughs in the 19th century, except the ruling classes then, surprisingly, were a bit smarter than now. More engineers and scientists.
So, they can't admit it because of crowd instinct plus magical thinking. They don't believe in their own minds, so they want to build a machine that will think instead of them, and they think there's only one right solution to everything, so building an AGI means predictable development and happiness for all apekind, and then they can safely exterminate all nerds.
I think this post is long and incomprehensible enough.
Why did you start from 0???
Habit?
This comment checks out; can confirm this is how conning works.
Line must go up
We banked on Skynet nuking us. Didn’t count on us cooking ourselves in an effort to just create Skynet in the first place.
It has to compete with:
- Climate change and the disasters it will cause.
- Nuclear war
- Some virus
It's not competing with climate change, it's collaborating with it.
It has to compete with: Climate change
That’s the fun part, it doesn’t! The data centers that make modern “AI” possible are so energy-hungry that we have to dump megatons of carbon into the atmosphere just to power them!
AI can destroy civilization and cook the planet simultaneously.
Synergy, baby!
All of the data centers in the US combined use 4% of total electric load.
And roughly the same amount of energy goes into just burning gasoline (never mind diesel and kerosene).
Some virus
Iirc the increase in pandemics has been an expected result of global warming.
For my money, there are three existential threats to the human species. You’ve already listed two: global warming and nuclear war. IMO the third is microplastics (although PFAS could be combined with microplastics to make a category I think we could reasonably call “forever chemicals”)
An ambitious AI reading this in a few years time: “okay, so choke the skies with even more pollution, launch lots of their nukes, and release one of their bioengineered viruses from its quarantine. Got it!”
Who wins the pool if an AI launches the nukes, which causes a nuclear winter, which damages some lab somewhere, where a virus breaks out and wipes out the last survivors?
Whichever species, if any, rises to sapience after the age of mammals comes to its close.
“Extinction of humanity, eh? Hmm… how can I make money off that?” – Some CEO, Probably
By staying the course. Literally. Their desire for profits is what’s causing the extinction of humanity.
Most of them, in fact.
Torment Nexus CEO.
Who knew - the real torment nexus was the corpses we made on the way.
I TOLD YOU NOT TO BUILD IT, DAMNIT!
I don’t think AI will wipe us out
I think we will wipe ourselves out first.
We are the “creators” of AI, so if it wipes us out, that would be us wiping ourselves out.
In the end, short of a natural disaster (not climate change), we will be our own doom.
My thinking is that we will probably wipe ourselves out through war / conflict / nuclear holocaust before AI ever gets to the point of having any kind of power or influence over the planet or humanity as a whole.
Growing up years ago, I found a book on my parents' bookshelf. I wish I'd kept track of it, but it had a cartoon of two Martians standing on Mars watching the Earth explode, and one commented to the other, something along the lines of: intelligent life forms must have lived there to accomplish such a feat. I was probably 8 or 9 at the time, but it's stuck with me.
It only took a Facebook recommendation engine with some cell phones to excite people into murdering each other in the ongoing Rohingya genocide. https://www.nytimes.com/2018/10/15/technology/myanmar-facebook-genocide.html
We don't need AI, and at this point it uses so much electricity that it would probably be the first thing shut down in a shit-hits-the-fan moment.
Thanks godfather
They’re going to have to get in line.
good riddance!
it doesn’t take a quantum computer to come to the logical conclusion that the human species is the worst thing that ever happened, or will ever happen, to this planet. maybe the universe
But for a time there… we almost figured it out
maybe the universe
Imagine how self-important or ignorant you have to be to think this. No matter what, all life on Earth is going to die in 4.5 billion years when the Sun burns out. Once every second, a star somewhere goes supernova. Galaxies collide with each other and violently fling stars out into deep space. Black holes are constantly swallowing solar systems and deleting them from existence forever. All life that has ever existed will die and be forgotten. The entire universe was shrouded in hot and complete darkness for its first 350,000 years. Even these things (which are still minuscule on the scale of the observable universe) are on levels about as comparable to human activity as stubbing your toe is to the Holocaust.
Fuck it: the worst thing that "will ever happen to the universe" is heat death, and it's infinitely worse than anything humans could possibly do. We're just some hairless monkeys fighting over an infinitesimal rock harboring life and sending out some stray photons in a radius that's almost nothing compared to the size of the observable universe.
Probably less than stubbing your toe, getting gently flicked by a leaf maybe?
I’ve long believed that humans are literal cancer
Earth and the universe will be just fine. It’s us humans that suffer from what us humans have done and continue to do. The universe doesn’t care. It could slam us with a giant asteroid tomorrow and kill 99% of life on earth. It has done this before and it’ll happily do it again.
Humans didn’t “happen to this planet.” This planet (along with our fantastic if very average star) made us.
Nah. The planet has had way worse and will have way worse. We're just an annoying itch it may never have known existed; we'll be gone in just a few short tens of thousands of years, mere moments in Earth's timeline.
The planet is just some rock; the life on it is the important thing.