- cross-posted to:
- ai_@lemmy.world
It’s fun to say that artificial intelligence is fake and sucks — but evidence is mounting that it’s real and dangerous
In aggregate, though, and on average, they’re usually right. It’s not impossible that the tech industry’s planned quarter-trillion dollars of spending on infrastructure to support AI next year will never pay off. But it is a signal that they have already seen something real.
The market is incredibly irrational and massive bubbles happen all the time.
The number of users, when all the search engines are forcibly injecting it into every search (and hemorrhaging money to do it)? Just as dumb.
Any thoughts on the paragraph following your excerpt:
The most persuasive way you can demonstrate the reality of AI, though, is to describe how it is already being used today. Not in speculative sci-fi scenarios, but in everyday offices and laboratories and schoolrooms. And not in the ways that you already know — cheating on homework, drawing bad art, polluting the web — but in ones that feel surprising and new.
With that in mind, here are some things that AI has done in 2024.
- Cut customer losses from scams in half through proactive detection, according to the Bank of Australia.
- Preserved some of the 200 endangered Indigenous languages spoken in North America.
- Accelerated drug discovery, offering the possibility of breakthrough protections against antibiotic resistance.
- Detected the presence of tuberculosis by listening to a patient’s voice.
- Reproduced an ALS patient’s lost voice.
- Enabled persecuted Venezuelan journalists to resume delivering the news via digital avatars.
- Pieced together fragments of the epic of Gilgamesh, one of the world’s oldest texts.
- Caused hundreds of thousands of people to develop intimate relationships with chatbots.
- Created engaging and surprisingly natural-sounding podcasts out of PDFs.
- Created poetry that participants in a study say they preferred to human-written poetry in a blind test. (This may be because people prefer bad art to good art, but still.)
did you actually just bring that up as a positive?
The author of the article did. It’s a bit of a stretch, as are the last 2-3 items on the list 🤷🏾‍♂️. The first few are still pretty big.
Mostly hyping up very simple things?
LLMs don’t add anything vs actively scanning for a handful of basic rules and link scanning. Anything referencing a bank that isn’t on a whitelist of legitimate bank domains in a given country would likely be more effective.
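The rule-based approach described here could be sketched roughly like this. A minimal illustration only: the whitelist contents, keyword list, and function name are hypothetical examples, not any bank's actual system, and a real filter would need far more rules.

```python
import re
from urllib.parse import urlparse

# Hypothetical whitelist of legitimate bank domains for one country.
LEGIT_BANK_DOMAINS = {"commbank.com.au", "nab.com.au", "anz.com.au", "westpac.com.au"}

# Very small illustrative keyword set for bank-themed messages.
BANK_KEYWORDS = re.compile(r"\b(bank|account|verify|suspended)\b", re.IGNORECASE)

def looks_like_bank_scam(message: str, links: list[str]) -> bool:
    """Flag messages that mention banking but link outside the whitelist."""
    if not BANK_KEYWORDS.search(message):
        return False
    for link in links:
        host = urlparse(link).hostname or ""
        # Accept the exact domain or any subdomain of a whitelisted bank.
        if not any(host == d or host.endswith("." + d) for d in LEGIT_BANK_DOMAINS):
            return True  # bank-themed message with an off-whitelist link
    return False
```

The point of the sketch is that the core check is just string matching against a curated list, with no model involved.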
The language stuff is the only part they’re actually good at.
Chatbots are genuine dogshit, PDF to podcast is genuine dogshit, poetry is genuine dogshit.
Respectfully, none of the aforementioned examples are simple, or else humans wouldn’t have needed to leverage AI to make such substantial progress in less than 2 years.
They are simple, but they are not easy. Sorting M&Ms according to colour is also a simple task for any human with normal colour vision, but doing it with an Olympic-sized swimming pool full of M&Ms is not easy.
Computers are very good at examining data for patterns, and doing so in exhaustive detail. LLMs can detect patterns of types not visible to previous algorithms (and sometimes screw up royally and detect patterns that aren’t there, or that we want to get rid of even if they exist). That doesn’t make LLMs intelligent, it just makes them good tools for certain purposes. Nearly all of your examples are just applying a pattern that the algorithm has discerned—in bank records, in natural language, in sound samples, or whatever.
As for people being fooled by chatbots, that’s been happening for more than fifty years. The 'bot can be exceedingly primitive, and some people will still believe it’s a person because they want to believe. The fewer obvious mistakes the 'bot makes, the more lonely and vulnerable people will be willing to suspend their disbelief.
Do you have an example of human intelligence that doesn’t rely on pattern recognition through previous experience?
None of the ones that actually work resemble intelligence. They’re basic language skills by a tool that has no path to anything that has anything in common with intelligence. There’s plenty you can do algorithmically if you’re willing to lose a lot of money for every individual usage.
And again, several of them are egregious lies about shit that is actually worse than nothing.
At what point do you think that your opinion on AI trumps the papers and studies of researchers in those fields?
Actual researchers aren’t the ones lying about LLMs. It’s exclusively corporate people and people who have left research for corporate paychecks playing make believe that they resemble intelligence.
That said, the academic research space is also a giant mess and you should also take even peer reviewed papers with a grain of salt, because many can’t be replicated and there is a good deal of actual fraud.
I don’t believe that this is the path to actual AI, but not for any of the reasons stated in the article.
The level of energy consumption alone is eye-watering and unsustainable. A human can eat a banana and function for a while; in contrast, current AI offerings now require dedicated power plants.
lol the entire hope is basically “infinite scaling” despite being way past diminishing returns multiple orders of magnitude ago.
It’s real and it’s dangerous, but it’s also fake and it sucks.
I honestly doubt I would ever pay for this shit. I’ll use it, sure, but I’ve noticed genuinely serious problematic “hallucinations” that shocked the hell out of me, to the point I think it has a hopeless signal/noise problem and could never be reliably accurate and trusted.
I’ve had two useful applications of “AI”.
One is using it to explain programming frameworks, libraries, and language features. In these cases it’s sometimes wrong or outdated, but it’s easy to test and check whether it’s right. Extremely valuable in this case! It basically just sums up what everybody already said, so it’s easier and more on-point than doing a google search.
The other is writing prompts and getting it to make insane videos. In this case all I want is the hallucinations! It makes some stupid insane stuff. But the novelty wears off quick and I just don’t care any more.
I will say the coding shit is good stuff, ironically. But I would still have to run the code and make sure it’s sound. In terms of anything citation-wise tho, it’s completely sus af.
It has straight up made up citations, the kind I could have come up with to escape interrogation during a panned 4th grade presentation to a skeptical audience.
But I would still have to run the code and make sure it’s sound.
Oh I don’t get it to write code for me. I just get it to explain stuff.
I’ve been using AI to troubleshoot/learn after switching from Windows -> Linux 1.5 years ago. It has given me very poor advice occasionally, but it has taught me a lot more valuable info. This is not dissimilar to my experience following tutorials on the internet…
I honestly doubt I would ever pay for this shit.
I understand your perspective. Personally, I think that there’s a chicken/egg situation where free AI versions are a subpar representation that makes skeptics view AI as a whole as over-hyped. OTOH, the people who use the better models experience the benefits first hand, but are seen as AI zealots having the wool pulled over their eyes.
At the moment, no one knows for sure whether the large language models that are now under development will achieve superintelligence and transform the world.
I think that’s pretty much settled by now. Yes, it will transform the world. And no, the current LLMs won’t ever achieve superintelligence. They have some severe limitations by design. And even worse, we’re already putting in more and more data and compute into training, for less and less gain. It seems we could approach a limit soon. I’d say it’s ruled out that the current approach will extend to human-level or even superintelligence territory.
Is superintelligence smarter than all humans? I think where we stand now, LLMs are already smarter than the average human while lagging behind experts w/ specialized knowledge, no?
Source: https://trackingai.org/IQ
Isn’t superintelligence more the ability to think so far beyond human limitations that it might as well be magic? The classic example being inventing faster-than-light drive.
Simply being very intelligent makes it more of an expert system than a super intelligence.
I think superintelligence means smarter than the (single) most intelligent human.
I’ve read these claims, but I’m not convinced. I tested all the ChatGPTs etc, let them write emails for me, summarize, program some software… It’s way faster at generating text/images than me, but I’m sure I’m 40 IQ points more intelligent. Plus it’s kind of narrow what it can do at all. ChatGPT can’t even make me a sandwich or bring coffee. Et cetera. So any comparison with a human has to be on a very small set of tasks anyway, for AI to compete at all.
ChatGPT can’t even make me a sandwich or bring coffee
Well it doesn’t have physical access to reality
it doesn’t have physical access to reality
Which is a severe limitation, isn’t it? First of all it can’t do 99% of what I can do. But I’d also attribute things like being handy to intelligence. And it can’t be handy, since it has no hands. Same for sports/athletics, or driving a race car, which is at least a learned skill. And it has no sense of time passing. Or of which hand movements are part of a process that it has read about. (Operating a coffee machine.) So I’d argue it’s some kind of “book-smart” but not smart in the same way as someone who actually experienced something.
It’s a bit philosophical. But I’m not sure about distinguishing intelligence and being skillful. If it’s enough to have theoretical knowledge, without the ability to apply it… Wouldn’t an encyclopedia or Wikipedia also be superintelligent? I mean they sure store a lot of knowledge, they just can’t do anything with it, since they’re a book or website…
So I’d say intelligence has something to do with applying things, which ChatGPT can’t in a lot of ways. Ultimately I think this all goes together. But I think it’s currently debated whether you need a body to become intelligent or sentient or anything. I just think intelligence isn’t a very useful concept if you don’t need to be able to apply it to tasks. But I’m sure we’ll get to see the merge of robotics and AI in the next years/decades. And that’ll make this intelligence less narrow.
The most dangerous assumption either camp is making is that AI is an end solution, when in fact it’s just a tool. Like the steam engines we invented, it can do a lot more than humans can, but it is only ever useful as a tool that humans use. AI can have value as a tool to digest large chunks of data and produce some form of analysis, providing humans with “another datapoint”, but it’s ultimately up to humans to make the decision based on the available data.
It’s the latest product that everyone will refuse to pay real money for once they figure out how useless and stupid it really is. Same bullshit bubble, new cycle.