- cross-posted to:
- technology@lemmy.world
Yes. Which is why I hope Western big tech keeps leaning into it as hard as they can. A bunch of big companies killing themselves on pure hype is a great way to speed the collapse of Western capitalism and imperialism.
The current crop of AI stuff is mostly good for summarizing a ton of articles at once, translating, and filler for artists. Just the other day I was driving somewhere and had to find a parking spot that was cheap/free, and had duckduckgo’s chatbot summarize 30 enshittified search results for me, which led me to a real free parking space. All the apps and sites for the nearby area are basically worthless for listing the actual pricing in a quick way. As for art, it’s never particularly good at making very complex things unless you get a good roll of the dice or have a dozen extra systems built on top of it, but it’s great at filler.
As for translating, I have some Chinese friends and it’s fun to freak them out by emulating dialects from the regions they’re from with Qwen’s LLMs.
If you’re using it for things like important manuals and so on, you’re essentially just building a search engine that takes natural-language queries and retrieves the relevant passages (RAG-style) based on keywords, which can be done in a much less resource-intensive way than with an LLM.
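Here’s a rough sketch of what I mean: plain keyword/TF-IDF retrieval over manual sections, no LLM anywhere. The manual snippets, the query, and the `retrieve` helper are just made-up examples, and scikit-learn is only one convenient way to do it:

```python
# Minimal keyword-based retrieval over manual sections (no LLM needed).
# Example sections and query are made up for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

manual_sections = [
    "To reset the device, hold the power button for ten seconds.",
    "Replace the filter every three months or when the light turns red.",
    "Error code E4 means the water tank is empty; refill and restart.",
]

# Build TF-IDF vectors for every section once, up front.
vectorizer = TfidfVectorizer(stop_words="english")
doc_vectors = vectorizer.fit_transform(manual_sections)

def retrieve(query: str, top_k: int = 1) -> list[str]:
    """Return the manual sections whose keywords best match the query."""
    query_vector = vectorizer.transform([query])
    scores = cosine_similarity(query_vector, doc_vectors)[0]
    ranked = scores.argsort()[::-1][:top_k]
    return [manual_sections[i] for i in ranked]

print(retrieve("what does error E4 mean"))
# -> ["Error code E4 means the water tank is empty; refill and restart."]
```

This runs on a potato compared to inference on even a small LLM, and it never makes anything up because it can only return text that’s actually in the manual.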
I really think that AI will eventually be modeled after the human brain in a broad sense, and this type of generative AI will be sort of the speech component of the brain. You can’t just babble and hope to be accurate by chance.
> and this type of generative AI will be sort of the speech component of the brain
This is my take as well. It’s good at parsing and predicting text, and it could be an invaluable component of some more comprehensive sort of AI, serving as a translation layer between it and people. More broadly, the same math seems applicable to a wider set of data-parsing tasks, which is neat. For example, machine learning is being used as a sort of OCR for imaging systems that can see into carbonized scrolls, turning inscrutable charts that only a single-digit number of people are capable of interpreting into images that a much larger number of grad students with the relevant language knowledge can study.
But “maybe if we keep making the language predictor bigger it’ll eventually become god by correctly predicting all text all the time!” is an absurd dead end that’s being hyped up by scammers.