Have they tried replacing their workers with AI to save money?
Now that capital has integrated them into their system they will not be allowed to fail. At least for now.
The iron law of “nothing ever happens” necessitates this
spoiler
Nah but for real how much life can this bubble still have left?
A lot, because nothing ever happens.
The iron law of “nothing ever happens”
There are decades where nothing ever happens, and there are weeks where we are so back
Almost like quantitative changes turn into qualitative changes or something
They rupture even
Or Microsoft and Meta will make sure there’s less competition in the future for their own LLMs?
It seems like MS could really fuck them up if they stopped using OpenAI for all their Azure stuff. As of now I don’t think MS relies on their own LLM for anything?
MS abandons basically anything new that doesn’t make them even more absurdly rich instantly these days.
Game pass and Xbox aren’t profitable but maybe they’re trying to change that now.
Yea, but the Xbox division is like two decades old, so they’re probably afraid of shitcanning the whole thing. It’s likely Spencer goes down with the ship though, after failing to achieve anything beyond securing Xbox’s third-place position so firmly and distantly that the only remaining option is Microsoft just buying Sony outright to get the market the hard way. They’ll wait until Kamala replaces Lina Khan at the FTC first though.
They are choosing to create as many layers of separation as possible while integrating the systems directly. This is just how global capital is currently run.
Good, please take the entire fake industry with you
No offense to the AI researchers here (actually maybe only one person lol), but the people who lead/make profit off of/fundraise off of your efforts now are demons
I do think that if OpenAI goes bust that’s gonna trigger a market panic that’s gonna end the hype cycle.
Inshallah, I am fed up with dealing with these charlatans at work
A solution in search of a problem
I just know the AI hype guys in my dept are gonna get promoted and I’ll be the one answering why our Azure costs are astronomical while we have not changed our portfolio size at all lol
deleted by creator
AI hype guys? yeah sorry AI can do that for us now
My guess for the dynamics: OpenAI’s investors panic and force the company to cut costs and raise prices; other AI companies’ investors panic, with the same result; AI becomes prohibitively expensive for a lot of use cases, ending the hype cycle.
I think that’s the best argument for why the tech industry won’t let that happen. All of the big tech stocks are getting a boost from this massive grift.
Worst case scenario one of the tech giants buys them. Then they pare back the expenses and hide it in their balance sheet, and keep everyone thinking AGI is just around the corner.
It’s certainly possible, but I don’t think any of the tech giants are in a position to do that today. Google, Microsoft, and Amazon are in a cost-cutting cycle, and Meta’s C-suite is probably on a short leash after the metaverse boondoggle. Apple is the most likely one because they’re generally behind everyone else across all ML products, but especially LLMs; afaik, though, they’re bracing for drops in sales for the first time in 15 years, so buying OpenAI might be a tough pitch.
I believe that Microsoft owns a huge portion of OpenAI, like just short of a majority stake
yeah I think that’s very plausible
deleted by creator
Inshallah
I hate when people say ‘LLMs have legitimate uses but…’. NO! THEY DON’T! It’s entirely a platform for building scams! It should be burnt to the ground entirely
But then how will people write 20 cover letters a day to keep up with the increasing rate of instant rejections?
Saw a really depressing ad at work the other day where Google was advertising their thing and it was some person asking their LLM to write a letter for their daughter to this athlete bragging about how she’ll break her record one day. They couch it in “here’s a draft” but it’s just so bleak. The idea that a child so excited about doing a sport and dreaming of going to the Olympics and getting a world record can’t just write a bit of a clumsy letter expressing themselves to their hero is just beyond depressing. Writing swill for automated systems that are going to reject you anyway is one thing, but the idea that they think that this is a legitimate use of these models just highlights how obnoxiously out of touch they are.
How do we learn and grow as people and find our own writing voices if we don’t write some of the most cringe shit imaginable when we’re young. I wrote a weird letter to Emma Watson in middle school, nobody ever read it, but it was a learning experience and made me actually have to think about my own feelings. These techbros have to have been grown in vats.
I’ve hesitated to ever write anything about it, thinking it’d come across as too Luddite, but this comment kind of inspired me to flesh out something that’s been simmering in the back of my head ever since LLMs became the latest fad after the NFT boom.
One of the most unnerving things to me about “AI” in the common understanding is that its entire hype cycle is a tacit admission about the status quo. Its professional and academic uses are proof that the pre-“AI” standards were perfunctory hoop-jumping bullshit for joining the professional managerial class, and its “artistic” uses are almost entirely the province of people with zero artistic sensibilities or weirdo porno sickos. All of it belies a deep cynicism, where what could have been heartfelt but clumsy writing by young students, like the kid in your example, is unknowingly robbed of its agency and of the humanizing future of looking back on clunky immature writing as a personal marker of growth. Essays and letters become just hoops to jump through to get whatever degree or accolade you’re seeking, with whatever personal growth those achievements originally represented stripped down to “achieving them is good because it advances your career and earning potential.” Techbros’ most fawning and optimistic pitches for “AI” and “The Singularity” instead read to me as the grimmest, most alienating version of neoliberal “end of history” horseshit, where even art and language themselves are reduced to SEO-marketized, min-maxed rat races.
I hope this doesn’t sound like too much, but I had to get that rant out
Maybe I’ll expand that into something
no, I thought it was on point. Just don’t use Substack or WordPress if you start blogging, we have better options now
I’m barely any better when it comes to tech literacy, what’s the best platform for stuff like that? Is Medium bad? I’ve installed a Linux distro before, but basically I just want to rant and take pictures of my cats
Medium is also “bad,” but it does put your posts out into an algorithm. Kiiiind of. Everyone ends up deleting them. I just use long Mastodon/forks posts (I make a lot of accounts on every ActivityPub server tbqh, just for gimmicks and things), but 5,000 is not a lot of characters, so I also link to Firefish Pages. It’s easy to make a server with a 50,000-character limit for personal use at least, which is a bit better for longform.
I think the best option on ActivityPub is https://writefreely.org/. You can also generally find a way to seamlessly retweet things like Lemmy linkposts or WriteFreely blogs on Mastodon/Firefish or whatever fork
Not very technically savvy at all over here, just pretty online
Thanks, duly noted
Luv 2 scream into the void online
I really like Firefish Pages because they have sexy embeds for Mastodon and its forks’ posts, and WriteFreely as well haha, because it is a Mastodon fork. So I can spam a bunch of tweets over a month and then cite them all in a Page.
They’re all going to make you repost them somewhere else anyways. Once you break into ActivityPub posting it’s very good, just pretty hard to dodge the whole Ukraine net, so it’s not bad in itself. But that’s not even what I’m saying: you can always post from WriteFreely or Lemmy back to Reddit or Twitter or whatever, and it ensures your real post won’t be deleted. You keep control of part of your data on something self-hosted or friend-hosted on a cloud service.
I found Substack’s editor impossible to paste into, BTW, and had other technical issues. But Medium and Substack do kiiind of offer some social media opportunities themselves. They mostly promote dumb crap and Taibbi, respectively.
So the emotional resonance I felt when I asked ChatGPT to write me a song about my experiences still loving the parent that abused me was what to you?
Like the results were objectively artless glurge of course but I needed that in that moment.
deleted by creator
I mean, this is exactly part of the reason they’re going bankrupt, which is good, so you should keep doing it. Companies have been using other forms of AI with some success, whereas LLMs just regurgitate too much random fake information for anyone serious to use professionally.
If it goes under, use open source LLMs which have been steadily improving and almost surpassing proprietary ones.
You should recognize that all the artistic, aesthetic, and emotional work being done here was done by you.
I promise this isn’t true. AI is absolutely a scam in the sense that it’s overhyped as fuck, but LLMs are frequently of practical use to me when doing basically anything technical. They have helped me solve real-life problems that actually materially help others.
As a software developer with close to 30 years of experience, I find it continually astonishing when people say LLMs are useful to them for technical stuff. I already spend too much of my life debugging code I didn’t write. I don’t need to automatically churn out more technical debt to be responsible for!
I don’t work in actual software development, though I do a little of it amongst other work.
When I need to slop out a one-time snippet or short script to do something, which I have to do like 10 times a day, it takes me like 3-20 minutes. ChatGPT 4 does it near-perfectly, takes one minute, and usually teaches me something on the way.
Plus when I need to work out how the fuck GDB works to debug shit, it’s an absolute lifesaver. The manual is very long and remembering all the memory examination commands is hard.
If you’re ever working on code over ~100 lines long, then I basically agree: it requires massive debugging and is poorly factored to the point of being worthless. But for arcane, well-documented commands (i.e. obscure programming languages and Linux tools) and short blasts of code, it’s genuinely incredibly useful on a daily basis.
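For context, here’s a hypothetical example of the kind of ten-minute throwaway task I mean, the sort of thing I’d normally slop out by hand (file names and the column are invented for illustration):

```python
# Throwaway one-time script: collect the unique values of one column
# across several CSV files. Exactly the kind of short, boring snippet
# an LLM can draft in seconds.
import csv

def unique_column_values(paths, column):
    """Return the sorted set of values found in `column` across all CSVs."""
    seen = set()
    for path in paths:
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                seen.add(row[column])
    return sorted(seen)
```

Nothing clever, but multiplied by ten times a day the time savings are real.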
The only software-developer thing ChatGPT does exceptionally well is 101-level answers to general questions/requests, and reading back paraphrased Stack Overflow results with nearly Google levels of reliability.
Where ChatGPT really 100x’s a person’s output is generating shitloads of spam text: automated posting of unique comments that use the post/thread/blog/video and its existing comments as context to appear relevant while still pushing a narrative or shilling a product, or building a proxy so that for every page someone visits on your website, you automatically reword (plagiarize) another site’s article and then add your own ads.
Idk, you probably sound like people did when search engines first started getting popular. If you can’t learn how to get good output from an LLM, you might get left behind. I never use LLMs for large chunks of code, just snippets, and it’s great for that. It’s just like Stack Overflow: don’t blindly copy shit without understanding what’s actually going on. You have fun writing boilerplate code; I’m never going back to hand-writing that shit.
You sound very insecure, kid.
You sound old. Enjoy 30 more years of writing boilerplate.
Here’s a nickel, kid, buy yourself a language that eliminates boilerplate.
It’s good for occasional questions to avoid IRC assholes. It’s also interesting technology and isn’t super helpful now but it would be interesting to see future developments.
LLMs are useful when you don’t know what terms to put in a search engine
well, we solved that exploit by enshittifying search engines
1 trillion more parameters just a trillion more parameters bro i swear we’ll be profitable then bro
Lol
Lmao even
As far as “AI” goes, it’s here to stay. As for OpenAI, they will probably be bought up by one of the big ones, as is usually the case with these companies.
I agree that this tech has lots of legitimate uses, and it’s actually good for the hype cycle to end early so people can get back to figuring out how to apply this stuff where it makes sense. LLMs also managed to suck up all the air in the room, but I expect the real value is going to come from using them as a component in larger systems utilizing different techniques.
Yeah but integrating LLMs with other systems is already happening.
The most recent case is out of DeepMind, where they managed to get a silver-medalist score in the International Mathematics Olympiad (IMO) using an LLM with a formal verification language (Lean), plus synthetic data and reinforcement learning. Although I think they had to manually formalize the problems before feeding them to the algorithm, and it took several days to solve them (except for one that took minutes), so there’s still a lot of space for improvement.
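For a sense of what the formal-verification side looks like, here’s a trivial Lean 4 theorem (not from the DeepMind system, just an illustration). The point is that the kernel mechanically checks every step, so a hallucinated proof fails to compile instead of slipping through:

```lean
-- A trivial theorem: addition on the naturals is commutative.
-- The proof term is checked by Lean's kernel; a wrong proof
-- (e.g. one made up by an LLM) is simply rejected.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

That checkability is what lets an unreliable generator (the LLM) be paired with a reliable verifier.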
Sure, but you can do a lot more than that. You could combine LLMs as part of a bigger system of different kinds of agents, each specializing in different things, similarly to the way different parts of the brain focus on solving different types of problems. Sort of along the lines of what this article is describing: https://archive.ph/odeBU
It’s kind of like how graphics cards are used to optimize specific repeated computations but not used for general computation
Good analogy, it’s a tool for solving a fairly narrow problem in a particular domain.
deleted by creator
Nature is healing.
and nothing of value is at risk of being lost
good
big holders with insider information change to short positions to make money during the crash by putting their shares up as collateral to investment banks in exchange for loans; the bubble bursts; smaller investors lose money; the government steps in and bails them out because they’re “too big to fail”; the torment nexus continues humming along
Is this because LLMs don’t do anything good or useful? They get very simple questions wrong, will fabricate nonsense out of thin air, and even at their most useful they’re a conversational version of a Google search. I haven’t seen a single thing they do that a person would need or want.
Maybe it could be neat in some kind of procedurally generated video game? But even that would be worse than something written by human writers. What is an LLM even for?
I think there are legitimate uses for this tech, but they’re pretty niche and difficult to monetize in practice. For most jobs, correctness matters, and if the system can’t be guaranteed to produce reasonably correct results then it’s not really improving productivity in a meaningful way.
I find this stuff is great in cases where you already have domain knowledge, and maybe you want to bounce ideas off and the output it generates can stimulate an idea in your head. Whether it understands what it’s outputting really doesn’t matter in this scenario. It also works reasonably well as a coding assistant, where it can generate code that points you in the right direction, and it can be faster to do that than googling.
We’ll probably see some niches where LLMs can be pretty helpful, but their capabilities are incredibly oversold at the moment.
AI is great for asking questions, not answering them
We might eventually get to a point where LLMs are a useful conversational user interface for systems that are actually intrinsically useful, like expert systems, but it will still be hard to justify their energy cost for such a trivial benefit.
The costs of operation aren’t intrinsic though. There is a lot of progress in bringing computational costs down already, and I imagine we’ll see a lot more of that happening going forward. Here’s one example of a new technique resulting in cost reductions of over 85% https://lmsys.org/blog/2024-07-01-routellm/
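The RouteLLM idea, very roughly, is a router that sends a query to a cheap model unless it looks hard enough to need the strong one. A toy sketch of that routing step (the scoring heuristic, model names, and threshold are all invented for illustration; the real system trains a router on preference data):

```python
# Toy sketch of LLM routing: estimate query difficulty, then pick a
# cheap or expensive model. Everything here is a made-up stand-in for
# the learned router described in the RouteLLM post.

def difficulty_score(query: str) -> float:
    """Crude heuristic stand-in: longer, question-dense queries
    are treated as harder. Returns a value in [0, 1]."""
    words = len(query.split())
    questions = query.count("?")
    return min(1.0, words / 50 + 0.2 * questions)

def route(query: str, threshold: float = 0.5) -> str:
    """Pick a (hypothetical) model name based on estimated difficulty."""
    if difficulty_score(query) >= threshold:
        return "big-expensive-model"
    return "small-cheap-model"
```

Since most traffic is easy, even a mediocre router can cut the average cost per query dramatically.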
deleted by creator
Is there a single LLM you can’t game into apologizing for saying something factual then correcting itself with a completely made up result lol
I’ve been thinking AI generated dialogue in Animal Crossing would be an improvement over the 2020 game.
To clarify I’m not wanting the writers at the animal crossing factory to be replaced with ChatGPT. Having conversations that are generated in real time in addition to the animals’ normal dialogue just sounds like fun. Also I want them to be catty again because I like drama.
Nah, something about AI dialogue is just soulless and dull. Instantly uninteresting. Same reason I don’t read the AI slop being published in ebooks. It has no authorial intent and no personality. It isn’t even trying to entertain me. It’s worse than reading marketing emails because at least those have a purpose.
It depends on the training data. Once you use all data available, you get the most average output possible. If you limit your training data you can partially avoid the soullessness, but it’s more unhinged and buggy.
Make the villagers petty assholes like the original game and RETVRN the crabby personality type and it would be an improvement.
The LLM characters will send you on a quest, and then you’ll go do it, and then you’ll come back and they won’t know you did it and won’t be able to give you a reward, because the game doesn’t know the LLM made up a quest, and doesn’t have a way to detect that you completed the thing that was made up.
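That failure mode comes down to the game engine only being able to track quests it actually knows about. A minimal sketch of the guard a designer would need, with all quest IDs and rewards invented for illustration:

```python
# Sketch of the problem: game state tracks quests from a fixed catalog,
# so a quest the dialogue model invents can never be completed or
# rewarded. Catalog contents are hypothetical.

QUEST_CATALOG = {
    "catch_three_fish": {"reward": "bells"},
    "deliver_letter": {"reward": "furniture"},
}

def accept_quest(quest_id: str, active_quests: set) -> bool:
    """Track a quest only if the engine knows it; anything the LLM
    made up is rejected, because there's no way to detect completion."""
    if quest_id in QUEST_CATALOG:
        active_quests.add(quest_id)
        return True
    return False  # hallucinated quest: unfulfillable, no reward possible
```

The alternative, constraining the LLM to only ever reference catalog quests, is exactly the kind of glue work that makes “just add AI dialogue” harder than it sounds.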
Cory Doctorow has a good write-up on the reverse centaur problem and why there’s no foreseeable way that LLMs could be profitable. Because of the way they’re error-prone, LLMs are really only suited to low-stakes uses, and there are lots of low-stakes, low-value uses people have found for them. But they need high-value use-cases to be profitable, and all of the high-value use-cases anyone has identified for them are also high-stakes.
Thank you. This is a good article. Are there any good book length things I could read on this topic?
I do not know. Perhaps Artificial Intelligence: A Guide for Thinking Humans by Melanie Mitchell.
it’s because chatgpt didn’t say enough slurs
The thing that isn’t really mentioned here is that the largest OpenAI investor is Microsoft, and most of the money OpenAI spends is on Microsoft cloud services. So basically OpenAI is an internal Microsoft capital investment. They won’t let it fail, but they might kill it if it loses money for long enough.
Right, if they keep bleeding money on it, then the shareholders will eventually start demanding they shut it down.
MS has more leeway for loss leading capital investment than maybe any other company on earth
Question is for how long though. BRICS just introduced their own settlement currency, and I expect that the dollar based economy will start shrinking rapidly as a result of that.
These tech giants are extremely fundamental to the operation of the empire. The US will battle to keep them alive as long as possible.
I agree, but the specific flavor of snake oil can change over time. For example, LLMs are replacing the whole blockchain bonanza, and we might see some new miracle tech show up in a couple of years as the shine starts wearing off the AI hype. If economic conditions get tougher, there will also be more pressure to focus on tech that actually generates revenue.