An elegant way to make someone feel ashamed for using many smart words, ha-ha.
I know that it’s supposed to be a universal function approximator hypothetically, but I think the gap between hypothesis and practice is very large, and we’re dumping a lot of resources into filling in the canyon (chucking more data at the problem) when we could be building a bridge (creating specialized models that work together).
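(As an aside, a minimal sketch of what “universal function approximator” means in the hypothetical sense: even a single hidden layer of fixed random tanh units, with only the output weights fit by least squares, can approximate a smooth target like sin(x). All the sizes and the ridge constant here are arbitrary illustrative choices, not anything from a real model.)

```python
# Universal-approximation toy: random tanh features + linear readout.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 200)[:, None]  # inputs, shape (200, 1)
y = np.sin(x).ravel()                         # target function

# Hidden layer with random, untrained weights and biases.
W = rng.normal(size=(1, 100))
b = rng.normal(size=100)
H = np.tanh(x @ W + b)                        # feature matrix, shape (200, 100)

# Fit only the output weights, in closed form (tiny ridge term for stability).
coef = np.linalg.solve(H.T @ H + 1e-6 * np.eye(100), H.T @ y)
pred = H @ coef

print(np.max(np.abs(pred - y)))               # worst-case approximation error
```

The point of the toy is exactly the hypothesis/practice gap: existence of a good approximation is cheap to demonstrate, finding it efficiently at scale is the expensive part.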
The metaphor is correct; I think it’s some social mechanism making them choose a brute force solution first. Spending more resources to achieve the same result would usually be a downside, but if it’s a resource otherwise not in demand, one that only the stronger parties, like corporations and governments, possess in sufficient amounts, then it may be an upside for someone by shifting the balance.
And LLMs appear good enough to make captcha-solving machines, machines for faking proof images or video, fraudulent chatbots, or machines that predict someone’s (or some crowd’s) responses well enough to play them. So I’d say commercially they already are successful.
Now that I’ve used a whole lot of cheap metaphor on someone who casually dropped ‘syllogism’ into a conversation, I’m feeling like a freshman in a grad level class. I’ll admit I’m nowhere near up to date on specific models and bleeding edge techniques.
We-ell, it’s just hard to describe the idea without using that word, but I haven’t even finished my BS yet (lots of procrastinating, running away, and long interruptions), and the only bit of up-to-date knowledge I had was what DeepSeek prints when answering, so.
An elegant way to make someone feel ashamed for using many smart words, ha-ha.
Unintentional, I assure you.
I think it’s some social mechanism making them choose a brute force solution first.
I feel like it’s simpler than that. Ye olde “when all you have is a hammer, everything’s a nail”. Or in this case, when you’ve built the most complex hammer in history, you want everything to be a nail.
So I’d say commercially they already are successful.
Definitely. I’ll never write another cover letter. In their use-case, they’re solid.
but I haven’t even finished my BS yet
Currently working on my masters after being in industry for a decade. The paper is nice, but actually applying the knowledge is poorly taught (IMHO, YMMV), and being willing to learn independently has served me better than my BS in EE.