- cross-posted to:
- luddite@lemmy.ml
Companies are training LLMs on all the data they can find, but that data is not the world; it is discourse about the world. The rank-and-file developers at these companies, in their naivete, do not see that distinction…So, as these LLMs become increasingly but asymptotically fluent, tantalizingly close to accuracy but ultimately incomplete, developers complain that they are short on data. They have their general-purpose computer program, and if only they had the entire world in data form to shove into it, then it would be complete.
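One way to picture "increasingly but asymptotically fluent": empirical scaling-law fits typically model loss as an irreducible floor plus a term that decays with data, so each extra order of magnitude of data helps less and never removes the floor. A minimal Python sketch with made-up constants, not fitted values:

```python
# Illustrative only: a scaling-law-style curve, loss = floor + B / data**beta.
# The constants below are invented for demonstration.
floor, B, beta = 1.7, 400.0, 0.3

for tokens in (1e6, 1e9, 1e12):
    loss = floor + B / tokens**beta
    print(f"{tokens:.0e} tokens -> loss {loss:.2f}")

# 1e+06 tokens -> loss 8.04
# 1e+09 tokens -> loss 2.50
# 1e+12 tokens -> loss 1.80  (creeping toward the 1.7 floor, never reaching it)
```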
I mean, the same can be said for your own senses. “You” are actually just a couple of kilograms of pink jelly sealed in a bone shell, being stimulated by nerves that lead out to who knows what. Most likely your senses are giving you a reasonably accurate view of the world outside, but who can really tell for sure?
Don’t let the perfect be the enemy of the good. If an LLM is able to get asymptotically close to accurate (for whatever measure of “accurate” you happen to be using), then that’s really super darned good. Probably even good enough. You wouldn’t throw out an AI translator or artist or writer just because there’s one human out there who’s “better” than it.
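To make the "whatever measure of 'accurate'" point concrete: the same outputs can score very differently under different measures. A toy Python sketch (invented example strings, deliberately simplistic metrics):

```python
# Toy illustration: "accuracy" depends entirely on the measure chosen.
references  = ["the cat sat on the mat", "it is raining today"]
predictions = ["a cat sat on a mat",     "it is raining today"]

def exact_match(refs, preds):
    # Fraction of predictions identical to their reference: a harsh measure.
    return sum(r == p for r, p in zip(refs, preds)) / len(refs)

def token_overlap(refs, preds):
    # Average fraction of reference tokens found in the prediction: a lenient measure.
    scores = []
    for r, p in zip(refs, preds):
        ref_tokens, pred_tokens = set(r.split()), set(p.split())
        scores.append(len(ref_tokens & pred_tokens) / len(ref_tokens))
    return sum(scores) / len(scores)

print(exact_match(references, predictions))    # 0.5
print(token_overlap(references, predictions))  # 0.9
```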
AI doesn’t need to be “complete” for it to be incredible.
I do sorta see the argument. We don’t fully see with our eyes; we also see with our mind. So the LLM is learning about how we see the world. Like A Scanner Darkly, hehe.
Not really sure how big of a deal this is, or even whether it’s a problem at all. I need to know what a recipe subjectively tastes like, not the raw data of what it physically is.