- cross-posted to:
- becomeme@sh.itjust.works
The big AI models are running out of training data (and it turns out most of that data was produced by fools and the intentionally obtuse), so this might mark the end of rapid model advancement.
Better hardware isn’t going to change anything except scale if the underlying approach stays the same. LLMs are not intelligent; they’re just guessing which words are statistically most likely to satisfy the user’s request, based on their training data. They don’t actually understand what they’re saying.
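To make the “statistically most likely” point concrete, here’s a toy sketch. Real LLMs use neural networks over subword tokens rather than word counts, but the underlying objective is the same: predict the next token from patterns in the training data.

```python
from collections import Counter, defaultdict

# A toy corpus standing in for "training data"
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count, for each word, which words follow it (a bigram model)
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    # Return the word that most often followed `word` in training --
    # no understanding involved, just frequency
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat": it follows "the" in 2 of 4 cases
```

The model here has no idea what a cat is; it only knows which word tends to come next. Scaling up the data and the hardware makes the guesses better, not the mechanism different.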