This is an automated archive.
The original was posted on /r/singularity by /u/scorpion0511 on 2024-01-24 11:47:24+00:00.
I’ve had a shift in perspective on the trajectory of AI. I think the current path of improving language models might be a dead end.
Instead, exploring avenues like Karl Friston’s concept of Active Inference, which mimics the way our brains function, would be much more interesting. This approach could be a game changer: it might require only as much energy as our brains while constantly updating its “model of the world” based on experience. LLMs don’t have a model of the world.
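To make the “constantly updating world model” idea concrete, here is a heavily simplified sketch of an agent that maintains a probabilistic belief over hidden world states and revises it with each observation. This is just Bayesian belief updating for illustration, not Friston’s full Active Inference framework (which also covers action selection via expected free energy); all the state and observation names are made up for the example.

```python
# Minimal sketch: an agent keeps a probabilistic "model of the world"
# (a belief over hidden states) and updates it after each observation
# using Bayes' rule. Illustrative only — not Friston's actual framework.

def normalize(p):
    total = sum(p.values())
    return {s: v / total for s, v in p.items()}

def update_belief(prior, likelihood, observation):
    """Posterior over hidden states after seeing one observation."""
    posterior = {s: prior[s] * likelihood[s][observation] for s in prior}
    return normalize(posterior)

# Two hypothetical hidden states; the agent starts maximally uncertain.
belief = {"rainy": 0.5, "sunny": 0.5}

# P(observation | state): the agent's generative model of the world.
likelihood = {
    "rainy": {"wet_ground": 0.9, "dry_ground": 0.1},
    "sunny": {"wet_ground": 0.2, "dry_ground": 0.8},
}

# Each new experience sharpens the model of the world.
for obs in ["wet_ground", "wet_ground", "dry_ground"]:
    belief = update_belief(belief, likelihood, obs)

print(belief)  # belief has shifted toward "rainy" after mostly wet observations
```

The point of the sketch is the loop at the bottom: unlike a frozen pretrained model, the agent’s beliefs change with every interaction.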
I recommend checking out “Why Greatness Cannot Be Planned” by Kenneth Stanley, where he argues that sticking rigidly to ambitious objectives, such as achieving AGI, may mislead us. Stanley suggests focusing instead on paths of novelty and interestingness, citing historical examples like the vacuum tube, which was not originally invented for computing. The key is to explore diverse possibilities, as some might lead to unexpected but groundbreaking advancements.