Hey everyone,
Last night a rabbit hole took me to an unexpected place. Gwern's site is one of the most comprehensive personal websites I've come across, dense with vocabulary, statistics, programming, and a strong bias towards the Haskell language.
I found myself sucked into the site and got lost in what seemed like an endless stream of text. One page about nootropics would lead to another about properly designing scientific studies, then to the dual n-back method for increasing IQ, and countless more.
In a previous post I mentioned an interest in LLM inference, but at the time I only nebulously wanted an AI tool better than GPT-4. Some of you so kindly brought Georgi Gerganov's llama.cpp to my attention, and I have finally adjusted to Linux well enough to feel comfortable installing software in myriad ways.
Returning to the topic at hand, I have an itching feeling that some sort of ML model could be made to serve as a brain extension. I can see it being used for picking up and maintaining technical vocabulary for an interview in pharmaceuticals, chip manufacturing, chemical processing, 3D manufacturing, and legions of other fields.
I imagine it could be an absolute super tool for learning. I mean going beyond the usual Ebbinghaus forgetting curve that Anki seeks to ameliorate, combined with active recall, memory-palace techniques, and Anthony Metivier's lovely curated channel. He led me to Gwern in the first place. His story is very inspiring and I would recommend his book "The Victorious Mind: How to Master Memory, Meditation and Mental Well-Being".
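For anyone who hasn't looked under Anki's hood: the scheduling it descends from is the SuperMemo SM-2 algorithm, which is tiny. Here's a minimal sketch (the 0-5 quality rating, the 1- and 6-day seeding, and the 1.3 ease floor follow the classic SM-2 description; Anki's actual scheduler differs in the details):

```python
# Minimal SM-2-style spaced-repetition scheduler.
# q is a 0-5 self-rating of recall quality (5 = perfect, <3 = failed).
def next_interval(interval_days, ease, q):
    """Return (new_interval_days, new_ease) after one review."""
    if q < 3:                      # failed recall: reset the interval
        return 1, ease
    # Classic SM-2 ease-factor update, clamped at the 1.3 floor.
    ease = max(1.3, ease + 0.1 - (5 - q) * (0.08 + (5 - q) * 0.02))
    if interval_days == 0:
        return 1, ease             # first successful review
    if interval_days == 1:
        return 6, ease             # second successful review
    return round(interval_days * ease), ease
```

The interesting part for a "Brain Buddy" is that everything here is hand-tuned constants; a model that learned per-card, per-person forgetting rates from review history could plausibly do much better.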
I think this is a great place to begin discussion on this topic. Given that we are neurodivergent and many of you have also resonated with the monotropic brain theory, what are your thoughts about having a "Brain Inference" or "Brain Buddy"? Here are a few questions to chew on:
- What features should a program with the Brain Buddy incorporated into it offer? Could this be analogous to Org mode in Emacs? Some sort of fzf-esque program to globally search for something you vaguely recall?
- How would we design it? What facets do we need to consider?
- What training sets could we use? How do we clean up the set to ensure the model doesn’t digest falsehoods?
- How large do the models need to be in terms of parameters?
- How much computing power would we need?
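On the fzf-esque question: a first approximation doesn't even need an ML model. Python's stdlib `difflib` can already fuzzy-match a half-remembered phrase against a note corpus. The `notes` list and `vague_recall` helper below are hypothetical, just to show the shape of the idea (a real tool would index your actual files, and an embedding model would catch matches that share meaning but no characters):

```python
import difflib

# Hypothetical note store; a real tool would index your files.
notes = [
    "ebbinghaus forgetting curve and spaced repetition",
    "llama.cpp quantization formats",
    "memory palace technique for technical vocabulary",
]

def vague_recall(query, corpus, n=3, cutoff=0.3):
    """Return the corpus entries that best match a half-remembered phrase."""
    return difflib.get_close_matches(query, corpus, n=n, cutoff=cutoff)

# A misspelled, partial query still surfaces the right note.
print(vague_recall("forgeting curve", notes))
```

Character-level matching like this only goes so far; the "Brain Buddy" version would presumably swap `difflib` for semantic similarity over embeddings, which loops back to the training-set and model-size questions above.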