Eerke Boiten, Professor of Cyber Security at De Montfort University Leicester, explains his belief that current AI should not be used for serious applications.
I really think that AI will eventually be modeled after the human brain in a broad sense, and this type of generative AI will be sort of the speech component of the brain. You can’t just babble and hope to be accurate by chance.
"…and this type of generative AI will be sort of the speech component of the brain…"
This is my take as well. It's good at parsing and predicting text, and it could be an invaluable component of some more comprehensive AI, serving as a translation layer between that system and people. More broadly, the same math seems applicable to a wider set of data-parsing tasks, which is neat. For example, machine learning is being used to build a sort of OCR for imaging systems that can see into carbonized scrolls, turning inscrutable scans that only a single-digit number of people can interpret into images that a much larger pool of grad students with the relevant language training can study.
But "maybe if we keep making the language predictor bigger, it'll eventually become god by correctly predicting all text all the time!" is an absurd dead end that's being hyped up by scammers.
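The "predicting text" framing above can be illustrated with a toy bigram model. Modern generative models use neural networks over tokens rather than word counts, but the core objective is the same: given the text so far, predict the most likely next piece. A minimal sketch (the corpus and function names here are invented for illustration, not anyone's actual system):

```python
from collections import Counter, defaultdict

# Tiny invented corpus used to build bigram (word-pair) counts.
corpus = "the cat sat on the mat the cat ate the fish".split()

# For each word, count which words were seen immediately after it.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word`, or None if unseen."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
```

The point of the toy: the model has no notion of truth, only of what text tends to follow other text, which is exactly why "babbling" can sound fluent while being accurate only by chance.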