Summary
Geoffrey Hinton, the “Godfather of AI,” warns of a 10-20% chance that AI could cause human extinction within 30 years, citing the rapid pace of development and the likelihood of creating systems more intelligent than humans.
Hinton emphasized that such AI could evade human control, likening humans to toddlers in comparison with advanced AI.
He called for urgent government regulation, arguing that corporate profit motives alone cannot ensure safety.
This stance contrasts with fellow AI expert Yann LeCun, who believes AI could save humanity rather than threaten it.
Contrary to what the reporting suggests, these views do not actually contradict each other: LeCun argues that AI could save humanity, while Hinton argues that, absent strong government regulation, AI companies will not develop it safely. Both claims can be true at once.