My opinion is that the uncertain part is whether AGI itself is possible. If it is, it will not only lead to ASI (maybe even quickly), but that ASI will be misaligned no matter how prepared we are. Humans aren’t very aligned among themselves; how can we expect a totally alien intelligence to be?
And by the way, we are not prepared at all. AI safety is an inconvenience for AI companies, if it hasn’t been completely shelved in favor of profit.