My assumptions are based in science; yours are paranoia. You are also making far more assumptions than you’re letting on. Your assumption, for example, that an AI could perform substantially more energy-efficiently than an energy-constrained, highly optimized processor… Yikes.
The efficient coding hypothesis is also relevant to these exact AIs, because it’s being used to justify research into neural networks, and emulating brain function is a huge goal.
My arguments have nothing to do with substrate dependence, but with observable energy issues. You, meanwhile, are just vaguely waving your hands and saying that in a long time, maybe, somehow, magically, an AI could exist that poses all these problems you’re paranoid about.
Also, human-level AI is categorically, observably much, much slower than organoids. Thirty minutes per prompt at human power levels proves that the speed issue is only “solved” by dumping more energy at the problem.
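To make that trade-off concrete, here is a back-of-envelope sketch. Only the 30-minutes-per-prompt figure comes from the claim above; the ~20 W brain power and 700 W accelerator draw are illustrative assumptions on my part, not measurements:

```python
# Hedged sketch of the energy/latency trade described above.
# The 30-minutes-per-prompt figure is from the comment; the 20 W
# brain and 700 W GPU numbers are illustrative assumptions.
BRAIN_POWER_W = 20.0      # typical estimate for a human brain
PROMPT_TIME_S = 30 * 60   # 30 minutes, per the claim above

energy_per_prompt_j = BRAIN_POWER_W * PROMPT_TIME_S  # joules
print(f"Energy per prompt: {energy_per_prompt_j / 1000:.0f} kJ "
      f"(~{energy_per_prompt_j / 3600:.0f} Wh)")

# Same energy budget at assumed datacenter power: the answer comes
# back fast only because more power is dumped at the problem.
GPU_POWER_W = 700.0
print(f"Same energy at {GPU_POWER_W:.0f} W lasts "
      f"{energy_per_prompt_j / GPU_POWER_W:.0f} s")
```

Under those assumptions the energy per prompt is identical either way (~10 Wh); the fast answer is bought with a 35× higher power draw, which is exactly the point.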
You need to do more legwork than just saying “substrate independence” (addressed by my organoid thought experiment) or “maybe we get Clarke tech or some other crazy technology right,” which is wholly unconvincing. Maybe we make a The Thing organism in 5 years and none of this matters, ooooh no! Except of course that’s also thermodynamically impossible. Maybe we set the atmosphere on fire, maybe the LHC suddenly creates a black hole after all, maybe NIF achieves fusion but it turns out to summon demons from hell who eat souls.
Waving your hands and being paranoid about something when you have essentially no reason to expect it’s even feasible, if it’s possible at all, is just absurd.
If human brains can do it, then it can be done, and it can probably be done better too. I don’t see any reason to assume our brains are the most energy-efficient computer that can be created.
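For what it’s worth, there’s a quantitative way to frame that intuition. The sketch below compares rough, commonly cited estimates (a ~20 W brain and ~10^15 synaptic operations per second, both assumptions on my part, not figures from this thread) against the Landauer limit, the thermodynamic floor for erasing one bit:

```python
# Hedged back-of-envelope: how far is the brain from the
# thermodynamic floor on computation? All figures are rough,
# commonly cited estimates.
import math

k_B = 1.380649e-23               # Boltzmann constant, J/K
T = 310.0                        # body temperature, K
landauer = k_B * T * math.log(2) # min. energy to erase one bit, J

brain_power_w = 20.0             # typical human-brain estimate
synaptic_ops_per_s = 1e15        # very rough estimate
joules_per_op = brain_power_w / synaptic_ops_per_s

print(f"Landauer limit: {landauer:.2e} J/bit")
print(f"Brain, per op:  {joules_per_op:.2e} J")
print(f"Headroom:       ~{joules_per_op / landauer:.0e}x")
# => roughly 6-7 orders of magnitude between the brain and the
#    thermodynamic limit, which is the basis for "it can probably
#    be done better".
```

If those estimates are even roughly right, the brain sits millions of times above the hard physical floor, so “most energy-efficient computer possible” is a strong claim to defend.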
Also, my original argument is not about whether AGI can be created, but whether we could keep it in a box.
Anyway, it’s just a philosophical thought experiment, and I’d rather discuss it with someone who’s a bit less of a dick.