Robots in China are increasingly used for tasks like patrolling underground tunnels and waste management in challenging environments. China produced 73% of the world's industrial robots last year.
If we had the ability to make a robot that had opinions about what work it did, we’d also have the ability to make it love that work beyond anything else.
This is reminding me of that part of The Hitchhiker's Guide to the Galaxy with the talking cow that was bred to want to be eaten.
It’s a fun and interesting ethical dilemma, and also very funny.
I don’t think that actually follows. We’d certainly be in a position to practice and refine the process, but not necessarily to guarantee it’s working until we hand the Robot House Elf (apologies for the Harry Potter reference, but I think it apt) a pistol and turn our backs. Also, ethics.
Luckily the simple solution is to just not make a sapient slave race, robotic or otherwise. Sapience isn’t necessary for an autonomous tool.
My point of view is that in humans, and animals in general, emotions are largely a chemical response in the brain. We might not fully understand how those processes interact, but we do know that certain chemicals cause certain feelings, and that there is a mechanism in the brain governing emotion that is notionally separate from our capacity for rational thought.
I am willing to concede that a sufficiently complex computer might, accidentally or in a way not entirely within our understanding, develop a capacity for rational thought that we would recognise as sapience, or at least animal-level intelligence.
I am not willing to concede that such a computer could develop a capacity for what we recognise as emotion without it being intentionally designed in, and if it’s designed, we’d necessarily understand it. This happens in fiction a lot because it’s more compelling to anthropomorphise AI characters, not because it’s particularly plausible.