Robots in China are increasingly used for tasks like patrolling underground tunnels and waste management in challenging environments. China produced 73% of the world's industrial robots last year.
Well at least they aren’t strapping guns to them like the US is.
Still, is it strange that I don’t like the idea of making a whole class of robots to do our dirty work? I know I’m probably just anthropomorphising, but it feels wrong.
If we had the ability to make a robot that had opinions about what work it did, we’d also have the ability to make it love that work beyond anything else.
This is reminding me of that part of Hitchhiker’s Guide where there’s a talking cow that was bred to love being cooked.
It’s a fun and interesting ethical dilemma, and also very funny.
I don’t think that actually follows. We’d certainly be in a position to practice and refine the process, but not necessarily be able to guarantee that it’s working until we give the (apologies for the Harry Potter reference, but I think it apt) Robot House Elf a pistol and turn our backs. Also, ethics.
Luckily the simple solution is to just not make a sapient slave race, robotic or otherwise. Sapience isn’t necessary for an autonomous tool.
My view is that in humans, and in animals generally, emotions are largely a chemical response in the brain. We might not fully understand how those processes interact, but we do know that certain chemicals cause certain feelings, and that there is a mechanism in the brain governing emotion that is notionally separate from our capacity for rational thought.
I am willing to concede that a sufficiently complex computer might, accidentally or in a way not entirely within our understanding, develop a capacity for rational thought that we would recognise as sapience, or at least animal-level intelligence.
I am not willing to concede that such a computer could develop a capacity for what we recognise as emotion without it being intentionally designed in, and if it’s designed in, we would necessarily have to understand it. This happens a lot in fiction because it’s more compelling to anthropomorphise AI characters, not because it’s particularly plausible.
They don’t have feelings; they’re machines.
I know, I know. But I keep thinking: what if they do? We used to say the same thing about non-human animals, and that would be a horrible mistake to repeat. Of course I never felt this way about toasters lol. It’s weird how that works. Probably something about humans relating more to things the more human-like they appear.
Probably watched too many movies too lol.
Can confirm they do not have feelings, and if they do then we’ve made an absolutely massive breakthrough in gen AI.
Well that makes me feel better. I wish they’d stop calling it AI to be honest.
I really really agree lol, please just call it robotics lol, begging them
I talk to my toaster and all the other objects in my house. Sometimes I thank them for the work that they do. I want them to like me.
Reminds me of Steve the pencil from Community: https://youtu.be/uAwSVOlOgH8
And even if these robots were in any form “conscious”, it would be on the level of an insect, probably a lot lower.
Better than some guy’s foot rotting off due to direct contact with sewer sludge.
That is true
Workplace injuries and work-related disability will plummet if we can replace laborious work with robots.
Well put.
Just never give them the ability to suffer, please, tech bros. I don’t want to just create a new miserable underclass.
Now that I think about it, robots shouldn’t resemble humans or animals, as they’d certainly be anthropomorphised otherwise.