• Nacarbac [any]@hexbear.net

    I don’t think that actually follows. We’d certainly be in a position to practice and refine the process, but not necessarily guarantee that it’s working until we give the (apologies for the Harry Potter reference, but I think it apt) Robot House Elf a pistol and turn around. Also, ethics.

    Luckily the simple solution is to just not make a sapient slave race, robotic or otherwise. Sapience isn’t necessary for an autonomous tool.

    • Saeculum [he/him, comrade/them]@hexbear.net

      My point of view is that in humans and animals in general, emotions are largely a chemical response in the brain. We might not fully understand how those processes interact, but we do know that certain chemicals cause certain feelings, and that there is a mechanism in the brain governing emotion that is notionally separate from our ability for rational thought.

      I am willing to concede that a sufficiently complex computer might, accidentally or in a way not entirely within our understanding, develop a capacity for rational thought that we would recognise as sapience, or at least animal-level intelligence.

      I am not willing to concede that such a computer could develop a capacity for what we recognise as emotion without it being intentionally designed in, and if it's designed in, we would necessarily understand it. This happens in fiction a lot because it's more compelling to anthropomorphise AI characters, not because it's particularly plausible.