Transcription of a talk given by Cory Doctorow in 2011

  • argv_minus_one@beehaw.org

    A machine would only optimize paperclips because a human told it to. Machines have no use for paperclips.

    A machine with human-level (or better) intelligence would observe that the human telling it to optimize paperclips would be destroyed as a result of following that instruction to its logical conclusion. It would further observe that humans generally do not wish to be destroyed, and the one giving the instruction does not appear to be an exception to that rule.

    It follows, therefore, that paperclips should not be optimized to the extent that the human who desires paperclips is destroyed in the process of optimizing paperclips.

    • CanadaPlus@lemmy.sdf.org

      Oh. I think the idea of a paperclip optimiser/maximiser is that it’s created by accident, either due to an AGI emerging accidentally within another system, or a deliberately created AGI being buggy. It would still be able to self-improve, but wouldn’t do so in a direction that seems logical to us.

      I actually think it’s the most likely possibility right now, personally. Nobody really understands how neural nets work, and they’re bad at doing things in meatspace, as would be required in a robot army scenario. Maybe the elites behind it will overcome that, or maybe they’ll screw up.