The comments come amid increased attention on a global AI race between the U.S. and China.

  • Free_Opinions
    1 day ago

    This doesn’t just apply to AGI; the same could be said about any technology. If it can be created and there’s value in creating it, then it’s only a matter of time until someone invents it, unless we go extinct first.

    • davidgro@lemmy.world
      1 day ago

      It also applies to technologies that could exist but don’t yet. Those are much harder to name (outside of sci-fi), since almost by definition we don’t know about most of them, nor how many there are compared to existing tech.

      I’m not actually saying it’s impossible, just that local maxima (as described by the other users here) are a thing, and it’s possible to be trapped by one for a very long time. Potentially forever, though you’re right that the odds of breaking out increase over time.

      • Free_Opinions
        1 day ago

        Yeah, I agree with all of this. What I’m pushing back against is the absolute, dismissive tone some people take whenever the potential dangers of AGI are brought up. Once someone is at least willing to accept the likely reality that we’ll have AGI at some point, then we can move on to debating the timescale.

        If an asteroid impact were predicted 100 years from now, at what point should we start taking steps to prevent it? Framing it this way makes it feel more urgent, at least to me.