Losing control of artificial intelligence (AI) is the biggest concern in computer science, the Technology Secretary has said.

Michelle Donelan said a Terminator-style scenario was a “potential area” where AI development could lead but “there are several stages before that”.

  • SmoothIsFast@citizensgaming.com

    The biggest risk is idiots not understanding it's a prediction engine based on probabilities from its training set and trying to assign intelligence to it, like they are doing here. They are not gonna go out of control Skynet-style and gain sentience; most likely they'll just hit actual edge cases and fail completely. Like AI target detection flagging bushes as tanks, pickups as tanks, etc. Or self-driving cars running into people. If the environment and picture are new, a probability engine has two choices: produce false negatives and refuse to detect anything it doesn't recognise, or produce false positives, which may cause severe harm.
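
    A minimal sketch of that trade-off, assuming a hypothetical detector that flags anything above a confidence threshold (the labels, scores, and thresholds below are made up for illustration):

    ```python
    # Hypothetical detector output: (true object, model confidence it is a tank).
    detections = [
        ("tank",   0.92),  # real tank, high confidence
        ("bush",   0.81),  # bush that happens to resemble a tank
        ("pickup", 0.77),  # pickup truck that also scores highly
        ("tank",   0.55),  # real tank at an odd angle -- an edge case
    ]

    def flag_tanks(scores, threshold):
        """Flag anything whose confidence clears the threshold as a tank."""
        return [(label, conf >= threshold) for label, conf in scores]

    # A low threshold catches every tank but also fires on the bush and the
    # pickup (false positives); a high threshold ignores them but misses the
    # odd-angle tank (false negative). No single threshold avoids both.
    for threshold in (0.5, 0.9):
        print(f"threshold={threshold}: {flag_tanks(detections, threshold)}")
    ```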

  • SbisasCostlyTurnover

    Ahh. Several stages before that.

    Humanity has shown itself to be pretty shit at stopping the worst from happening by taking preemptive action.

    • tankplanker@lemmy.world

      This is just a diversion from the more imminent threat to jobs, which Rishi claimed at the same conference isn't an issue, instead parroting Microsoft Copilot marketing material.

      Do we need to start putting in rules around failsafes for more complex system-wide AIs? Yes. Is it as time-sensitive as putting in job protections? Fuck no.

      Without it, businesses will just take the cheapest option they can get away with, and those that don't will not be able to compete on price.

  • AutoTL;DR@lemmings.world

    This is the best summary I could come up with:


    Asked about such a threat as he arrived at the summit on Thursday, Prime Minister Rishi Sunak said “we can’t be certain” about the risks of AI but there is a possibility they could be on a similar scale to pandemics and nuclear war.

    Mr Sunak held a flurry of bilateral meetings with United Nations secretary-general Antonio Guterres, European Commission President Ursula von der Leyen and Italian Prime Minister Giorgia Meloni after arriving in Bletchley.

    Ms Meloni said she was “proud of (her) friendship” with her UK counterpart and hopes they work together on artificial intelligence (AI) to “solve the biggest challenge that maybe we have in this millennium”.

    Following the meetings, he sat down for a roundtable discussion with figures including Kamala Harris and Ms Meloni, telling the US vice-president that her country’s executive order on AI, signed just days before the summit, was “very welcome in this climate”.

    “I wanted us to have a session to talk about this issue as leaders with shared values in private and hear from all of you about what you’re most excited about, what you’re concerned about and how we can look back in five years’ time on this moment and know that we made the right choices to harness all the benefits of AI in a way that will be safe for our communities but deliver enormous potential as well.”

    It comes after the first day of the summit saw delegations from around the world, including the US and China, agree on the so-called “Bletchley declaration” – a statement on the risks surrounding the technology to be used as the starting point for a global conversation on the issue.


    The original article contains 1,044 words, the summary contains 278 words. Saved 73%. I’m a bot and I’m open source!

  • ᴇᴍᴘᴇʀᴏʀ 帝

    I have questions…

    What if the rogue AIs mocked our genitals mercilessly before killing us? I think that would be worse.

    And how many stages? Is anyone keeping an eye on developments? Because they have drones hunting and killing humans now. Robot dogs with guns. Who gets to pull the plug on research that is too risky?

    • SmoothIsFast@citizensgaming.com

      It’s not rogue AI, it’s dumb prediction engines getting sold as intelligence.

      Who gets to pull the plug on research that is too risky?

      Guessing that will only happen once a cover-up gets exposed where AI systems ended up causing friendly fire or mass casualties after hitting an unknown edge case.

    • thehatfox@lemmy.world

      We are still a long way from the sort of AGI found in dystopian science fiction. But "AI" can be very unintelligent and still be potentially very dangerous, and the sort of hyperbolic doom-mongering that keeps coming up in these discussions is distracting from that.