• tal@lemmy.today

    The bill, passed by the state’s Senate last month and set for a vote from its general assembly in August, requires AI groups in California to guarantee to a newly created state body that they will not develop models with “a hazardous capability,” such as creating biological or nuclear weapons or aiding cyber security attacks.

    I don’t see how you could realistically provide that guarantee.

    I mean, you could create some kind of best-effort thing to make it more difficult, maybe.

    If we knew how to make AI – and this is going past just LLMs and stuff – avoid doing hazardous things, we’d have solved the Friendly AI problem. Like, that’s a good idea to work towards, maybe. But the point is, we’re not there.

    Like, I’d be willing to see the state fund research on that problem, maybe. But I don’t see how just mandating that models conform is going to be implementable.

    • Warl0k3@lemmy.world

      That’s on the companies to figure out, tbh. I know “you can’t tell us we aren’t allowed to build biological weapons, that’s too hard” isn’t what you’re actually saying; it’s a hyperbolic example. The industry needs to figure out how to control the monster they’ve happily sent staggering towards the village, and really they’re the only people with the knowledge to figure out how to stop it. If that’s not possible, maybe we should restrict this tech until it is possible. LLMs aren’t going to end the world, probably, but a protein-sequencing AI that hallucinates while replicating a flu virus could be real bad for us as a species, to say nothing of the pearl-clutching scenario of bad actors getting ahold of it.

      • 5C5C5C@programming.dev

        Yeah, that’s my big takeaway here: if the people who are rolling out this technology cannot make these assurances, then the technology has no right to exist.

      • tal@lemmy.today

        1. There are many tools that might be used to create a biological weapon or something. You can use a pocket calculator for that. But we don’t place bars on the sale of pocket calculators requiring proof that nothing hazardous can be done with them. That is, this is a bar substantially higher than exists for any other tool.

        2. While I certainly think that there are legitimate existential risks, we are not looking at a near-term one. OpenAI or whoever isn’t going to be producing something human-level any time soon. Like, Stable Diffusion, a tool used to generate images, would fall under this. It’s very questionable, however, that it would be terribly useful for doing anything dangerous.

        3. California putting a restriction like that in place, absent some kind of global restriction, won’t stop development of models. It just ensures that it’ll happen outside California. Like, it’ll have a negative economic impact on California, maybe, but it’s not going to have a globally-restrictive impact.

        • FaceDeer@fedia.io

          Like, Stable Diffusion, a tool used to generate images, would fall under this. It’s very questionable, however, that it would be terribly useful for doing anything dangerous.

          My concern is how short a hop it is from this to “won’t someone please think of the children?” And then someone uses Stable Diffusion to create a baby in a sexy pose and it all goes down in flames. IMO that sort of thing happens enough that pushing back against “gateway” legislation is reasonable.

          California putting a restriction like that in place, absent some kind of global restriction, won’t stop development of models.

          I’d be concerned about its impact on the deployment of models too. Companies are not going to want to write software that they can’t sell in California, or that might get them sued if someone takes it into California despite it not being sold there. Silicon Valley is in California; this isn’t like Montana banning it.

        • Mouselemming@sh.itjust.works

          So, the monster was given a human brain that was already known to be murderous. Why it was murderous, we don’t know, but a good bet would be childhood abuse and fetal alcohol syndrome, maybe inherited syphilis, given the era. Now that murderer’s brain is given an extra-strong body, and then subjected to more abuse and rejection. That’s how you create a monster.

        • FaceDeer@fedia.io

          Indeed. If only Frankenstein’s Monster had been shunned, nothing bad would have happened.

          • Warl0k3@lemmy.world

            You two may not be giving me enough credit for my choice of metaphors here.

      • conciselyverbose@sh.itjust.works

        It’s not a monster. It doesn’t vaguely resemble a monster.

        It’s a ridiculously simple tool that does not in any way resemble intelligence and has no agency. LLMs do not have the capacity for harm. They do not have the capability to invent or discover (though if they did, that would be a massive boon for humanity and also insane to hold back). They’re just a combination of a mediocre search tool with advanced parsing of requests and the ability to format the output in the structure of sentences.

        AI cannot do anything. If your concern is allowing AI to release proteins into the wild, obviously that is a terrible idea. But that’s already more than covered by all the regulation on research into dangerous diseases and bioweapons. AI does not change anything about the scenario.

        • Carrolade@lemmy.world

          I largely agree: current LLMs add no capabilities to humanity that it did not already possess. The point of the regulation, though, is to encourage a certain degree of caution in future development.

          Personally, I do think it’s a little overly broad. A Google search can aid in a cybersecurity attack. The kill-switch idea is also a little silly, and largely a waste of time dreamed up by watching too many Terminator and Matrix movies. While we might eventually reach a point where that becomes a prudent idea, we’re still quite far away.

          • conciselyverbose@sh.itjust.works

            We’re not anywhere near anything that has anything in common with human-level intelligence, or that poses any threat.

            The only possible cause for supporting legislation like this is either a complete absence of understanding of what the technology is, combined with treating Hollywood as reality (the layperson and probably most legislators involved in this), or an aggressive market-control attempt through regulatory capture by big tech. If you understand where we are and what paths we have forward, it’s very clear that this can only do harm.

    • joewilliams007@kbin.melroy.org

      You can guarantee it by feeding the model only information with weapons information excluded. The information they use now is just every single piece of data scraped from the internet.
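
      To make the “train only on filtered data” idea concrete, here’s a minimal sketch of what filtering a scraped corpus before training might look like. The blocklist terms and the filter_corpus() helper are hypothetical illustrations, not any real training pipeline’s API.

      ```python
      import re
      from typing import Iterable, Iterator

      # Hypothetical blocklist. A real effort would need expert-curated term
      # lists and trained classifiers; naive keywords both over- and under-filter.
      BLOCKLIST = re.compile(
          r"\b(enrich\w*\s+uranium|nerve\s+agent|pathogen\s+synthesis)\b",
          re.IGNORECASE,
      )

      def filter_corpus(documents: Iterable[str]) -> Iterator[str]:
          """Yield only documents that contain no blocklisted phrase."""
          for doc in documents:
              if not BLOCKLIST.search(doc):
                  yield doc

      if __name__ == "__main__":
          scraped = [
              "A recipe for sourdough bread.",
              "Notes on enriching uranium with centrifuges.",  # dropped by the filter
          ]
          for doc in filter_corpus(scraped):
              print(doc)  # prints only the sourdough document
      ```

      Even then, this only catches surface matches: a model can still recombine benign chemistry and biology text into something hazardous, which is roughly the objection at the top of the thread. Filtering the training data is a mitigation, not a guarantee.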