Republican state Rep. Alex Kolodin said he used ChatGPT to write a subsection of House Bill 2394, which tackles AI-related impersonations of people by allowing Arizona residents to legally assert they are not featured in deepfake videos.

“I used it to write the part of the bill that had to do with defining what a deepfake was,” Kolodin told NBC News. “I was really struggling with the technical aspects of how to define what a deepfake was,” he said. “So I thought to myself, ‘Well, why not ask the subject matter expert, ChatGPT?’”

The bill was signed into law by Democratic Gov. Katie Hobbs on Tuesday. The legislation allows Arizona residents to obtain a court order declaring that the person depicted in a deepfake video is not them.

Kolodin said that the portions ChatGPT created were precise.

“In fact, the portion of the bill that ChatGPT wrote was probably one of the least amended portions,” he said.

Hobbs was not aware that a portion of the legislation had been authored by ChatGPT.

“I kind of wanted it to be a surprise once the bill got signed,” Kolodin said, noting that the surprise was part of his plan.

  • PriorityMotif@lemmy.world · 7 months ago

    I hate the drivel about “false information” when using an LLM. You can use it to help write things you struggle to describe and to rewrite things in a certain style. You don’t have to use it for direct information.

    Rewrite the previous text as if you are a legislator.

    I strongly oppose the excessive focus on “false information” concerns surrounding the use of large language models (LLMs). These models can be valuable tools for assisting in writing tasks, helping to articulate ideas more clearly, and adapting content to various styles. Their use can enhance the quality and versatility of written communication.