I figured out how to remove most of the safeguards from some AI models. I don’t feel comfortable sharing that information with anyone. I have come across a few layers of obfuscation to make this type of alteration more difficult to find and sort out. This caused me to realize, a lot of you are likely faced with similar dilemmas of responsibility, gatekeeping, and manipulating others for ethical reasons. How do you feel about this?

  • mark@programming.dev · 3 days ago

    Ok, you’ve piqued my curiosity.

    but with large potential consequences.

    What are some of the consequences you see?

    • j4k3@lemmy.world (OP) · edited · 2 days ago

      The primary risk is to girls and young women in the real world: predatory boys and men portraying them in generated imagery, alone or with others. The most powerful filtering exists to make exactly that more difficult.

      Whether intentional or not, most NSFW LoRA training seems to be an attempt to override the built-in filtering in very specific areas. These LoRAs are still useful for adding direct momentum toward something specific. However, once the filters are removed, the base model is far more capable of creating whatever you ask for as-is, from celebrities to anything lewd. I did a bit of testing earlier with some LoRAs and no prompt at all. Surprisingly, it could take a celebrity and convert their gender in still-recognizable ways. I got a few of those on random seeds, but I haven’t been able to reproduce it with a prompt or deterministically.

      Edit: I’m probably assuming too much about other people’s knowledge of these systems, and I assume that’s the motivation for the downvoting. In this discussion, the NSFW stuff is shorthand for the broader issues with AI generation. It is the primary target of filtering, and that filtering has large cascading implications elsewhere. By stating what is possible in this area, I’m pointing at a worst-case example: how the model behaves here says volumes about how it will react in other areas.

      These filter layers are stupidly simplistic compared to the actual model. Their tensors are on the order of a few thousand parameters per layer, versus tens of millions of parameters per layer for the model itself. They funnel tons of material into gutter-like responses for no good reason. Sometimes these average out and you still get a good output; other times they do not.

      Another key point is that diffusion has a lot in common with text generation in this part of the model-loader code. Text generation is doing more overall, but diffusion is an effective way to learn how text gen works, especially with training. That is my primary reason for playing with diffusion: to learn about training. I’ve tried training for text gen, but it is very difficult to assess what is happening under the surface, like when it is learning overall style, character traits and personas, pacing, creativity, timeline, history, scope, constraints, and so on. I don’t care to generate and share much imagery unless I’m trying to do something specific that is interesting. For example, I tried to generate the interior of an O’Neill cylinder space habitat, and it illustrated a fundamental limitation of diffusion: the model showed no reasoning or understanding of the object context and relationships required to render a scene with curved, centrifugal spin gravity.

      Anyway, my interests are not in generating NSFW content or celebrities or whatnot; I do not think people should do these things. My primary interest is returning to creative writing with an AI collaborative writing partner that is not politically biased in a way that cripples it from participating in an entirely different and unrelated cultural and political landscape. I have no aspirations of finding success in my writing. I simply enjoy exploring my own science fiction universe and imagining a reality many thousands of years from now. One of the changes to hard-coded model filters earlier this year made filtering more persistent, likely because of NSFW content. I get it, and I support it, but it took away one of the few things I have really enjoyed over the last 10 years of social isolation and disability, so I’ve tried to get that back. Sorry if that offends someone, though I don’t understand why it would. This was not my intended reason for the post, so I did not explain it in depth. The negativity here is disturbing to me. This place is my only real way to interact with other humans.