• cm0002@lemmy.world
      6 months ago

      I fail to see what he or your comment has to do with Generative AI models, which is what we are talking about.

      I don’t think you fully understand how generative AI models work. The training data is used in a similar, though far more rudimentary, way to how humans learn. The model itself contains no recognizable original data, just a large set of numbers, math, and weights, in an attempt to simulate the neurons and synaptic pathways our brains form when we learn things.
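      To make that concrete, here is a minimal, purely illustrative sketch (a toy one-layer "network" in plain Python, not any real model's code): after training, all that remains is a handful of floats. The training example nudges the weights and is then discarded; it is not stored anywhere in the model.

```python
import random

# A toy "model": one linear layer with 4 inputs and 1 output.
# After training, this is all a model is: numeric weights, nothing more.
random.seed(0)
weights = [random.uniform(-1, 1) for _ in range(4)]
bias = 0.0

def predict(x):
    # Weighted sum of inputs plus bias; no training example is looked up.
    return sum(w * xi for w, xi in zip(weights, x)) + bias

def train_step(x, target, lr=0.1):
    # One gradient-descent step: the (input, target) pair nudges the
    # weights, then is thrown away. Only the adjusted numbers remain.
    global bias
    error = predict(x) - target
    for i in range(len(weights)):
        weights[i] -= lr * error * x[i]
    bias -= lr * error

x, target = [1.0, 0.5, -0.5, 2.0], 3.0
for _ in range(100):
    train_step(x, target)

print(all(isinstance(w, float) for w in weights))  # True: just floats
print(abs(predict(x) - target) < 1e-3)             # True: it "learned"
```

      Real generative models do the same thing at vastly larger scale (billions of weights), which is why you can open a model checkpoint and find only tensors of numbers, not copies of the images or text it was trained on.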

      Yes, a carefully crafted prompt can get it to spit out a near-identical copy of something it was trained on (assuming it was trained on enough of the target artist’s work to begin with), but so can humans. Humans have gotten in trouble when attempting to profit off such copies, and in those cases justice must be served regardless of whether it was an AI or a human that reproduced the work.

      But using something that was publicly available on the Internet as input is fair game, just as any human might study a sampling of images to nail down a certain style. Humans are simply far more efficient at it, needing far, far less data.

        • cm0002@lemmy.world
          6 months ago

          Not all AIs do; the more “traditional” ones you’re probably thinking of don’t. The ones generating text, images, and video, however, are deep learning neural networks (transformers for text, and diffusion models or Generative Adversarial Networks for images and video), and those do learn, albeit in a rudimentary fashion compared to humans, but learning nonetheless.