Yep. With a relatively modern midrange computer and the most basic technical knowledge, anyone can set up and run at least Stable Diffusion (and if they have an Nvidia GPU, "relatively modern" stretches back to the better 10 series cards from nearly a decade ago) and do basically anything with it, limited mainly by how much VRAM and RAM they have versus the size of the images they want to generate.
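It really is that simple - here's roughly the whole process as a bare-bones sketch using the Hugging Face diffusers library (the model ID is just an example checkpoint, and the exact settings depend on your card and how much VRAM it has):

```python
# Minimal local text-to-image sketch with diffusers.
# Assumes a CUDA-capable Nvidia GPU; the checkpoint ID is illustrative.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # any Stable Diffusion 1.5 weights work here
    torch_dtype=torch.float16,         # half precision to fit in less VRAM
)
pipe = pipe.to("cuda")
pipe.enable_attention_slicing()        # trades a little speed for lower peak VRAM

image = pipe("a photo of a lighthouse at sunset").images[0]
image.save("lighthouse.png")
```

That's the entire pipeline: download the weights once, paste in a prompt, get an image.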
The one saving grace is that despite how trivially accessible these extremely powerful tools are, most of the AI enthusiast community is made up of dipshit chuds who struggle to operate a simple prompt input box in something like A1111, and who cry about how hard and confusing ComfyUI is to use - when it's literally just a node-based flowchart that holds your hand through the whole process.
At this point there's no putting the brakes on it, no. All the tools someone could ever need already exist, and they're either general-purpose (basic computer hardware) or free programs with relatively small model weight files - wiping out image-generating AI now would be like trying to wipe out media piracy.
And it's only going to get worse as the relatively crude and inefficient algorithms are improved further. Flux itself effectively required a 3090 or 4090 when it was released because of its extremely high VRAM requirements, and within just a few weeks people had squeezed it down to run on old 10 series cards with 8GB of VRAM. That's a terrifying photorealistic generator running on ancient hardware, even if it doesn't run well.
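The squeezing, broadly, is quantized checkpoints plus offloading model weights to system RAM so the GPU only holds what it's actively using. A rough sketch with diffusers (the model ID is the public schnell release; treat the settings as illustrative, since what actually fits depends on your card and which quantized files you grab):

```python
# Sketch of running Flux on a low-VRAM card via CPU offloading.
# Assumes the diffusers library; quantized community checkpoints shrink it further.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell",  # the openly licensed Flux variant
    torch_dtype=torch.bfloat16,
)
pipe.enable_sequential_cpu_offload()     # keep most weights in system RAM, stream to GPU

image = pipe(
    "a photorealistic portrait of an elderly fisherman",
    num_inference_steps=4,               # schnell is distilled to need very few steps
).images[0]
image.save("portrait.png")
```

It's slow as hell run that way, but it runs, which is the point.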
Even though the AI bubble is going to burst - actual tech companies are struggling to monetize shitty proprietary image generators and garbage LLMs with no use value, and their investors are getting impatient and annoyed at the losses - the tech itself isn't going anywhere. Even if the major tech funding dries up, there's a ton of open source independent work being done by enthusiasts.
I think I understand. Still hate it, but I think I understand anyway.
Reminds me of how “deepfakes” are sort of home grown and open source creep shit now.
Sounds like there’s absolutely no means to restrict deepfaking, including deepfaking of children. What could go wrong?
Fuck.