Acly/krita-ai-diffusion: Streamlined interface for generating images with AI in Krita. Inpaint and outpaint with optional text prompt, no tweaking required.
This is wickedly cool and I was wanting something like this last night. Crazy that I stumble on it the morning after.
This is suuuper cool, but looks like having linux+amdgpu limits me to the cloud option.
I suppose this is because we don't have a DirectML equivalent yet.
There have been some efforts to run pytorch and StableDiffusion on ROCm. Not sure if that could be combined with this.
Crap, I was hoping to try it. I wonder if AMD will announce something in their FOSS / AI event.
It works today. The only problem I have is that the memory management is pretty poor, and it's easy to run out of VRAM.
RX 7600 8GB + 5900X, ROCm 5.7.1, PyTorch 2.1
Interesting! Got any links that explain how to set it up?
I just got a laptop with an RX 6700M 10GB and am eager to try it :)
Not really. I’ve had to do quite a bit of experimentation.
My setup that I’ve settled on:
- Rocm system libraries from Arch Linux
- PyTorch nightly for Rocm pip installed into a venv (see instructions on pytorch homepage)
- Set HSA_OVERRIDE_GFX_VERSION to 11.0.0. This is specific to the RX 7600: it tells ROCm to use the RX 7900 code path, since the PyTorch build hasn't been compiled with RX 7600 support.
- Start software.
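The steps above might look something like this on an Arch system. This is a sketch, not a definitive recipe: the Arch package name, the `rocm5.7` nightly index URL, and the `python main.py` launch command are all assumptions, so check pytorch.org and your software's docs for the current versions.

```shell
# ROCm system libraries from the distro (package name is an assumption)
sudo pacman -S rocm-hip-runtime

# Isolated venv with the PyTorch nightly for ROCm
# (index URL per the instructions on the pytorch homepage; version may differ)
python -m venv venv
. venv/bin/activate
pip install --pre torch --index-url https://download.pytorch.org/whl/nightly/rocm5.7

# RX 7600 workaround: the prebuilt kernels don't cover it, so report
# the GPU as 11.0.0 (RX 7900 series) and reuse that code path
export HSA_OVERRIDE_GFX_VERSION=11.0.0

# Start the software from inside the same venv (command is an assumption)
python main.py
```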
Thanks!
CUDA core FOMO is real.
Nope. Quite happy with non-binary drivers that I can actually debug and send fixes back to AMD with.
I don’t disagree, but market precedence is market precedence ^^; thank glob the industry moves fast on that, though. I am also happy that AMD is bringing ROCm to general consumers by way of the Ryzen APUs, rather than just pandering to enterprise clients like NVIDIA does.
Hmm, that’s weird. I was able to run Stable Diffusion locally with Linux + RX6600.
Probably because I used Easy Diffusion. At first I couldn’t get GPU acceleration to work, and I was constantly running out of system RAM (not VRAM), so my system kept freezing and crashing.
It turned out to be a ROCm bug. I don’t know if it’s fixed by now, but I remember “fixing it” by setting an environment variable to a previous version.
After that, it all worked really well. It took between 30 seconds and 2 minutes to make an image.
I got it to work yesterday. You have to go into the Python venv it installs, remove torch, and reinstall it the way the ComfyUI GitHub README describes.
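A minimal sketch of that torch swap, assuming the plugin's server lives alongside a `venv` directory and that the ComfyUI README still points at a ROCm wheel index (the `rocm5.7` version here is an assumption; copy the exact command from the README):

```shell
# Activate the venv the plugin created (path is an assumption)
. venv/bin/activate

# Remove the bundled CUDA builds of torch
pip uninstall -y torch torchvision torchaudio

# Reinstall the ROCm builds per the ComfyUI README
pip install torch torchvision torchaudio \
    --index-url https://download.pytorch.org/whl/rocm5.7
```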
this looks wicked. definitely going to check that out