So what is currently the best and easiest way to use an AMD GPU? For reference, I own an RX 6700 XT and want to run a 13B model, maybe SuperHOT, but I'm not sure if my VRAM is enough for that. Until now I've always stuck with llama.cpp, since it's quite easy to set up. Does anyone have any suggestions?
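For context, this is roughly how I've been running things through the llama-cpp-python bindings. It's only a sketch: the build flag, model path, and layer count below are placeholders/assumptions, not a tested recipe for the 6700 XT.

```python
# Rough sketch, assuming llama-cpp-python was built with ROCm/hipBLAS support,
# e.g. something like: CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama(
    model_path="./models/13b-superhot.q4_K_M.gguf",  # placeholder path to a quantized 13B file
    n_gpu_layers=35,  # offload as many layers as fit in the 6700 XT's 12 GB of VRAM
    n_ctx=4096,       # SuperHOT variants target longer contexts
)

out = llm("Q: Will a 4-bit 13B model fit in 12 GB of VRAM? A:", max_tokens=64)
print(out["choices"][0]["text"])
```

My understanding is that a 4-bit 13B model is roughly 8 GB of weights, so it should fit in 12 GB with some headroom for context, but I haven't verified that on this card.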

  • saplingtree@kbin.social
    1 year ago

    Just pay nvidia their ill-earned ounce of flesh. I say this as a strong AMD advocate.

    It’s clear that AMD isn’t serious about the AI market. They had years to provide a proper competitor to CUDA, or at the very least a 1:1 compatibility layer. Instead of doing either of those things, AMD kept messing with half-assed projects like ROCm and the other one, whose name I don’t care to look up. AMD has the resources to build a CUDA-compatible API in under six months, but for some reason they don’t. I don’t know why, and at this point I don’t really care.

    Buy an AMD GPU for AI at your own risk.