There’s a lot of explaining to do for Meta, OpenAI, Anthropic (Claude) and Google Gemini to justify their prices now that there’s a literal open-source model that can do the basics.
I’m testing the 14B Qwen distill of DeepSeek R1 through Ollama and it’s impressive. I think I could switch most of my current ChatGPT usage over to it (not a lot, I should admit). Hardware is an AMD 7950X3D with an NVIDIA 3070 Ti. Not the cheapest hardware, but not the most expensive either. It’s of course not as good as the full model on deepseek.com, but I can run it truly locally, right now.
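For anyone wanting to try the same setup, this is roughly what it looks like with Ollama installed (the model tag `deepseek-r1:14b` is the Qwen-based distill as listed in the Ollama model library; the prompt is just an example):

```shell
# Download the 14B Qwen-distilled DeepSeek R1 weights (several GB)
ollama pull deepseek-r1:14b

# Run it interactively, fully offline once the weights are cached
ollama run deepseek-r1:14b

# Or send a one-off prompt straight from the command line
ollama run deepseek-r1:14b "Summarize the tradeoffs of running an LLM locally"
```

On a 3070 Ti (8 GB VRAM) the quantized 14B model will partially offload to system RAM, so expect slower generation than a fully GPU-resident model.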
You still need expensive hardware to run it. Unless the myceliumwebserver project takes off.