Cory Doctorow has a good write-up on the reverse centaur problem and why there’s no foreseeable way that LLMs could be profitable. Because they’re error-prone, LLMs are really only suited to low-stakes uses, and people have found lots of low-stakes, low-value uses for them. But they need high-value use-cases to be profitable, and all of the high-value use-cases anyone has identified for them are also high-stakes.
Thank you. This is a good article. Are there any good book-length treatments of this topic I could read?
I do not know. Perhaps *Artificial Intelligence: A Guide for Thinking Humans* by Melanie Mitchell.