There’s no such thing as an LLM “knowing” an answer. It will never go “oh, I don’t know the answer, let me make up a plausible-sounding thing instead.” It will just always produce a plausible-sounding answer whenever it can. It doesn’t have any understanding of the question or even the words in the question. It’s like a very advanced cargo cult of words and language: it sees the order of words and how they’re used in response to other words and just learns patterns. It has literally no understanding of those words. It’s just really good at patterns, so it can be correct a lot of the time. The only time you’ll see an “I don’t know” from an LLM is if it can’t generate a response, which usually means the prompt was overly constrained or it was fed gibberish.
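To make the “it always produces something” point concrete, here’s a minimal sketch of autoregressive sampling, not any real model’s API (the vocabulary is a toy and the logits are random stand-ins): the model just turns context into a distribution over tokens and decoding keeps picking one, with no built-in abstain step.

```python
# Minimal sketch of autoregressive generation (toy vocab, fake logits).
# The point: every step yields *some* token; nothing checks whether
# the model actually "knows" the answer.
import math
import random

VOCAB = ["Paris", "London", "is", "the", "capital", "I", "don't", "know", "."]

def fake_logits(context):
    """Stand-in for a trained network: one score per vocab token.
    A real model computes these from learned weights; random here."""
    rng = random.Random(hash(tuple(context)))
    return [rng.uniform(-2, 2) for _ in VOCAB]

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def generate(prompt_tokens, steps=5):
    context = list(prompt_tokens)
    for _ in range(steps):
        probs = softmax(fake_logits(context))
        # Sampling always succeeds: a plausible-looking token comes out
        # whether or not the underlying "fact" was ever learned.
        context.append(random.choices(VOCAB, weights=probs, k=1)[0])
    return " ".join(context)

print(generate(["The", "capital", "of", "Australia", "is"]))
```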
I don’t know who you guys think needs these semi-correct explanations that are half personal opinion. It seems fairly obvious that the pattern recognition you describe could well result in something that can reasonably be considered understanding.