I believe this phenomenon is called "hallucination." It happens when a language model goes beyond what its training data supports and fabricates information out of thin air. All language models have this flaw, not just ChatGPT.
Are you sure AMD CPUs are safer?
https://arxiv.org/pdf/2108.04575.pdf