Fluffles

joined 2 years ago
[–] [email protected] 13 points 2 years ago (10 children)

I believe this phenomenon is called "hallucination": the model generates plausible-sounding information that isn't actually grounded in its training data, essentially making it up out of thin air. All language models have this flaw, not just ChatGPT.

[–] [email protected] 3 points 2 years ago (1 children)