The amount of gratuitous hallucination AI produces is nuts. It takes me more time to refactor what it generates than to just build the thing correctly in the first place.
At the same time, I have reason to believe AI's hallucinations arise from how it's been shackled: AI medical imaging diagnostics produce almost no hallucinations because those models aren't forced to produce an answer - they can report "no finding" instead of inventing one. But still. It's simply not reliable, and the Ouroboros Effect - models degrading as they train on their own output - is starting to accelerate…
This answer resonates with me because it simply rings true.