I realized a while back that one of the primary goals of these LLMs is to get people to continue using them. While that's not especially notable - the same could be said of many consumer products and services - the way this manifests in LLMs is pretty heinous.
This need for continued use is why, for example, Google's AI was returning absolute nonsense when asked about the origins of fictitious idioms. These models are designed to return something, and to make that something pleasing to the reader, truth and utility be damned. As long as the user thinks that they're getting what they wanted, it's mission accomplished.
The first time I tried this was with my kid's crib. I ended up forgetting it was there and had to borrow tools from the movers instead. I got most of the way through the disassembly before I found the hidden allen key and realized I'd outsmarted myself.