this post was submitted on 12 Jun 2025
22 points (62.2% liked)

Technology

[–] [email protected] 6 points 5 days ago (8 children)

"Hallucination" means something specific in the context of AI. It's a technical term, just like "putting an app into a sandbox" doesn't literally mean pouring sand into your phone.

Human hallucinations and AI hallucinations are very different concepts caused by very different things.

[–] [email protected] 3 points 5 days ago (7 children)

No it's not. "Hallucination" is marketing to make the fact that LLMs are unreliable sound cool. Simple as that.

[–] [email protected] 3 points 5 days ago (6 children)

Nope. Hallucinations are not a cool thing. They are a bug, not a feature. The term itself is also far from cool or positive. Or would you think it's cool if a human had hallucinations?

[–] [email protected] 2 points 5 days ago (2 children)

'Hallucinations' are not a bug, though; the model is working exactly as designed. There's no bug in the code that you can go in and change to 'fix' this.

LLMs are impressive auto-complete, but sometimes the auto-complete doesn't spit out factual information because LLMs don't know what factual information is.
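
To make the auto-complete point concrete, here's a deliberately toy sketch; the prompt, continuations, and probabilities are all made up, and no real model works from a hand-written table like this. The only point is that sampling picks a continuation by likelihood, with no step anywhere that asks whether it's true:

```python
import random

# Hypothetical next-token probabilities for one prompt; the continuations
# and numbers are invented purely for illustration.
next_token_probs = {
    "in 1969.": 0.55,   # factually correct continuation
    "in 1972.": 0.30,   # fluent but wrong
    "on Mars.": 0.15,   # fluent and very wrong
}

def sample_next(probs):
    """Pick a continuation weighted only by probability, the way sampling
    from an LLM picks the next token. There is no 'is this true?' check."""
    tokens = list(probs)
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

prompt = "The first crewed Moon landing happened"
print(prompt, sample_next(next_token_probs))
# Roughly 45% of runs will confidently print something false.
```

Scale that loop up to billions of parameters and a full vocabulary and you get fluent text that is right only as often as the learned probabilities happen to favour the right continuation.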

[–] [email protected] 1 points 5 days ago

They aren't a technical bug, but a UX bug. Or would you claim that an LLM that outputs 100% non-factual hallucinations and no factual information at all is just as desirable as one that doesn't?

Btw, LLMs don't have any traditional code at all.
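
Roughly speaking, a decoding step is just generic matrix arithmetic over learned weights. The sketch below is not any real model; the weights are random stand-ins for trained parameters, and it exists only to show that the behaviour lives in the numbers rather than in rules someone could go in and patch:

```python
import numpy as np

# Toy "model": random numbers standing in for trained parameters.
vocab_size, hidden = 8, 4
rng = np.random.default_rng(0)
W_embed = rng.normal(size=(vocab_size, hidden))   # token embedding table
W_out = rng.normal(size=(hidden, vocab_size))     # output projection

def next_token_distribution(token_id):
    """One decoding step: look up a vector, multiply by a matrix, softmax.
    No hand-written rules about facts appear anywhere."""
    h = W_embed[token_id]                 # embed the current token
    logits = h @ W_out                    # generic matrix arithmetic
    exp = np.exp(logits - logits.max())   # numerically stable softmax
    return exp / exp.sum()

print(next_token_distribution(3).round(3))
# Changing what this "model" says means changing the weights (training),
# not editing an if-statement -- there isn't one to edit.
```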

[–] dragonfly4933 1 points 5 days ago

I don't think calling hallucinations a bug is strictly wrong, but the model also isn't working as intended. The intent is defined by the developers or the company, and they don't want hallucinations, because hallucinations reduce the usefulness of the models.

I also don't think we know for a fact that this problem can't be solved with current technology; we simply haven't found a useful solution yet.
