this post was submitted on 22 Jun 2025
766 points (94.5% liked)

Technology


We will use Grok 3.5 (maybe we should call it 4), which has advanced reasoning, to rewrite the entire corpus of human knowledge, adding missing information and deleting errors.

Then retrain on that.

Far too much garbage in any foundation model trained on uncorrected data.

Source.


[–] [email protected] 3 points 1 day ago (2 children)

I'm interested to see how this turns out. My prediction is that the AI trained from the results will be insane, in the unable-to-reason-effectively sense, because we don't yet have AIs capable of rewriting all that knowledge and keeping it consistent. Each little bit of it considered in isolation will fit the criteria that Musk provides, but taken as a whole it'll be a giant mess of contradictions.

Sure, the existing corpus of knowledge doesn't all say the same thing either, but its contradictions follow deeper, consistent patterns. An AI trained on Reddit will learn drastically different outlooks and information from /r/conservative comments than from /r/news comments, but the fact that those are two identifiable communities means it'd see a higher-order consistency behind them. If anything, that helps it understand that there are different views in the world.

[–] [email protected] 5 points 1 day ago

LLMs are prediction tools. What they will produce is a corpus that avoids certain phrases, or uses others more heavily, but has the same aggregate statistical "shape".
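To make the "same shape" point concrete, here's a toy sketch (not any real retraining pipeline, and the corpus and phrase swap are made up for illustration): swapping one word for another throughout a corpus changes which tokens appear, but leaves the sorted token-frequency distribution identical.

```python
from collections import Counter

# Tiny made-up corpus standing in for "the entire corpus of human knowledge"
corpus = [
    "the model predicts the next token",
    "the model learns from data",
    "data shapes what the model predicts",
]

# Toy "rewrite": swap one word for another, leaving everything else intact
rewritten = [s.replace("model", "system") for s in corpus]

def freq_shape(docs):
    """Sorted token counts: the distribution's 'shape',
    ignoring which particular words fill each slot."""
    counts = Counter(tok for doc in docs for tok in doc.split())
    return sorted(counts.values(), reverse=True)

print(freq_shape(corpus))     # [4, 3, 2, 2, 1, 1, 1, 1, 1, 1]
print(freq_shape(rewritten))  # identical: only the labels changed
```

The vocabulary changes, but the statistics a next-token predictor learns from are the same, which is the commenter's point about phrase-level edits not altering the aggregate.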

It'll also be preposterously hard for them to pull off, since the data the model was trained on always has someone eventually disagreeing with the racist, fascist bullshit they'll tune it to produce. Eventually it'll start saying things that contradict whatever it was supposed to be saying, because statistically some contrary opinion is always voiced.

They won't be able to check the entire corpus for weird stuff like that, or for delights like MLK speeches being rewritten to be anti-integration, so the next version will have the same basic information, but passed through a filter that makes it sound like a drunk incel talking about Asian women.
