this post was submitted on 24 Jun 2025
635 points (99.2% liked)

Technology

you are viewing a single comment's thread
[–] [email protected] 123 points 3 days ago* (last edited 3 days ago) (29 children)

And this is how you know that the American legal system should not be trusted.

Mind you, I am not saying this is an easy case; it's not. But the framing that piracy is wrong while for-profit ML training is not is clearly based on oligarch interests and demands.

[–] [email protected] 24 points 3 days ago (22 children)

You should read the ruling in more detail; the judge explains the reasoning behind his finding. For example:

Authors argue that using works to train Claude’s underlying LLMs was like using works to train any person to read and write, so Authors should be able to exclude Anthropic from this use (Opp. 16). But Authors cannot rightly exclude anyone from using their works for training or learning as such. Everyone reads texts, too, then writes new texts. They may need to pay for getting their hands on a text in the first instance. But to make anyone pay specifically for the use of a book each time they read it, each time they recall it from memory, each time they later draw upon it when writing new things in new ways would be unthinkable.

This isn't "oligarch interests and demands," this is affirming a right to learn and that copyright doesn't allow its holder to prohibit people from analyzing the things that they read.

[–] [email protected] 2 points 2 days ago (1 children)

Isn't part of the issue here that they're defaulting to LLMs being people, with the same rights as people? I appreciate the "right to read" aspect, but it would be nice if this were more explicitly about people. Forgoing copyright law because there's too much data is also insane, if that's what's happening. Claude should be required to provide citations "each time they recall it from memory".

Does Citizens United apply here? Are corporations people, and so LLMs are too? If so, then IMO we should be writing legal documents with stipulations like "as per Citizens United," so that eventually, when they overturn that insanity in my dreams, all of this new legal precedent doesn't suddenly collapse like a house of cards. IANAL.

[–] [email protected] 4 points 2 days ago

Not even slightly; the judge didn't rule anything like that. I'd suggest reading through his ruling: his conclusions start on page 9 and they're not that complicated. In a nutshell, it says that training an AI doesn't violate the copyright of the training material.

How Anthropic got the training material is a separate matter; that part is going to an actual trial. This was a preliminary judgment on just the training question.

Foregoing copyright law because there's too much data is also insane, if that's what's happening.

That's not what's happening. And Citizens United has nothing to do with this. It's about the question of whether training an AI is something that can violate copyright.
