this post was submitted on 29 Aug 2023
67 points (95.9% liked)

Technology


Google's DeepMind unit is unveiling today a new method it says can invisibly and permanently label images that have been generated by artificial intelligence.

all 21 comments
[–] [email protected] 37 points 2 years ago (1 children)

Why it matters: It's become increasingly hard for people to distinguish between images made by humans and those generated by AI programs. Google and other tech giants have pledged to develop technical means to do so.

You don't need a watermark for good intentions, and a bad actor simply won't add one. If anything, a watermark could backfire, because the general public will start to assume that any image without a watermark is real.

[–] [email protected] 34 points 2 years ago* (last edited 2 years ago) (3 children)

TBF, I don't think the purpose of this watermark is to stop bad actors from passing AI images off as real. That would be a welcome side effect, but it's not why Google wants this. Ultimately, it's meant to keep AI training data from being contaminated with other AI-generated content. Imagine a training set containing a million images generated by previous models, complete with mangled fingers and crooked eyes; it would be hard to train a good AI on that. Garbage in, garbage out.
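If dataset hygiene really is the goal, the filtering step would look something like this. A minimal Python sketch, assuming a hypothetical `detect_watermark` function (DeepMind has not published SynthID's detection API, so the stub below is purely illustrative):

```python
# Hypothetical sketch of filtering scraped images before training.
# `detect_watermark` stands in for a real detector such as DeepMind's
# SynthID, whose internals are not public; the stub here just checks
# a fake byte prefix so the example is runnable.

def detect_watermark(image_bytes: bytes) -> bool:
    """Stub detector: pretend watermarked images start with b'AIGEN'."""
    return image_bytes.startswith(b"AIGEN")

def filter_training_set(images: list[bytes]) -> list[bytes]:
    """Drop images carrying the AI watermark before they reach training."""
    return [img for img in images if not detect_watermark(img)]

scraped = [b"AIGEN:robot.png", b"photo:dog.png", b"AIGEN:hands.png"]
print(filter_training_set(scraped))  # [b'photo:dog.png']
```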

[–] [email protected] 13 points 2 years ago (1 children)

So theoretically, those of us who put original images online could add this invisible watermark to make AI models leave our stuff out of their "steal this" pile?

[–] [email protected] 5 points 2 years ago

Yea actually, that has a good "taste of your own medicine" vibe
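For what it's worth, the basic idea is easy to sketch. Here's a toy least-significant-bit watermark in Python with NumPy. To be clear, this is not how DeepMind's SynthID works (their scheme is unpublished and designed to survive cropping and re-encoding, which this toy does not); it only shows that an imperceptible mark is possible:

```python
import numpy as np

# Toy invisible watermark: hide a fixed bit pattern in the
# least-significant bits of the first few pixel values.
# Changing an LSB alters a pixel by at most one intensity level,
# which is imperceptible to the eye.

PATTERN = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)

def embed(img: np.ndarray) -> np.ndarray:
    """Write PATTERN into the LSBs of the first len(PATTERN) pixels."""
    out = img.copy()
    flat = out.reshape(-1)                     # view into `out`
    flat[:len(PATTERN)] = (flat[:len(PATTERN)] & 0xFE) | PATTERN
    return out

def detect(img: np.ndarray) -> bool:
    """Check whether the LSB pattern is present."""
    flat = img.reshape(-1)
    return bool(np.array_equal(flat[:len(PATTERN)] & 1, PATTERN))

rng = np.random.default_rng(42)
img = rng.integers(0, 256, (4, 4), dtype=np.uint8)
marked = embed(img)
print(detect(marked))                          # True
print(np.max(np.abs(marked.astype(int) - img.astype(int))))  # 0 or 1
```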

[–] [email protected] 2 points 2 years ago (1 children)

AI-generated images are becoming increasingly realistic; even AI can't tell them apart anymore.

[–] [email protected] 15 points 2 years ago

iirc, AI models getting worse after being trained on AI-generated data is an actual issue right now. Even if we (or the AI) can't distinguish them from real images, there are subtle differences that compound into quite large ones when the AI is fed its own work over several generations, leading to degraded output.
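The compounding effect is easy to demonstrate with a toy example (my own illustration, not from the article): repeatedly fit a Gaussian "model" to samples drawn from the previous generation's model. Every individual step looks harmless, but finite-sample error accumulates across generations:

```python
import numpy as np

# Toy "model collapse" illustration: each generation fits a new model
# (mean and std) on samples produced by the previous generation's model.
# Sampling error compounds, so the fitted distribution drifts away from
# the real data it started from, even though each step seems reasonable.

rng = np.random.default_rng(0)
mu, sigma = 0.0, 1.0                            # generation 0: real data
for generation in range(100):
    samples = rng.normal(mu, sigma, size=25)    # "train" on model output
    mu, sigma = samples.mean(), samples.std()   # the next-generation model

print(mu, sigma)  # compare with the original (0.0, 1.0)
```

With most seeds the fitted sigma shrinks substantially over the generations; the exact trajectory varies, but the drift away from the original distribution is the point.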

[–] [email protected] 2 points 2 years ago

I’m not sure that’s the case. For instance, a lot of smaller local models leverage GPT4 to generate synthetic training data, which drastically improves the model’s output quality. The issue comes in when there is no QC on the model’s output. The same applies to Stable Diffusion.

[–] [email protected] 4 points 2 years ago

Spoiler: they will secretly make all humans in AI-generated art have slightly messed-up hands.

Mind blown!

[–] [email protected] 4 points 2 years ago* (last edited 2 years ago)

Says so in the article:

The watermark is part of a larger effort by Google and other tech giants to develop ways to verify the authenticity of AI-generated images.

This is important because AI-generated images are becoming increasingly realistic, and it can be difficult to tell them apart from real images.

[–] [email protected] 1 points 2 years ago

Its invisibility should really help all the laypeople see it clearly.