this post was submitted on 27 Jun 2025
52 points (98.1% liked)

PieFed Meta

A way for people to screen out AI-generated content, similar to how they can already screen out NSFW content.

It works the same as the NSFW filter, except that underneath it's an integer instead of a boolean.

Content authors will flag their content as AI-generated on a sliding scale from 0 (no AI) to 100 (completely AI-generated). Intermediate values could be used too, although that amount of nuance may be confusing for some.

Users will be able to set an "AI-generated threshold" that filters out content above it. At the UI level this could be presented as a checkbox, but a slider might work well too.
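
A minimal sketch of what that filter step could look like in Python. The `ai_generated` field and function names here are illustrative, not PieFed's actual schema:

```python
from dataclasses import dataclass

# Hypothetical post model with the proposed 0-100 integer flag.
@dataclass
class Post:
    title: str
    ai_generated: int  # 0 = no AI, 100 = completely AI-generated

def visible_posts(posts: list[Post], threshold: int) -> list[Post]:
    """Keep only posts at or below the user's AI-generated threshold."""
    return [p for p in posts if p.ai_generated <= threshold]

posts = [
    Post("Hand-written essay", 0),
    Post("AI-assisted summary", 40),
    Post("Fully generated image", 100),
]

# A user who tolerates some AI assistance but not fully generated content:
print([p.title for p in visible_posts(posts, threshold=50)])
# ['Hand-written essay', 'AI-assisted summary']
```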

Mods need to be able to set the AI level on content, as they do now with NSFW content.

Communities will also have an AI-gen value, which is automatically applied to all content within them. Instance admins can override this value for local or remote communities.
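
One way that resolution could work, treating the community value (or the admin's override of it) as a floor on each post's own flag. This is just one reading of "automatically applied", and all names are hypothetical:

```python
def effective_ai_value(post_value: int,
                       community_value: int,
                       admin_override: int | None = None) -> int:
    """Return the AI-gen value used when filtering a post.

    Assumption: the community (or admin) value acts as a floor, so
    content in a community flagged 100 is treated as at least 100.
    """
    community = admin_override if admin_override is not None else community_value
    return max(post_value, community)

assert effective_ai_value(0, 100) == 100                    # AI-art community
assert effective_ai_value(60, 0) == 60                      # author's own flag wins
assert effective_ai_value(20, 0, admin_override=80) == 80   # admin override applies
```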

Thoughts?

[–] [email protected] 3 points 3 days ago (3 children)

I would guess None, Some, and All would be sufficient, since people's notions of where on the slider they should land will be highly variable anyway. Users could then filter on None, Some, or All.
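
If the UI went that way, the three buckets could simply map onto thresholds over the same underlying 0-100 integer. The cut-offs below are arbitrary illustrations, not a settled design:

```python
# Hypothetical mapping from the three user-facing buckets to thresholds.
BUCKET_THRESHOLDS = {
    "none": 0,    # hide anything flagged above 0
    "some": 50,   # allow partially AI-assisted content
    "all": 100,   # show everything
}

def passes_filter(ai_value: int, bucket: str) -> bool:
    """True if content with this AI-gen value is visible under the bucket."""
    return ai_value <= BUCKET_THRESHOLDS[bucket]

assert passes_filter(0, "none")
assert not passes_filter(40, "none")
assert passes_filter(40, "some")
assert passes_filter(100, "all")
```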

I guess I haven't noticed much AI content, but maybe it's just my subs -- is it mainly art-related? Or memes or something?

[–] [email protected] 3 points 3 days ago (2 children)

Other social networks are far more overrun with AI.

Facebook recently introduced a requirement to flag all AI content. https://about.fb.com/news/2024/04/metas-approach-to-labeling-ai-generated-content-and-manipulated-media/

[–] [email protected] 1 points 3 days ago (1 children)

I see. I don't really use other social media so I'm out of the loop. What do you think the long run (5-10 years) looks like, as far as social network administration and moderation?

[–] [email protected] 1 points 3 days ago* (last edited 3 days ago)

Hard to say, it's not really my main area of expertise. I just code stuff.

Seems like as things scale, the problem of moderation gets exponentially harder. If the fediverse stays the same size or shrinks, we'll be fine with the tools we have now.

It would be nice if we could build the beginnings of stronger capabilities now, though. Then if an explosion of growth happens, we'll cope a little better. FediThreat and FIRES are interesting.