this post was submitted on 04 Feb 2024
70 points (98.6% liked)
Meta (slrpnk.net)
Unfortunately I saw it as well while scrolling, and reported it. What's the motivation behind posting fucked up shit like that?
I don't know the specifics, but trolling is trolling: it's experimenting with ways of breaking things. Not only do they probably find it funny, but if it isn't handled, it can kill the platform. If they saw Lemmy.World get defederated and shut down, that would make their day.
The point is that we need basic security measures to keep Lemmy functioning. I don't think this is just an issue of moderator response times. We need posts like that to get deleted once 10 people downvote them, and we need limits on how easily new accounts can get into everyone's front page feeds.
It should be based on reports, and limited to users with some form of track record on the platform: account age, having posted some time earlier, having received X upvotes, and similar measures to make sure the reporting account itself is not problematic. A rough sketch of what I mean is below.
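Something like this, as a very rough sketch. The thresholds, the `Account` fields, and `has_track_record` are all invented for illustration, not anything Lemmy actually has:

```rust
// Hypothetical "trusted reporter" check; all thresholds are assumptions.
use std::time::{Duration, SystemTime};

struct Account {
    created: SystemTime,
    posts_published: u32,
    upvotes_received: u32,
}

/// Only count reports from accounts with some track record, so that
/// freshly registered troll accounts can't trigger automated actions.
fn has_track_record(acc: &Account) -> bool {
    let min_age = Duration::from_secs(30 * 24 * 60 * 60); // ~30 days (assumed)
    let old_enough = SystemTime::now()
        .duration_since(acc.created)
        .map(|age| age >= min_age)
        .unwrap_or(false);
    old_enough && acc.posts_published >= 5 && acc.upvotes_received >= 50
}

fn main() {
    let acc = Account {
        created: SystemTime::now() - Duration::from_secs(90 * 24 * 60 * 60),
        posts_published: 12,
        upvotes_received: 80,
    };
    println!("counts as trusted reporter: {}", has_track_record(&acc));
}
```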
Downvotes are a bad measure. They are often just cast by somebody disagreeing with a post, which is not exactly a problem in itself. Also, 10 is really low once something takes off: on c/meme, half the posts have more than 10 downvotes, but nothing there is really all that bad.
The best suggestion I have seen is to have a specific report category for CSAM. If a post is reported for CSAM X times, the post is hidden for moderator review. If it turns out to be a false report, the mod bans the reporting accounts.
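Roughly this mechanism, sketched with invented names (`ReportCategory`, the struct fields, and the value of `REVIEW_THRESHOLD` are placeholders, not Lemmy's real moderation code):

```rust
// Sketch: auto-hide a post once enough distinct accounts report it as CSAM.
use std::collections::HashSet;

#[derive(PartialEq)]
enum ReportCategory { Csam, Other }

struct Post {
    csam_reporters: HashSet<u64>, // account ids, so each account counts once
    hidden_for_review: bool,
}

const REVIEW_THRESHOLD: usize = 3; // assumed value for "X times"

/// Hide the post for moderator review once enough *distinct* accounts
/// have filed a CSAM report against it.
fn handle_report(post: &mut Post, reporter_id: u64, category: ReportCategory) {
    if category != ReportCategory::Csam {
        return; // other report types go through the normal mod queue
    }
    post.csam_reporters.insert(reporter_id);
    if post.csam_reporters.len() >= REVIEW_THRESHOLD {
        // Mods later either remove the post for good, or restore it
        // and ban the accounts behind the false reports.
        post.hidden_for_review = true;
    }
}

fn main() {
    let mut post = Post { csam_reporters: HashSet::new(), hidden_for_review: false };
    handle_report(&mut post, 999, ReportCategory::Other); // ignored
    for reporter in [101, 102, 103] {
        handle_report(&mut post, reporter, ReportCategory::Csam);
    }
    println!("hidden for review: {}", post.hidden_for_review); // true
}
```

Counting distinct account ids rather than raw report events matters here: otherwise a single troll could file X reports alone and hide any post they like.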
Another issue is that post links can be edited after the fact. Trolls will definitely abuse that, e.g. by swapping a benign link for something awful once a post has gained traction.
Ranking algorithms also need to be adjusted so that if a post is hidden like this and then restored, it gets the same exposure it otherwise would have had. Without that, a user-interaction-driven automatic removal will get abused at scale: mass-report anything you want buried while it's fresh.
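One way to get there, assuming ranking works like Lemmy's hot rank (a score that decays with post age; the exact constants below are placeholders, not the real formula): subtract the time a post spent hidden from its effective age, so a wrongly hidden post resumes ranking exactly where it left off.

```rust
// Sketch: restore a post without a ranking penalty by discounting hidden time.
use std::time::Duration;

struct RankablePost {
    score: f64,
    age: Duration,          // time since publication
    hidden_total: Duration, // cumulative time spent hidden for review
}

/// A generic hot rank: score scaled down as the post ages.
/// Exponent and offset are placeholders, not Lemmy's exact values.
fn hot_rank(p: &RankablePost) -> f64 {
    let effective_hours =
        p.age.saturating_sub(p.hidden_total).as_secs_f64() / 3600.0;
    p.score / (effective_hours + 2.0).powf(1.8)
}

fn main() {
    let hidden_then_restored = RankablePost {
        score: 120.0,
        age: Duration::from_secs(6 * 3600),
        hidden_total: Duration::from_secs(2 * 3600), // hidden 2 of its 6 hours
    };
    let never_hidden = RankablePost {
        score: 120.0,
        age: Duration::from_secs(4 * 3600),
        hidden_total: Duration::ZERO,
    };
    // Both rank the same: the 2 hidden hours don't count against the post.
    println!("{:.3} vs {:.3}", hot_rank(&hidden_then_restored), hot_rank(&never_hidden));
}
```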