this post was submitted on 21 Jun 2023
15 points (100.0% liked)

/0: Meta community. Discuss this Lemmy instance or Lemmy in general.

db0
Bot account proliferation is escalating fast. Almost all of the top instances you can see on Fediverse Observer are now bot-filled ones.

I contacted their team to ask them to gather more details so that we could use the Overseer to combat this in advance, but they're uninterested in helping, so no help will come from there. Fairly disheartening, I must say.

I was not planning to get into this sort of API polling, so I'll have to see how feasible it is for me to develop it in the Overseer itself.
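
A very rough sketch of the kind of polling I mean; the endpoint, field names, and thresholds below are made up purely for illustration and are not the actual Fediverse Observer API:

```python
import requests

# Hypothetical endpoint and field names, for illustration only.
OBSERVER_URL = "https://example.invalid/api/instances"

def find_suspicious_instances(min_users=5000, max_active_ratio=0.01):
    """Flag instances whose registered-user count dwarfs their actual activity."""
    instances = requests.get(OBSERVER_URL, timeout=30).json()
    suspicious = []
    for inst in instances:
        total = inst.get("total_users", 0)
        active = inst.get("active_users_monthly", 0)
        if total >= min_users and active / total <= max_active_ratio:
            suspicious.append(inst["domain"])
    return suspicious

for domain in find_suspicious_instances():
    print(f"possible bot farm: {domain}")
```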

top 12 comments
[–] [email protected] 2 points 2 years ago (1 children)

db0, how feasible would it be to have some sort of analysis tool to detect bots regardless of their home instance?

Right now they are proliferating on specific instances, but those are easier to block. More concerning is the use of bots via established instances.

Since we can see the profiles of even remote users, how likely is it that we could detect bot accounts by running some sort of analysis? The benefit here is that it should be able to catch even human-run astroturfing accounts.
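
I'm imagining something as crude as the sketch below, just to show the kind of signal I mean; the field names and thresholds are invented, and whether it actually works is the open question:

```python
from datetime import datetime, timezone

def bot_suspicion_score(profile):
    """Crude heuristic score for a public user profile (higher = more suspicious).
    `profile` is an assumed dict shape, not an actual Lemmy API object."""
    score = 0
    age_days = max((datetime.now(timezone.utc) - profile["created"]).days, 1)
    if age_days < 7:
        score += 2  # brand-new account
    if profile["post_count"] / age_days > 50:
        score += 3  # inhumanly high posting rate
    if not profile.get("bio") and not profile.get("avatar"):
        score += 1  # completely empty profile
    return score

# e.g. a day-old account with 500 posts and an empty profile scores 2 + 3 + 1 = 6
```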

[–] db0 1 points 2 years ago (1 children)

I have no idea how I would be able to achieve this, tbh. If someone tells me the methodology, I could implement it.

[–] [email protected] 2 points 2 years ago

I'm not familiar with locally run text analysis models, but there should be a way to analyze comments and flag them based on certain topics.
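
Just to make that concrete, here's a minimal sketch using the Hugging Face transformers zero-shot pipeline as one possible locally run model; the topic labels and threshold are arbitrary:

```python
from transformers import pipeline  # assumes the `transformers` package and a downloadable model

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

def flag_comment(text, topics=("product promotion", "crypto shilling", "political astroturfing")):
    """Return the candidate topics a comment scores highly on; the 0.8 cut-off is arbitrary."""
    result = classifier(text, candidate_labels=list(topics), multi_label=True)
    return [label for label, score in zip(result["labels"], result["scores"]) if score > 0.8]
```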

This is a far more complicated approach, but I suspect it could ultimately be necessary.

Say you've got a user who's been shilling a certain product regularly. They've been active 24/7 without seeming to sleep, and/or they share the same IP, ISP, or IP geolocation as other users with a similar pattern. You should be able to derive a confidence metric that this user is involved in an astroturfing campaign. (Although you won't, and shouldn't, get IP addresses for remote users.)
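
Purely to illustrate the kind of confidence metric I mean (the signal names and weights are invented, and computing the features is the hard part):

```python
def astroturf_confidence(user_features):
    """Combine weak behavioural signals into a 0-1 confidence score.
    `user_features` is an assumed dict of booleans computed elsewhere; the weights are arbitrary."""
    weights = {
        "active_around_the_clock": 0.3,    # posts spread over all 24 hours, no sleep gap
        "single_product_focus": 0.4,       # comments cluster around one product or topic
        "shares_network_with_peers": 0.3,  # same ISP/geolocation as similar accounts (local users only)
    }
    return sum(w for name, w in weights.items() if user_features.get(name))

# A user matching all three signals scores 1.0; matching only one scores 0.3 or 0.4.
```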

Slightly off-topic, but a funny thought: I wonder if there's a way to engage with these bots, ask them something an AI would respond to in a flawed manner, and detect them based on their replies or lack thereof.

Edit: I just remembered reading that even though they aren't displayed in the UI, upvotes are public because they correspond to favorites on Mastodon. It should be much easier to find correlations between certain accounts consistently upvoting or downvoting the same posts.
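
For example, something along these lines could surface voting rings, assuming we had the vote data in hand (the input shape and thresholds are made up):

```python
from itertools import combinations

def voting_rings(votes, min_shared=20, min_jaccard=0.8):
    """Find account pairs that vote on almost exactly the same posts.
    `votes` maps account -> set of post ids the account upvoted (assumed input shape)."""
    pairs = []
    for a, b in combinations(votes, 2):
        shared = votes[a] & votes[b]
        union = votes[a] | votes[b]
        if len(shared) >= min_shared and len(shared) / len(union) >= min_jaccard:
            pairs.append((a, b, len(shared)))
    return pairs
```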

[–] Stanley_Pain 2 points 2 years ago (2 children)

As a side note to what's going on with bots: I'm noticing a LOT of communities being opened by the same person, and the content within feels AI-generated (usually images). Anyone else noticing this, or am I too deep in tinfoil hat territory?

[–] Flatworm7591 2 points 2 years ago (1 children)

I've noticed this happening on Facebook too; my feed is suddenly full of generated art. It's super annoying, and none of it is tagged as AI-generated, but it obviously is. A lot of stupid boomers seem to think it's real, judging by the comments, though I suppose the comments could be bot accounts too. If social media platforms don't build tools to counter AI-generated content, we are all going to drown in it before long. I don't mind it if it's tagged as such and kept in its own community, I suppose. Short of some sort of whitelist system, I have no idea how they are going to stop it; I mean, moderators simply won't be able to keep up with the volume of AI-generated posts at some point.

[–] Stanley_Pain 0 points 2 years ago

I think we're in for a bit of a flood with this stuff. I had to unfollow and subsequently block a bunch of NSFW instances because it was just awful AI image spam.

[–] db0 1 points 2 years ago (1 children)

Which person, which communities?

[–] Stanley_Pain 0 points 2 years ago (2 children)

Not on this instance. Was more of a general comment.

But as an example: the user AkhuyanATlemmy.world

[–] db0 2 points 2 years ago (1 children)

Ah, you mean elsewhere? Yes, it's very possible spammers are already "building up" accounts for later.

[–] Stanley_Pain 1 points 2 years ago

That sucks, but I guess it's inevitable.

[–] ryven 2 points 2 years ago (1 children)

Unless I'm the one misunderstanding, they aren't opening a bunch of communities; they're a mod of [email protected], where they make posts alerting users that new communities have been created. Their posts look weird because they're using a template that grabs the post title from the community description, so if it says something strange, then their post will too. But those aren't their communities.

[–] Stanley_Pain 1 points 2 years ago

I thought they were creating those communities as well.