Flippanarchy
Flippant Anarchism. A lighter take on social criticism with the aim of agitation.
Post humorous takes on capitalism and the states which prop it up. Memes, shitposting, screenshots of funny takes, discussions making fun of some reactionary online: it all works.
This community is anarchist-flavored. Reactionary takes won't be tolerated.
Don't take yourselves too seriously. Serious posts go to [email protected]
Rules
- If you post images with text, endeavour to provide the alt-text.
- If the image is a crosspost from an OP, provide the source.
- Absolutely no right-wing jokes. This includes "Anarcho"-Capitalist concepts.
- Absolutely no redfash jokes. This includes anything that props up the capitalist ruling classes pretending to be communists.
- No bigotry whatsoever. See instance rules.
- This is an anarchist comm. You don't have to be an anarchist to post, but you should at least understand what anarchism actually is. We're not here to educate you.
- No shaming people for being anti-electoral. This should be obvious from the above point, but apparently we need to make it obvious to the turbolibs who can't control themselves. You have the rest of lemmy to moralize.
Join the Matrix room for some real-time discussion.
I never use these LLMs because I have a brain, and I'm not artistically inclined enough to use them for audiovisual creation, but today I thought 'why not?' and gave it a try. So I asked ChatGPT for 80-word biographies of the main characters of LOGH, and besides being vague, it made pretty big mistakes in pretty much every summary and went fully off the rails after the 4th character... It's not even debatable information (fiction books plus an anime, no conflicting narratives here), and it's all easily available online. I can't even imagine relying on it for anything more serious than summing up biographies for anime characters, lol, because it couldn't even get that right!
Asking an LLM something is the equivalent of asking strangers on the internet while also allowing non-serious answers.
That's because that's what LLMs are trained on: random comments from people on the internet, including troll posts and jokes, which the LLM takes as factual most of the time.
Remember when Google trained their AI on Reddit comments and it put out incredibly stupid answers, like adding glue to your pizza sauce to keep the cheese from sliding off?
https://www.reddit.com/r/LinusTechTips/comments/1czj9rx/google_ai_gives_answers_they_find_on_reddit_with/
Or that one time it suggested that people should eat a small rock every day because it was fed an Onion article?
The old saying "garbage in, garbage out" fits LLMs extremely well. Considering the amount of data being fed to these models, it's almost impossible to sanitize it all, and LLMs are nowhere close to being able to discern jokes, trolling, or sarcasm.
Oh yea, it also came out that some researchers used LLMs to post Reddit comments for an experiment. So yea, LLMs are being fed other LLM content too. It's pretty much a human-centipede situation.
But yea, I wouldn't trust these models with anything but the simplest of tasks, and even there I'd be pretty circumspect about what they give me.
Do you subscribe to the idea that LLMs will degrade over time after recycling their own shit for several years, like a GIF/JPEG re-encoded for the umpteenth time?
Honestly? Yea. The training data matters; that's why all these AI companies are looking for data generated by humans. Feeding them LLM output would most likely end up in nonsensical stuff pretty fast.
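For what it's worth, that degradation idea (researchers call it "model collapse") has a simple statistical core you can poke at without any actual LLM. Here's a toy sketch in Python: each "generation" of a model is fit only on samples drawn from the previous generation's output. The vocabulary size, sample count, and Zipf-shaped starting frequencies are made-up illustration numbers, nothing from real training runs.

```python
import numpy as np

rng = np.random.default_rng(42)

# "Human" data: 1000 token types with a long-tailed, Zipf-like frequency
# distribution (a crude stand-in for natural language).
vocab = 1000
probs = 1.0 / np.arange(1, vocab + 1)
probs /= probs.sum()

for generation in range(10):
    # Each new "model" is trained only on 5000 samples drawn from the
    # previous generation's output distribution...
    sample = rng.choice(vocab, size=5000, p=probs)
    counts = np.bincount(sample, minlength=vocab)
    # ...and its own output distribution is just those empirical frequencies.
    probs = counts / counts.sum()
    surviving = int((probs > 0).sum())
    print(f"generation {generation}: {surviving} of {vocab} token types survive")
```

The count of surviving token types drops every generation and never recovers: once a rare token draws zero samples, its probability is zero forever, the same way detail lost in a re-encoded JPEG never comes back. The argument is that real model collapse eats the tails of the distribution the same way, which is part of why these companies are so hungry for fresh human data.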