Ah, yes, the IQ benefits of letting machines think for you and blindly trusting what they spit out, while you atrophy your own critical thinking skills and lose the ability to have independent critical thought or produce any kind of useful analysis without said machines, so you develop a need to consume gallons of water and burn square acres of forest every time you need to do any kind of analysis or write anything of substance... yeah, those IQ benefits. Sure.
Flippanarchy
Flippant Anarchism. A lighter take on social criticism with the aim of agitation.
Post humorous takes on capitalism and the states which prop it up. Memes, shitposting, screenshots of humorous good takes, discussions making fun of some reactionary online, it all works.
This community is anarchist-flavored. Reactionary takes won't be tolerated.
Don't take yourselves too seriously. Serious posts go to [email protected]
Rules
- If you post images with text, endeavour to provide the alt-text.
- If the image is a crosspost from an OP, provide the source.
- Absolutely no right-wing jokes. This includes "Anarcho"-Capitalist concepts.
- Absolutely no redfash jokes. This includes anything that props up the capitalist ruling classes pretending to be communists.
- No bigotry whatsoever. See instance rules.
- This is an anarchist comm. You don't have to be an anarchist to post, but you should at least understand what anarchism actually is. We're not here to educate you.
- No shaming people for being anti-electoralism. This should be obvious from the above point, but apparently we need to make it obvious to the turbolibs who can't control themselves. You have the rest of lemmy to moralize.
Join the matrix room for some real-time discussion.
AI summaries suck ass. Google rolled them out for group texts and the first time I realized this was when I got a bunch of group texts from my family saying my grandma was in the hospital and my sister was going to go visit her. The AI summary said my sister was in the hospital and my mom was going to visit her. No mention of my grandma at all. I immediately turned off these summaries because they were worse than not having anything.
Every single time I have tried to extract information from them in a field I know stuff about it has been wrong.
When the Australian government tried to use them for making summaries, the AI summary was worse than the human one in every single case, and in many cases it was actively destructive.
Play around with your own local models if you like, but whatever you do, DO NOT TRY TO LEARN FROM THEM; they have no consideration for truth. You will actively damage your understanding of the world and your ability to reason.
Sorry, no shortcuts to wisdom.
The amount of gratuitous hallucinations that AI produces is nuts. It takes me more time to refactor the stuff it produces than to just build it correctly in the first place.
At the same time, I have reason to believe that AI’s hallucinations arise out of how it’s been shackled - AI medical imaging diagnostics produce almost no hallucinations because AI is not shackled to produce an answer - but still. It’s simply not reliable, and the Ouroboros Effect is starting to accelerate…
It's not "shackled"; they are completely different technologies.
Imaging diagnosis assistance is something like computer vision -> feature extraction -> some sort of classifier.
Don't be tricked by the magical marketing term "AI". That's like assuming a tic-tac-toe algorithm is the same thing as a spam filter because they're both "AI".
Also medical imaging stuff makes heaps of errors or extracts insane features like the style of machine used to image. They're getting better but image analysis is a relatively tractable problem.
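To make the "computer vision -> feature extraction -> classifier" point concrete, here's a toy sketch of that kind of pipeline. Everything in it is invented for illustration (the features, the threshold, the labels); a real diagnostic system uses trained models, not hand-picked cutoffs:

```python
# Toy sketch: feature extraction + a stand-in classifier.
# Image = 2D grid of grayscale values 0-255 (list of lists).

def extract_features(image):
    """Reduce an image to a couple of summary numbers (the 'features')."""
    pixels = [p for row in image for p in row]
    mean_intensity = sum(pixels) / len(pixels)
    bright_fraction = sum(p > 200 for p in pixels) / len(pixels)
    return mean_intensity, bright_fraction

def classify(features, bright_cutoff=0.25):
    """Stand-in 'classifier': flag images with many bright pixels."""
    _, bright_fraction = features
    return "flag for review" if bright_fraction > bright_cutoff else "no finding"

scan = [[10, 30, 250], [240, 220, 15], [20, 25, 230]]
print(classify(extract_features(scan)))  # prints "flag for review"
```

The point of the sketch: each stage is a separate, inspectable function, which is exactly why this kind of system behaves nothing like a text-generating LLM.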
no shortcuts to wisdom
Then what are drugs? Have you never tried (currently fashionable psychedelic)?
Even drugs aren’t actually a shortcut, they can just put you in a position to receive the information better. Some people trip and do a lot of introspective work, and others just zonk out and let themselves get distracted.
In my opinion, it’s how you use it, not what you use, that matters.
Shortcuts still require walking, just less of it. It's not a teleporter.
Though i agree with the rest.
Agreed, that’s a good way of saying what I was struggling to articulate.
This feels strange and i don't like it. Please disagree with me so we can fight.
Okie dokie: drugs are bad m’kay, and they’ve clearly addled your mind. Jesus Christ says put the joint down, you commie. Any rebuttal is automatically going to be discounted because it is clearly the product of your neurotic and addicted brain. Leave the big boy spirituality to us sober heads who will tell you exactly what to think and who to vote for, just like God intended. /s
(Thank you)
Yeah, because letting some bronze age shithead do all the drugs for you, then jacking off about his fevered hallucinations for a few thousand years as they turn from attempts at understanding into a fiction you violently impose upon the world, is so much better.
What I’m getting from this exchange is that people on the left have ethical concerns about plagiarism, and don’t trust half-baked technology. They also value quality over quantity.
I’m okay with being pigeonholed in this way. Drink all the coffee you want, dude.
For me it's mostly the half baked technology angle. I've been in the tech industry for almost twenty years now and I've seen many, many cycles of hype. This one is less obviously dumb than NFTs but it still has all the same hallmarks.
Maybe it'll be useful at some point for summary or synthesis but for now it's just a neat toy.
people on the left have ethical concerns about plagiarism, and don’t trust half-baked technology. They also value quality over quantity.
This is an answer that resonates with me because it feels so correct.
Also, you know you can read a book in a coffee shop, right?
It’s the best of both worlds!
The use of LLM had a measurable impact on participants, and while the benefits were initially apparent, as we demonstrated over the course of 4 months, the LLM group's participants performed worse than their counterparts in the Brain-only group at all levels: neural, linguistic, scoring.
I equate it with doing those old formulas by hand in math class. If you don’t know what the formula does or how to use it, how do you expect to recall the right tool for the job?
Or in DND speak, it’s like trying to shoehorn intelligence into a wisdom roll.
That would be fine if an LLM were a precise tool like a calculator. My calculator doesn’t pretend to know answers to questions it doesn’t understand.
Mine just lies to me.
And tells me to kill.
Leaks weird fluids. Looks and feels like blood, but smells like lavender honey, possessed of a taste like unexpectedly cutting yourself on broken glass as you escape parental discipline to meet a lover.
Hasn't screamed in a while, though. So that's nice. I guess if i keep it satisfied, i have to explain a lot less to my neighbors.
the irony is that LLMs are basically just calculators: horrendously complex calculators that operate purely on statistics.
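In that spirit, here's a toy illustration of "purely statistics": a bigram model that picks the next word by counting which word most often followed the current one in its tiny training text. The corpus is made up, and real LLMs are vastly more sophisticated neural networks over tokens, but the "counting, not understanding" core is the same idea:

```python
from collections import Counter, defaultdict

# Toy bigram "language model": pure counting, no understanding.
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def next_word(word):
    """Return the statistically most common follower of `word`, or None."""
    counts = follows[word]
    return counts.most_common(1)[0][0] if counts else None

print(next_word("the"))  # prints "cat": it follows "the" most often here
```

Note that `next_word` will happily produce fluent-looking output with zero notion of whether it's true, which is the whole problem being discussed above.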
science
Get out. /s
Please I beg people: think for yourselves.
I genuinely don't know how to help people who don't want to think for themselves 😖
Real. I mean I don't wanna think for myself but in a kinky way! not in a tech bro way.
You've never been to sf, have you?
"I thought when I told the AI what to think but smarter, I did myandatory thinking now its up to the robot!"
I can drink coffee while reading.
Who tf takes 58 minutes to drink that anyway? Would it not be cold?
He's sucking on the handle because some RAM told him that's best.
They didn't specify the amount of coffee. Maybe it's A LOT.
For real. 58 minutes is enough time for three coffees.
IMO it’s going to make a bunch of mushmind people if not used correctly (when are things used correctly these days?), and I also think AI needs to go back in the box until it actually works properly.
This is not AI, and if you think it is, you're not.
the only thing i've seen it do that is actually helpful is duckduckgo's summary thing, because it has to actually pull the text from a whitelist of sources and thus is very unlikely to just make things up
but even then i'd only use it for pretty simple things like "what's the total population of these cities", so that i can then click the sources it lists and check that everything seems sensible, trusting the answer without at least a quick sanity check is insane
It's been programmed to do what it does. imo that's the bare minimum of working properly - a program doing what you want it to do (from a dev standpoint)
"IQ benefits"? Lmao what fuckin nonsense. This shit aint making anyone smarter, if anything its robbing you of your ability to think critically.
It's garbage software with zero practical use. Whatever you're using AI for, just learn it yourself. You'll be better off.
"And then I drink coffee for 58 minutes" instead of reading a book, like that's a brag - just read a fuckin book, goddamn.
It's garbage software with zero practical use.
AI is responsible for a lot of slop but it is wrong to say it has no use. I helped my wife with a VBScript macro for Excel. There was no way I was going to learn VBScript. Chatgpt spit out a somewhat working script in minutes that needed 15 minutes of tweaking. The alternative would have been weeks of work learning a proprietary Microsoft language. That's a waste of time.
I never use these LLMs cause I have a brain and I'm not artistically inclined to use it for audiovisual creation, but today I thought 'why not?' and gave it a try. So I asked ChatGPT to provide me with 80 word biographies of the main characters of LOGH and, besides being vague, it made pretty big mistakes on pretty much every summary and went fully off the rails after the 4th character... It's not even debatable information (fiction books plus anime, no conflicting narratives here) and it's all easily available online. I can't even imagine relying on it for anything more serious than summing up biographies for anime characters, lol, cause even that it couldn't do right!
Asking an LLM something is the equivalent of asking strangers on the internet and allowing non-serious answers too.
That's because that's what LLMs are trained on: random comments from people on the internet, including troll posts and jokes, which the LLM takes as factual most of the time.
Remember when Google trained their AI on reddit comments and it put out incredibly stupid answers like mixing glue in your cheese sauce to make it thicker?
Or that one time it suggested that people should eat a small rock every day because it was fed an Onion article?
The old saying "garbage in, garbage out" fits LLMs extremely well. Considering the amount of data being fed to these LLMs, it's almost impossible to sanitize it all, and the LLMs are nowhere close to being able to discern jokes, trolling, or sarcasm.
Oh yea, also it came out that some researchers used LLMs to post reddit comments for an experiment. So yea, the LLMs are being fed other LLM content too. It's pretty much a human-centipede situation.
But yea, I wouldn't trust these models for anything but the simplest of tasks, and even there I'd be pretty circumspect about what they give me.
So this guy thinks books are typically read in 2 hours...
58 minutes of drinking coffee.
That's somewhere around 100 to 400 milligrams of caffeine, depending on your brew and how much you drink.
That's about 35 mg to 145-ish mg of caffeine still in your system after 6 hours.
400 mg of caffeine in a day is the generally agreed-upon danger limit.
So yeah, this dude is trading having a functioning brain and useful skills for... potentially OD'ing on caffeine, hypertension, diarrhea, addiction, etc.
Brilliant.
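Those residual numbers line up with simple exponential decay if you assume a caffeine half-life of roughly 4 hours (it actually varies a lot between people; 3 to 7 hours is commonly cited, so treat the half-life below as an assumption):

```python
def caffeine_remaining(dose_mg, hours, half_life_hours=4.0):
    """Exponential decay: remaining = dose * 0.5 ** (elapsed / half-life)."""
    return dose_mg * 0.5 ** (hours / half_life_hours)

print(round(caffeine_remaining(100, 6)))  # prints 35 (mg left after 6 h)
print(round(caffeine_remaining(400, 6)))  # prints 141
```

So 100 mg leaves about 35 mg after 6 hours and 400 mg leaves about 141 mg, matching the "35 mg to 145-ish mg" range above.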
It's okay, his "agent AI" told him it was good for him and that he was brilliant for maximizing his body's fuel intake, or some shit.
... does anyone have a meme for:
'my body is a machine that turns coffee into projectile diarrhea and heart arrhythmia'
?
They both make stupid arguments. Who would replace reading a book with an AI? If I want information in a shorter format, I would not be looking for books in the first place (unless I need to reference pages/chapters, but then I won't be reading the whole thing anyway).