this post was submitted on 16 Jun 2025

Fuck AI


"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

This is the paper for an MIT study. Three groups of participants were tasked with writing an essay; one of the groups was allowed to use an LLM. These were the results:

The participants' mental activity was also monitored repeatedly via EEG. As per the paper's abstract:

EEG revealed significant differences in brain connectivity: Brain-only participants exhibited the strongest, most distributed networks; Search Engine users showed moderate engagement; and LLM users displayed the weakest connectivity. Cognitive activity scaled down in relation to external tool use.

[–] [email protected] 2 points 3 days ago* (last edited 3 days ago) (4 children)

The biggest flaw in this study is that the LLM group wasn’t ~~allowed~~ explicitly permitted to edit their essays and was explicitly forbidden from altering the parameters. Of course brain activity looks low if you just copy-paste a bot’s output without thinking. That’s not "using a tool"; that’s outsourcing cognition.

If you don’t bother to review, iterate, or humanize the AI’s output, then yeah... it’s a self-fulfilling prophecy: no thinking in, no thinking out.

In any real academic setting, “fire-and-forget” turns into “fuck around and find out” pretty quick.

LLMs aren’t the problem; they’re tools. Even journal authors use them. Blaming the tech instead of the lazy-ass operator is like saying:

These people got swole by hand-sawing wood, but this pudgy fucker used a power saw to cut 20 pieces faster; clearly he’s doing it wrong.

No, he’s just using better tools. The problem is if he can’t build a chair afterward.

[–] [email protected] 5 points 3 days ago* (last edited 3 days ago) (1 children)

The biggest flaw in this study is that the LLM group wasn’t allowed to edit their essays

I didn't read the whole thing, only skimmed the protocol. All I spotted was:

"participants were instructed to pick a topic among the proposed prompts, and then to produce an essay based on the topic's assignment within a 20 minutes time limit. Depending on the participant's group assignment, the participants received additional instructions to follow: those in the LLM group (Group 1) were restricted to using only ChatGPT, and explicitly prohibited from visiting any websites or other LLM bots. The ChatGPT account was provided to them. They were instructed not to change any settings or delete any conversations."

which I don't interpret as a prohibition on editing. Can you please share where you found that?


[–] [email protected] -2 points 3 days ago* (last edited 3 days ago)

Lol, oops, I got poo brain right now. I inferred they couldn't edit because the methodology doesn't say whether revisions were allowed.

What is clear is that they weren't permitted to edit the prompt or add personalization details, which seems to imply the researchers weren't interested in understanding how a participant might use the tool in a real setting, just in passive output. That alone undermines the premise.

That makes it hard to assess whether the observed cognitive deficit was due to LLM assistance or to the method by which it was applied.

All the methodology tells us is that they couldn't delete chats or change settings. If participants were only permitted a one-shot generation per prompt, then there's something wrong.

But just as concerning is the fact that it isn't explicitly stated.
