fullsquare

joined 2 months ago
[–] [email protected] 9 points 1 week ago* (last edited 1 week ago) (1 children)

maybe it's because chatbots incorporate, accidentally or not, the elements that make gambling addiction work on humans https://pivot-to-ai.com/2025/06/05/generative-ai-runs-on-gambling-addiction-just-one-more-prompt-bro/

the gist:

There’s a book on this — Hooked: How to Build Habit-Forming Products by Nir Eyal, from 2014. This is the how-to on getting people addicted to your mobile app. [Amazon UK, Amazon US]

Here’s Eyal’s “Hook Model”:

First, the trigger is what gets you in. e.g., you see a chatbot prompt and it suggests you type in a question.

Second is the action — e.g., you do ask the bot a question.

Third is the reward — and it’s got to be a variable reward. Sometimes the chatbot comes up with a mediocre answer — but sometimes you love the answer! Eyal says: “Feedback loops are all around us, but predictable ones don’t create desire.” Intermittent rewards are the key tool to create an addiction.

Fourth is the investment — the user puts time, effort, or money into the process to get a better result next time. Skin in the game gives the user a sunk cost they’ve put in.

Then the user loops back to the beginning. The user will be more likely to follow an external trigger — or they’ll come to your site themselves looking for the dopamine rush from that variable reward.

Eyal said he wrote Hooked to promote healthy habits, not addiction — but from the outside, you’ll be hard pressed to tell the difference. Because the model is, literally, how to design a poker machine. Keep the lab rats pulling the lever.
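to make the loop concrete, here's a toy sketch of it in python. the 0.3 hit rate, the bookkeeping, and the stop-after-ten-hits rule are all numbers i made up for illustration, not a model of real users:

```python
import random

# toy sketch of eyal's four-step hook loop; every number here is an
# arbitrary illustration value, not a claim about real user behaviour
HIT_RATE = 0.3  # hypothetical chance an answer feels great

def hook_cycle(state):
    # 1. trigger: a notification or the user's own itch brings them back
    # 2. action: they type a prompt
    state["prompts"] += 1
    # 3. variable reward: mediocre most of the time, great sometimes;
    #    the unpredictability is the point
    rewarded = random.random() < HIT_RATE
    if rewarded:
        state["hits"] += 1
    # 4. investment: time spent is sunk cost feeding the next trigger
    state["sunk_cost"] += 1
    return rewarded

state = {"prompts": 0, "hits": 0, "sunk_cost": 0}
while state["hits"] < 10:  # "just one more prompt"
    hook_cycle(state)
print(f"{state['prompts']} prompts for {state['hits']} good answers")
```

the reward being redrawn fresh every cycle is a variable-ratio schedule, the same one slot machines run on, which is the whole trick.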

chatbot users are also attracted to their terminally sycophantic and agreeable responses, some users form parasocial relationships with motherfucking spicy autocomplete, and chatbots were marketed to management types as a kind of futuristic status symbol: use it or you'll fall behind, and then you'll all see. people get a mix of gambling addiction, fomo, parasocial relationship, and being dupes of a multibillion dollar advertising scheme, and that's why they get so unserious about their chatbot use

and also, separately, the cores of openai and anthropic (and probably some other companies) are made of cultists who want to build a machine god, but that's an entirely different rabbit hole

like with any other bubble, the money for it won't last forever. most recently disney sued midjourney for copyright infringement, and if that sets legal precedent, it might wipe out all of these drivel-making machines for good

[–] [email protected] 9 points 1 week ago

iirc L-amino acids and D-sugars, i.e. the ones observed in nature, are very slightly more stable than their mirror images because of the weak interaction

probably it's just down to a specific piece of quartz or soot that got lucky, and chiral amplification takes you from there
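for a sense of how little head start amplification needs, here's a minimal python sketch of frank's 1953 autocatalysis-with-mutual-antagonism model, the textbook toy for this; the rate constants and the ~0.05% starting excess are arbitrary values picked for illustration:

```python
# frank (1953) model: each enantiomer catalyses its own formation
# and the two destroy each other on meeting
#   dL/dt = k*L - g*L*D
#   dD/dt = k*D - g*L*D
# rate constants and initial excess are arbitrary illustration values
k, g, dt = 1.0, 1.0, 1e-3
L, D = 1.001, 1.000  # ~0.05% enantiomeric excess to start

for step in range(20001):
    dL = (k * L - g * L * D) * dt
    dD = (k * D - g * L * D) * dt
    L, D = L + dL, D + dD
    if step % 4000 == 0:
        ee = (L - D) / (L + D)  # enantiomeric excess
        print(f"t={step * dt:4.0f}  ee={ee:.4f}")
```

any consistent tiny bias, from the weak interaction or one lucky quartz crystal, is enough of a seed once the autocatalysis takes over.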

also it's not physics, or more precisely it's a very physics-y subbranch of chemistry, and it's done by chemists because physicists suck at doing chemistry for some reason (i've seen it firsthand)

[–] [email protected] 17 points 1 week ago

gg is ancient, and also requires so few resources that there's a barebones client released as an openwrt package

[–] [email protected] 10 points 1 week ago

that watermark makes it look a bit like an album cover

[–] [email protected] 13 points 1 week ago

internet funeral material

[–] [email protected] 9 points 1 week ago* (last edited 1 week ago)

i mean, while there's a lot of pretend-work (promptfondlers) and pretending that chatbots work (managers pushing for chatbot use, all of openai and their vcs), the people who do that are, to my understanding, on the highly paid side. until the bubble collapses, that is

[–] [email protected] 8 points 1 week ago* (last edited 1 week ago)

> Assuming this is the case, I wonder if it’s possible to weaponize it by identifying tokens with low overall reference counts that could be expanded with minimal investment of time. Sort of like Google bombing.

bet: https://en.wikipedia.org/wiki/Pravda_network. their approach seems less directional; it was initially supposed to be doing something else (targeting human brains directly) and might have turned out to be a happy accident of sorts for them, but they also ramped up activities around the end of 2022

[–] [email protected] 10 points 1 week ago

sounds suspiciously like something a rabbit would say

[–] [email protected] 23 points 1 week ago* (last edited 1 week ago) (4 children)

millions of promptfondlers stopping in their tracks because some random dc burned down, lol, lmao even

i wonder what the state of openai infra is. from what i gleaned from ed zitron, it might not be great, because it overheats and a lot of the budget already goes to hardware replacement

> “we pretend to work and they pretend to pay us.”

i've seen it in contexts where the pay is shit so the pace of work is shit as well, but i think lots of corporate promptfondlers are rather high in the pecking order? it's management that seems charmed by spicy autocomplete, maybe not the interns
