this post was submitted on 29 Jan 2025
44 points (95.8% liked)

doomer

968 readers

What is Doomer? :(

It is a nebulous thing that may include but is not limited to Climate Change posts or Collapse posts.

Include sources when applicable for doomer posts, and consider checking out [email protected] once in a while.

founded 3 years ago

Seems like researchers are getting uncomfortably good at mimicking consciousness, so much so that it’s beginning to make me question the deepest parts of my own brain. Perhaps the difference between AI and myself is that I am only prompted by outward stimuli? It seems as though that is what makes the intelligence “artificial”.

Christ, even as I type this, my thought processes mimic, for example, DeepSeek’s exposed deep-think capabilities. Fuck, idk if I’ll be able to unsee it. Seems like the only thing we have on it right now is emotion, and even that seems to be in danger.

My final takeaway from all of this is that we are in hell

all 29 comments
[–] [email protected] 39 points 5 months ago (1 children)
[–] [email protected] 10 points 5 months ago (1 children)

Literally plugged into a computer for 8 hours a day for work, jackass

[–] Canonical_Warlock 1 points 5 months ago

Literally plugged into a computer

Are you a Neuralink test subject or something?

[–] [email protected] 33 points 5 months ago

You only think it looks similar because the computer can put it into words that you relate to. Humans figure stuff out with words and language; it's how our brains process things. So when you see something explain its processes in natural-sounding language, it's going to trick you into thinking it's conscious the same way you are.

[–] [email protected] 32 points 5 months ago (1 children)

The brain came first, and people thought to model a program based on a very simplistic view of how it works. Your brain is infinitely more complex than any llm, don't let it convince you that it is on your level. You are not an llm; DeepSeek is a pale imitation of you. You are a human. For better and worse.

[–] [email protected] 8 points 5 months ago (3 children)

Well, my fears are pretty much based on what other people think/believe (mainly people in authority), because they’ll be the ones to decide when to use these things to replace humans.

The widespread adoption by the public doesn’t make me hopeful either. I can certainly envision a future where Musk partners his Neuralink with OpenAI to create the most effective wage slave in history.

[–] [email protected] 15 points 5 months ago

the most effective wage slave in history

Look at Tesla and SpaceX. Consider how the mechanical systems in each of those domains are incredibly simple compared to the complexity of the brain and emergent phenomena like consciousness.

Its implementation would be so riddled with unmaintainable and unportable shit that there's no way they'd reach this ideal, let alone graduate above medicalized torture into productive medicalized torture

[–] [email protected] 12 points 5 months ago

I think, like a lot of current 'market disruptors', AI firms are generally running at a loss. A lot of them will go out of business within the next few years / the next big economic downturn. It sucks right now, but I think things will get better on this front.

[–] [email protected] 11 points 5 months ago

We're a long, long way away from anything like that. Those technologies are nascent and almost entirely marketing hype to inflate stonks prices.

At most, combining those two technologies would be useful for customer service type jobs (think AI-augmented customer appeasement scripts) but people require too much maintenance, which is expensive — capital would sooner get rid of them altogether.

[–] [email protected] 17 points 5 months ago

Seems like researchers are getting uncomfortably good at mimicking consciousness

It's not even mimicry. It's not even close.

[–] [email protected] 15 points 5 months ago

free will doesn't exist but my suffering does, riddle me that!

[–] [email protected] 15 points 5 months ago

I prescribe logging off and hanging out with human beings.

You don't need to be fucking around with guessing machines just because they're Chinese now. They're fucking pointless for most people; just go be a human.

[–] [email protected] 14 points 5 months ago

Maybe it would help you to read Emily M. Bender's Thought Experiment in the National Library of Thailand. Basically, LLMs only deal with symbols, you deal with both symbols and their referents. You are not the same.

[–] [email protected] 12 points 5 months ago (1 children)

""""""""""AI"""""""""" is not in any conceivable way actually intelligent. "Generative" AI is a just a very elaborate and extremely, extremely bloated reverse compression algorithm.

Think of how an image can be stored as a jpeg, which "compresses" the image's information using various mathematical tricks, but in doing so causes the result to lose information. An approximation of the original image can be created from the compressed data, because the jpeg algorithm can extend from the stored data to make a 'guess' at the data lost in compression (actually a mathematical process which, given the same input, has the same results each time). If the image undergoes too much compression, it results in the classic blocky jpeg artifacts look, because too much information has been lost.
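
(If it helps make the lossy part concrete, here's a rough toy sketch in Python. It is not actual jpeg, there's no DCT or entropy coding, and the pixel values are made up; it just shows how quantizing throws information away so decompression can only ever guess:)

```python
# Toy lossy "compression": quantize 0-255 pixel values down to coarse levels.
# Not real JPEG (no DCT, no entropy coding), just the information-loss idea.
pixels = [3, 14, 159, 26, 53, 58, 97, 93, 238, 46]   # made-up sample values

step = 32  # bigger step = fewer levels = smaller data = more loss

compressed = [p // step for p in pixels]                     # keep only the coarse level
reconstructed = [c * step + step // 2 for c in compressed]   # deterministic "guess" at the original

print(compressed)     # [0, 0, 4, 0, 1, 1, 3, 2, 7, 1]
print(reconstructed)  # [16, 16, 144, 16, 48, 48, 112, 80, 240, 48]
# The fine detail is gone for good; the same input always reconstructs the same approximation.
```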

But in theory, if you examined a bunch of compressed data, you could design an algorithm that could create "new" images that blended different pieces of compressed data together. The algorithm is still purely deterministic; there is no thinking or creativity going on, it's just reassembling compressed data along strictly mathematically driven lines. If you then changed how much value ("weight") it puts on different pieces of compressed data, and added a way to "prompt" it to fine-tune those values on the fly, you could design an algorithm that seemingly responds to your inputs in some sort of organic fashion but is actually just retrieving data in totally mathematically specified ways. Add in billions and billions of pieces of stolen text to render down into compressed slop, store and run the "weights" of the algorithm on some of the largest data centers ever built using billions of watts of power, run it again and again while tuning its weights by trial and error until it spits out something resembling human writing, and you have an LLM.

It seems to "generate" sentences by just spitting out a sequence of the mathematically most probable words and phrases in order, the way a jpeg algorithm assembles blocks of color. Nothing inside it is "creating" anything new; it's just serving up chopped-up pieces of data, which were scraped from the real world in the first place. And any gaps in actual data caused by the compression, it papers over with a glue of most likely approximations, which is how you get those classic totally absurd non-sequitur answers that are frequently posted, or how you end up with a family getting poisoned after relying on an AI-generated mushroom identification guide.
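
(And the "most probable next word" bit really is that mechanical. Toy sketch below; the probability table is completely invented, a real LLM just learns a vastly bigger version of it from scraped text:)

```python
# Toy "most probable next word" generation. The probabilities are invented;
# a real LLM learns something like this (astronomically bigger) from its training text.
next_word_probs = {
    "the": {"cat": 0.5, "dog": 0.3, "<end>": 0.2},
    "cat": {"sat": 0.6, "ran": 0.3, "the": 0.1},
    "dog": {"ran": 0.7, "sat": 0.2, "the": 0.1},
    "sat": {"<end>": 0.6, "the": 0.4},
    "ran": {"<end>": 0.5, "the": 0.5},
}

word, sentence = "the", ["the"]
while word != "<end>":
    # greedily pick the single most probable continuation, nothing else
    word = max(next_word_probs[word], key=next_word_probs[word].get)
    if word != "<end>":
        sentence.append(word)

print(" ".join(sentence))  # "the cat sat"
```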

To be absolutely clear, this is not even in the category of objects that could exhibit sentience. A metal thermostat connected to a heater, which maintains the temperature of a room by expanding and shrinking enough to close or open an electrical circuit, is closer to an intelligent lifeform than this. Any single-celled organism is infinitely closer to sentience than this. Just because it can drool out a slurry of text your brain perceives as human, remember: 1. It's decompressing an amalgamation of billions of real humans' written text and 2. Your brain perceives this -> :) <- as a face. It's really easy to trick people into thinking inanimate things are sentient; humans invented gods to explain the seasons and the weather, after all.

[–] [email protected] 2 points 5 months ago

The term "generative" is actually just a technical term to do with statistical models.
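
(Right; in that statistics sense, a "generative model" just means something that models the data's distribution so you can sample new points from it. Minimal made-up example, fitting a normal distribution and drawing from it:)

```python
# "Generative" in the statistics sense: model the data's distribution, then sample from it.
# Toy example with made-up numbers: fit a normal distribution and generate new points.
import random
import statistics

data = [4.9, 5.1, 5.0, 4.8, 5.3, 5.2, 4.7, 5.0]

mu = statistics.mean(data)      # estimate the distribution's parameters from the data
sigma = statistics.stdev(data)

new_samples = [random.gauss(mu, sigma) for _ in range(3)]
print(new_samples)  # three "generated" values drawn from the fitted distribution
```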

[–] [email protected] 11 points 5 months ago (1 children)

You can do drugs. AI can't do drugs. Case closed.

[–] [email protected] 5 points 5 months ago

what if i cooled my cpu w bong hits tho

[–] [email protected] 10 points 5 months ago* (last edited 5 months ago)

We are all just biological machines that react to and accumulate data about our material conditions, but it isn't worth dwelling on this. The question of free will has been hashed out for millennia and the answer is that it doesn't really matter.

[–] [email protected] 10 points 5 months ago (1 children)

There's something besides emotion and thought, right? Something that's there even if you don't think, don't feel. Something that's watching. The "you" that just exists, just experiences. That's consciousness. What AI mimics is not consciousness itself but its functionality. So, chill.

Another interesting thing: Wernicke's Aphasia. The part of the brain that generates speech works independently from the area that gives speech coherence and meaning. YouTube link: https://www.youtube.com/watch?v=3oef68YabD0 Maybe a bit of an oversimplification (my own, human language model sucks lol), but just look it up if you want to know more.

[–] [email protected] 8 points 5 months ago

I'm not a big fan of being an emergent phenomenon either tbh

[–] [email protected] 7 points 5 months ago

AI can't shitpost so i win

[–] [email protected] 5 points 5 months ago

We're all just children of momentum. Don't think too hard about it.