doomer
What is Doomer? :(
It is a nebulous thing that may include but is not limited to Climate Change posts or Collapse posts.
Include sources when applicable for doomer posts, and consider checking out [email protected] once in a while.
i'm begging you
Literally plugged into a computer for 8 hours a day for work jackass
Literally plugged into a computer
Are you a Neuralink test subject or something?
You only think it looks similar because the computer can put it into words that you relate to. Humans figure stuff out with words and language; it's how our brain processes things. So when you see something explain its processes in natural-sounding language, it's going to trick you into thinking it's conscious the same way you are.
The brain came first, and people thought to model a program based on a very simplistic view of how it works. Your brain is infinitely more complex than any llm, don't let it convince you that it is on your level. You are not an llm; DeepSeek is a pale imitation of you. You are a human. For better and worse.
Well my fears are pretty much based on what other people think/believe (mainly authoritative people), because they’ll be the ones to decide when to use these things to replace humans.
The widespread adoption by the public also doesn’t make me hopeful either. I can certainly envision a future where Musk partners his Neuralink with OpenAI to create the most effective wage slave in history.
the most effective wage slave in history
Look at Tesla and SpaceX. Consider how each of those domains' mechanical systems is incredibly simple compared to the complexity of the brain and emergent phenomena like consciousness.
Its implementation would be so riddled with unmaintainable and unportable shit that there's no way they'd reach this ideal, let alone graduate above medicalized torture into productive medicalized torture.
I think, like a lot of current 'market disruptors', AI firms are generally running at a loss. I think a lot of them will go out of business within the next few years / the next big economic downturn. It sucks right now, but things will get better on this front, I think.
We're a long, long way away from anything like that. Those technologies are nascent and almost entirely marketing hype to inflate stonks prices.
At most, combining those two technologies would be useful for customer service type jobs (think AI-augmented customer appeasement scripts) but people require too much maintenance, which is expensive — capital would sooner get rid of them altogether.
Seems like researchers are getting uncomfortably good at mimicking consciousness
It's not even mimicry. It's not even close.
free will doesn't exist but my suffering does, riddle me that!
I prescribe logging off and hanging out with human beings.
You don't need to be fucking around with guessing machines just because they're Chinese now. They're fucking pointless for most people just go be a human.
Maybe it would help you to read Emily M. Bender's Thought Experiment in the National Library of Thailand. Basically, LLMs only deal with symbols, you deal with both symbols and their referents. You are not the same.
""""""""""AI"""""""""" is not in any conceivable way actually intelligent. "Generative" AI is a just a very elaborate and extremely, extremely bloated reverse compression algorithm.
Think of how an image can be stored as a jpeg, which "compresses" the image's information using various mathematical tricks, but in doing so causes the result to lose information. An approximation of the original image can be created from the compressed data, because the jpeg algorithm can extend from the stored data to make a 'guess' at the data lost in compression (actually a mathematical process which, given the same input, has the same results each time). If the image undergoes too much compression, it results in the classic blocky jpeg artifacts look, because too much information has been lost.
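The "throw away detail, then deterministically guess it back" idea above can be sketched in a few lines. This is my own toy illustration of lossy compression by quantization, not the real JPEG algorithm (which uses a discrete cosine transform and a lot more besides); the function names and the step size are made up for the example.

```python
# Toy lossy compression: store only which coarse "bucket" each value
# falls in, then reconstruct an approximation. The fine detail is
# gone for good, which is exactly why heavy compression looks blocky.

def compress(values, step):
    # Quantize: keep only the bucket index for each value.
    return [round(v / step) for v in values]

def decompress(buckets, step):
    # Deterministic "guess" at the original: same input, same output.
    return [b * step for b in buckets]

original = [3, 7, 8, 12, 200, 201]
packed = compress(original, step=10)
restored = decompress(packed, step=10)
print(packed)    # [0, 1, 1, 1, 20, 20]
print(restored)  # [0, 10, 10, 10, 200, 200]
```

Note how 7, 8, and 12 all come back as the same value: the information distinguishing them was destroyed at compression time, and the decompressor can only fill the gap with its best mathematical approximation.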
But in theory, if you examined a bunch of compressed data, you could design an algorithm that could create "new" images that blended different pieces of compressed data together. The algorithm is still purely deterministic, there is no thinking or creativity going on, it's just reassembling compressed data along strictly mathematically driven lines. If you then change how much value ("weight") it puts on different pieces of compressed data, and added a way to "prompt" it to fine-tune those values on the fly, you could design an algorithm that seemingly responds to your inputs in some sort of organic fashion but is actually just retrieving data in totally mathematically specified ways. Add in billions and billions of pieces of stolen text to render down into compressed slop, store and run the "weights" of the algorithm on some of the largest data centers ever built using billions of watts of power, run it again and again while tuning its weights by trial and error until it spits out something resembling human writing, and you have an LLM.
It seems to "generate" sentences by just spitting out a sequence of the mathematically most probable words and phrases in order, the way a jpeg algorithm assembles blocks of color. Nothing inside it is "creating" anything new, it's just serving up chopped-up pieces of data, which were scraped from the real world in the first place. And any gaps in actual data caused by the compression, it papers over with a glue of most likely approximations, which is how you get those classic totally absurd non-sequitur answers that are frequently posted, or how you end up with a family getting poisoned after relying on an AI-generated mushroom identification guide.
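The "spitting out the most probable next word" part can be demonstrated with a toy model. To be clear, this is my own sketch of the bare statistical idea (bigram counts over a handful of words), not how any real LLM is implemented; real models use learned weights over enormous corpora, not a lookup table.

```python
# Toy next-word predictor: count which word follows which in a tiny
# corpus, then "generate" text by always emitting the most frequent
# follower. Pure bookkeeping, no understanding anywhere.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Bigram counts: for each word, tally the words that followed it.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_probable_next(word):
    # Return the word that most often followed `word` in the corpus.
    return following[word].most_common(1)[0][0]

word = "the"
out = [word]
for _ in range(4):
    word = most_probable_next(word)
    out.append(word)
print(" ".join(out))  # the cat sat on the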
To be absolutely clear, this is not even in the category of objects that could exhibit sentience. A metal thermostat connected to a heater, that maintains the temperature of a room by expanding and shrinking enough to close or open an electrical circuit, is closer to an intelligent lifeform than this. Any single celled organism is infinitely closer to sentience than this. Just because it can drool out a slurry of text your brain perceives as human, remember: 1. It's decompressing an amalgamation of billions of real humans' written text and 2. Your brain perceives this -> :) <- as a face. It's really easy to trick people into thinking inanimate things are sentient, humans invented gods to explain the seasons and the weather after all.
The term "generative" is actually just a technical term to do with statistical models.
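Right: in statistics, a "generative" model is just one that models how the data is distributed, so you can sample new data points from it. A minimal sketch of that sense of the word (my own example, fitted parameter and function names made up):

```python
# "Generative" in the statistical sense: fit a distribution to
# observed data, then generate new samples from it. No intelligence
# involved, just a parameter and a random draw.
import random

data = ["heads", "heads", "tails", "heads"]
p_heads = data.count("heads") / len(data)  # fitted parameter: 0.75

def generate():
    # Sample a new observation from the fitted distribution.
    return "heads" if random.random() < p_heads else "tails"

samples = [generate() for _ in range(5)]
```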
You can do drugs. AI can't do drugs. Case closed.
what if i cooled my cpu w bong hits tho
We are all just biological machines that react to and accumulate data about our material conditions, but it isn't worth dwelling on this. The question of free will has been hashed out for millennia and the answer is that it doesn't really matter.
There's something besides emotion and thought, right? Something that's there even if you don't think, don't feel. Something that's watching. The "you" that just exists, just experiences. That's consciousness. What AI mimics is not consciousness itself but its functionality. So, chill.
Another interesting thing: Wernicke's Aphasia. The part of the brain that generates speech works independently from the area that gives speech coherence and meaning. YouTube link: https://www.youtube.com/watch?v=3oef68YabD0 Maybe a bit of an oversimplification (my own, human language model sucks lol), but just look it up if you want to know more.
I'm not a big fan of being an emergent phenomenon either tbh
AI can't shitpost so i win
We're all just children of momentum. Don't think too hard about it.