Perspectivist

joined 1 month ago
[–] [email protected] 6 points 1 week ago* (last edited 1 week ago) (8 children)

That's actually kind of clever. I wouldn't immediately know how to counter this map in a debate.

Edit:

ChatGPT: On this map, distance from Australia to South America is absolutely enormous — thousands of kilometers longer than it is on a globe. Yet in reality, there are direct flights from Santiago to Sydney that take about 12–14 hours. On this map, those flights would be absurdly long or impossible. Airlines can’t be faking that because passengers time them, track them on GPS, and even bring their own fuel calculations.
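For what it's worth, that flight-time check is easy to reproduce yourself. Here's a quick Python sketch using the haversine (great-circle) formula - the coordinates and cruise speed are ballpark figures I'm assuming, not exact values:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points on a sphere, in km."""
    r = 6371  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Rough coordinates (assumed for illustration)
santiago = (-33.45, -70.67)
sydney = (-33.87, 151.21)

d = haversine_km(*santiago, *sydney)
print(f"Santiago-Sydney great-circle distance: {d:,.0f} km")
# ~11,300 km; at a typical cruise speed of ~900 km/h that's ~12.5 hours,
# which matches the real 12-14 hour flights - consistent with a globe.
```

On the flat map in question, the same route would span far more than 11,300 km, so the observed flight times simply don't fit it.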

[–] [email protected] 13 points 1 week ago

There's dozens of us

[–] [email protected] 19 points 1 week ago (3 children)

Seems justified to me. There's no way to tell a fake gun from a real one at a distance, and if you point it at the police while they're telling you to drop it, that's just asking to get shot.

[–] [email protected] 0 points 1 week ago (2 children)

try to avoid answering that question because they get more money if they muddle the waters

I don't personally think this is quite fair either. Here's a quote from the first link:

According to Jang, OpenAI distinguishes between two concepts: Ontological consciousness, which asks whether a model is fundamentally conscious, and perceived awareness, which measures how human the system seems to users. The company considers the ontological question scientifically unanswerable, at least for now.

To me, as someone who has spent a lot of time thinking about consciousness (the fact of subjective experience), this seems like a perfectly reasonable take. Consciousness is itself entirely a subjective experience. There's zero evidence of it outside our own minds, and it can't be measured in any way. We can't even prove that other people are conscious. It's a relatively safe assumption to make, but there's no conclusive way to prove it; we simply assume they are because they seem like it.

In philosophy there's the concept of a "philosophical zombie": a creature that is outwardly indistinguishable from a human but completely lacks any internal experience. This is basically what the robots in the TV series Westworld were - or at least so they thought.

This is all to say that there is a point at which an AI system mimics a conscious being so convincingly that it's not entirely ridiculous to worry whether there actually is something it is like to be that system - and whether we're keeping a conscious being as a slave. If we could prove it isn't conscious there'd be no issue, but we can't. People used to justify the mistreatment of animals by claiming they're not conscious either, and very few people think that anymore. I'm not saying an LLM might be conscious - I'm relatively certain they're not - but they're also the most conscious-seeming thing we've ever created, and they'll only keep getting better. I'd say there's a point past which these systems act conscious so convincingly that one would basically have to be a psychopath to mistreat them.

[–] [email protected] 17 points 1 week ago

The value of my portfolio dips too, but I don’t actually lose anything unless I sell. I just hold and wait for prices to recover - as they always have so far. In fact, when the market drops I buy even more, because the same money gets me more shares. People don’t lose their savings because of a crash; they lose them because they panic and sell for less than they paid.
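A toy example of that mechanic - the prices and amounts here are invented purely for illustration:

```python
# Invest the same amount each month through a dip and recovery.
monthly_budget = 500.0
prices = [100, 80, 50, 80, 100]  # share price over five months

shares = sum(monthly_budget / p for p in prices)  # cheaper price = more shares
invested = monthly_budget * len(prices)
final_value = shares * prices[-1]

print(f"Invested:    ${invested:,.2f}")    # $2,500.00
print(f"Shares:      {shares:.2f}")        # 32.50
print(f"Final value: ${final_value:,.2f}") # $3,250.00
```

Even though the price merely returned to where it started, buying through the dip leaves the position worth more than was paid in - the loss only materializes if you sell at the bottom.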

[–] [email protected] 0 points 1 week ago (4 children)

In this context, "AI bro" clearly refers to the creators, not the end users - which is what your link is about. Users aren’t the ones who “taught it to speak like a corporate middle manager.” That was the AI company leaders and engineers. When I asked who “they” are, I was asking for names. Someone tried to dodge by saying “AI bros and their fans,” but that phrase itself distinguishes between two groups. I wasn’t asking about the fans.

Let me rephrase: name a person responsible for training an AI to sound like a corporate middle manager who also believes their LLM is conscious.

[–] [email protected] -3 points 1 week ago (6 children)

"They taught AI to talk like a middle manager.." isn't refering to the people at /r/MyBoyfirendIsAI. Those are users, not the creators of it.

[–] [email protected] -5 points 1 week ago (9 children)

Show me one "AI bro" claiming LLMs are conscious.

[–] [email protected] 5 points 1 week ago

No. Loss of vision alone would be enough.

[–] [email protected] 13 points 1 week ago

You're falsely assuming that everyone has to deal with the same emotional control issues as you do.

[–] [email protected] 1 points 1 week ago (12 children)

Who exactly are these "they" who think LLMs are conscious?

[–] [email protected] 14 points 1 week ago

Luckily I get my water from the tap.
