Perspectivist

joined 1 month ago
[–] [email protected] 3 points 2 hours ago* (last edited 2 hours ago)

As a self-employed general contractor-handyman-plumber, I feel pretty secure about the future of my work prospects. If anything, an AI that could reliably deliver correct information would be immensely useful in my line of work, given how I run into technical questions on a daily basis.

[–] [email protected] 0 points 3 hours ago

No, it doesn’t make it conscious - but you also can’t prove that it isn’t, and that’s the problem. The only evidence of consciousness outside our own minds is when something appears conscious. Once an LLM reaches the point where you genuinely can’t tell whether you’re talking to a real person or not, then insisting “it’s not actually conscious” becomes just a story you’re telling yourself, despite all the evidence pointing the other way.

I’d argue that at that point, you should treat it like another person - even if only as a precaution. And I’d go even further: not treating something that seems that "alive" with even basic decency reflects quite poorly on the person doing it and raises questions about their worldview.

[–] [email protected] 4 points 3 hours ago (1 children)

The article states that it was caught on surveillance camera.

[–] [email protected] 3 points 5 hours ago* (last edited 5 hours ago) (6 children)

That's actually kind of clever. I wouldn't immediately know how to counter this map in a debate.

Edit:

ChatGPT: On this map, distance from Australia to South America is absolutely enormous — thousands of kilometers longer than it is on a globe. Yet in reality, there are direct flights from Santiago to Sydney that take about 12–14 hours. On this map, those flights would be absurdly long or impossible. Airlines can’t be faking that because passengers time them, track them on GPS, and even bring their own fuel calculations.
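
For what it's worth, that flight-time argument is easy to sanity-check yourself. Here's a rough Python sketch using the haversine formula - the airport coordinates and the 900 km/h cruise speed are just ballpark figures I plugged in, not exact values:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points on a sphere (Earth radius ~6371 km)."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Approximate coordinates: Santiago (SCL) and Sydney (SYD) airports
scl = (-33.39, -70.79)
syd = (-33.95, 151.18)

dist = haversine_km(*scl, *syd)
cruise_speed = 900  # km/h, rough jet cruise speed
print(f"Great-circle distance: {dist:.0f} km")
print(f"Flight time at {cruise_speed} km/h: {dist / cruise_speed:.1f} h")
```

It comes out at roughly 11,000+ km and ~12-13 hours, which lines up with the real-world flight times - on that flat map the gap would be several times longer.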

[–] [email protected] 8 points 6 hours ago

There's dozens of us

[–] [email protected] 16 points 7 hours ago (3 children)

Seems justified to me. There's no way to tell a fake gun from a real one at a distance, and if you point it at the police when they're telling you to drop it, that's just asking to get shot.

[–] [email protected] 1 points 7 hours ago (2 children)

> try to avoid answering that question because they get more money if they muddle the waters

I don't personally think this is quite fair either. Here's a quote from the first link:

> According to Jang, OpenAI distinguishes between two concepts: Ontological consciousness, which asks whether a model is fundamentally conscious, and perceived awareness, which measures how human the system seems to users. The company considers the ontological question scientifically unanswerable, at least for now.

To me, as someone who has spent a lot of time thinking about consciousness (the fact of subjective experience), this seems like a perfectly reasonable take. Consciousness itself is entirely a subjective experience. There's zero evidence of it outside of our own minds, and it can't be measured in any way. We can't even prove that other people are conscious. It's a relatively safe assumption to make, but there's no conclusive way to prove it. We simply assume they are because they seem like it.

In philosophy there's the concept of a "philosophical zombie": a creature that is outwardly indistinguishable from a human but completely lacks any internal experience. This is basically what the robots in the TV series "Westworld" were - or at least so they thought.

This is all to say that there's a point at which an AI system mimics a conscious being so convincingly that it's not entirely ridiculous to worry whether it actually is like something to be that system - and whether we're in fact keeping a conscious being as a slave. If we had a way to prove that it isn't conscious there'd be no issue, but we can't. People used to justify the mistreatment of animals by claiming they're not conscious either, but very few people think that anymore. I'm not saying an LLM might be conscious - I'm relatively certain they're not - but they're also the most conscious-seeming thing we've ever created, and they'll just keep getting better. I'd say there's a point past which these systems act conscious so convincingly that one would basically need to be a psychopath to mistreat them.

[–] [email protected] 11 points 8 hours ago

The value of my portfolio dips too, but I don’t actually lose anything unless I sell. I just hold and wait for prices to recover - as they always have so far. In fact, when the market drops I buy even more, because the same money gets me more shares. People don’t lose their savings because of a crash; they lose them because they panic and sell for less than they paid.

[–] [email protected] 2 points 8 hours ago (4 children)

In this context, "AI bro" clearly refers to the creators, not the end users - which is what your link is about. Users aren’t the ones who “taught it to speak like a corporate middle manager.” That was the AI company leaders and engineers. When I asked who “they” are, I was asking for names. Someone tried to dodge by saying “AI bros and their fans,” but that phrase itself distinguishes between two groups. I wasn’t asking about the fans.

Let me rephrase: name a person responsible for training an AI to sound like a corporate middle manager who also believes their LLM is conscious.

[–] [email protected] 0 points 9 hours ago (6 children)

"They taught AI to talk like a middle manager.." isn't refering to the people at /r/MyBoyfirendIsAI. Those are users, not the creators of it.

[–] [email protected] -2 points 10 hours ago (8 children)

Show me one "AI bro" claiming LLMs are consciouss.

[–] [email protected] 4 points 10 hours ago

No. Loss of vision alone would be enough.

 
387
submitted 2 weeks ago* (last edited 2 weeks ago) by [email protected] to c/[email protected]
 

Now how am I supposed to get this to my desk without either spilling it all over or burning my lips trying to slurp some off right here? I've been drinking coffee for at least 25 years and I still do this to myself at least 3 times a week.

146
submitted 2 weeks ago* (last edited 2 weeks ago) by [email protected] to c/[email protected]
 

A kludge or kluge is a workaround or makeshift solution that is clumsy, inelegant, inefficient, difficult to extend, and hard to maintain. Its only benefit is that it rapidly solves an important problem using available resources.

 

I’m having a really odd issue with my e‑fatbike (Bafang M400 mid‑drive). When I’m on the two largest cassette cogs (lowest gears), the motor briefly cuts power ~~once per crank revolution~~ when the wheel magnet passes the speed sensor. It’s a clean on‑off “tick,” almost like the system thinks I stopped pedaling for a split second.

I first noticed this after switching from a 38T front chainring to a 30T. At that point it only happened on the largest cog, never on the others.

I figured it might be caused by the undersized chainring, so I put the original back in and swapped the original 1x10 drivetrain for a 1x11 and went from a 36T largest cog to a 51T. But no - the issue still persists. Now it happens on the largest two cogs. Whether I’m soft‑pedaling or pedaling hard against the brakes doesn’t seem to make any difference. It still “ticks” once per revolution.

I’m out of ideas at this point. Torque sensor, maybe? I have another identical bike with a 1x12 drivetrain and an 11–50T cassette, and it doesn’t do this, so I doubt it’s a compatibility issue. Must be something sensor‑related? With the assist turned off everything runs perfectly, so it’s not mechanical.

EDIT: Upon further inspection, the moment the power cuts out seems to sync perfectly with the wheel speed magnet passing the sensor on the chainstay, so I'm like 95% sure a faulty wheel speed sensor is the issue here. I've ordered a spare part, so I can't say for certain yet, but unless there's a second update to this, that solved it.

EDIT2: I figured it out. It wasn't the wheel sensor itself but something related to it: I added a second spoke magnet for that sensor on the opposite side of the wheel and the problem went away. Apparently at low speeds the time between pulses got too long and the power to the motor was cut. I also used the Eggrider app to tweak the motor settings so it knows there are two magnets instead of one: under "Bafang basic settings" I changed "Speed meter signal" from 1 to 2.
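
For anyone else hitting this, here's a rough back-of-the-envelope sketch of why the second magnet helps. The wheel circumference and the cutoff time are just my assumptions (I don't know the actual values Bafang's firmware uses):

```python
# Rough sketch: pulse interval seen by the speed sensor at low speed.
# Assumed values - a fat bike wheel of ~2.3 m circumference and a
# hypothetical ~2 s "no pulse = not moving" cutoff; real firmware values unknown.
wheel_circumference_m = 2.3
cutoff_s = 2.0

def pulse_interval_s(speed_kmh: float, magnets: int) -> float:
    """Seconds between magnet pulses at a given speed."""
    speed_ms = speed_kmh / 3.6
    seconds_per_rev = wheel_circumference_m / speed_ms
    return seconds_per_rev / magnets

for speed in (4, 6, 8):
    one = pulse_interval_s(speed, 1)
    two = pulse_interval_s(speed, 2)
    print(f"{speed} km/h: 1 magnet = {one:.2f} s/pulse, "
          f"2 magnets = {two:.2f} s/pulse (cutoff {cutoff_s} s)")
```

With one magnet the interval at crawling speeds in the lowest gears creeps past the cutoff, so the motor assumes the wheel has stopped; with two magnets it stays well under it.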

 

That would be useful information to have with the next elections in mind.

I'm also puzzled by how little Yle, at least, has reported on this. It almost feels like deliberate secrecy.

106
submitted 2 weeks ago* (last edited 2 weeks ago) by [email protected] to c/[email protected]
 

I figured I’d give this chisel knife a try, since it’s not like I use this particular knife for its intended purpose anyway but rather as a general purpose sharpish piece of steel. I’m already carrying a folding knife and a Leatherman, so I don’t need a third knife with a pointy tip.

 

I see a huge amount of confusion around terminology in discussions about Artificial Intelligence, so here’s my quick attempt to clear some of it up.

Artificial Intelligence is the broadest possible category. It includes everything from the chess opponent on the Atari to hypothetical superintelligent systems piloting spaceships in sci-fi. Both are forms of artificial intelligence - but drastically different.

That chess engine is an example of narrow AI: it may even be superhuman at chess, but it can’t do anything else. In contrast, the sci-fi systems like HAL 9000, JARVIS, Ava, Mother, Samantha, Skynet, or GERTY are imagined as generally intelligent - that is, capable of performing a wide range of cognitive tasks across domains. This is called Artificial General Intelligence (AGI).

One common misconception I keep running into is the claim that Large Language Models (LLMs) like ChatGPT are “not AI” or “not intelligent.” That’s simply false. The issue here is mostly about mismatched expectations. LLMs are not generally intelligent - but they are a form of narrow AI. They’re trained to do one thing very well: generate natural-sounding text based on patterns in language. And they do that with remarkable fluency.

What they’re not designed to do is give factual answers. That it often seems like they do is a side effect - a reflection of how much factual information was present in their training data. But fundamentally, they’re not knowledge databases - they’re statistical pattern machines trained to continue a given prompt with plausible text.
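
To make that last point concrete, here's a toy word-level Markov chain in Python. It's obviously nothing like a real transformer-based LLM internally, and the training text and prompt are made up for the example, but it shows the basic idea of continuing a prompt with statistically plausible text rather than looking anything up:

```python
import random
from collections import defaultdict

# Toy "language model": count which word tends to follow which, then continue
# a prompt by sampling from those counts. Real LLMs use deep neural networks
# over subword tokens, but the job is the same: predict a plausible next token.
training_text = (
    "the cat sat on the mat and the cat slept on the mat "
    "the dog sat on the rug and the dog slept on the rug"
)

counts = defaultdict(list)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    counts[current].append(nxt)

def continue_prompt(prompt: str, length: int = 8) -> str:
    out = prompt.split()
    for _ in range(length):
        candidates = counts.get(out[-1])
        if not candidates:
            break
        out.append(random.choice(candidates))
    return " ".join(out)

print(continue_prompt("the cat"))
```

The output is fluent-looking but has no notion of truth - it only reflects the patterns in whatever text it was fed, which is the same reason an LLM's "facts" are a by-product of its training data rather than a lookup.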

 

I was delivering an order for a customer and saw some guy messing with the bikes on a bike rack using a screwdriver. Then another guy showed up, so the first one stopped, slipped the screwdriver into his pocket, and started smoking a cigarette like nothing was going on. I was debating whether to report it or not - but then I noticed his jacket said "Russia" in big letters on the back, and that settled it for me.

That was only the second time in my life I’ve called the emergency number.
