Perspectivist

joined 1 month ago
[–] [email protected] 1 points 3 hours ago (1 children)

Is it not debt when you owe money to the bank?

[–] [email protected] 2 points 3 hours ago (1 children)

stainless hose clamp

That's exactly what I was about to do

[–] [email protected] 2 points 3 hours ago

I didn’t actually mean to remove that piece. The bolts and nuts had rusted away, and it fell off while I was hammering the old exhaust loose. I had to cut the remaining bolt shafts from the DPF flange and drill new holes for through-bolts.

[–] [email protected] 4 points 4 hours ago* (last edited 4 hours ago) (2 children)

You usually don’t lose anything by at least trying to fix something that’s already broken. At worst, it stays broken. Even when I don’t intend to repair something, I’ll usually disassemble it anyway just to see what it looks like inside - and there’s often a part or two worth scavenging for the “DIY box” in case it comes in handy later. I also don’t worry too much about how pretty the fix is, as long as it works. Besides, there are instructions for almost everything online. Just go for it.

 

Turns out the rattling was coming from the heat shield of the DPF, so I spent a day replacing a part that wasn't even the source of the issue. Well, at least I've got a new exhaust now, and I did some underbody rust prevention while I was at it.

[–] [email protected] 19 points 6 hours ago (3 children)

You can invest in index funds online. I passively earn over a month's salary that way every year.
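As a rough illustration, the portfolio size that claim implies is easy to estimate. The 7% average annual return and the salary figure below are assumptions for the sake of the example, not anything from the comment:

```python
# Rough estimate: how large a portfolio must be for its average
# annual return to match one month's salary. Both numbers below
# are illustrative assumptions.
annual_return = 0.07     # assumed long-run average index return
monthly_salary = 3000    # example figure, any currency

# Need: portfolio * annual_return >= monthly_salary
required_portfolio = monthly_salary / annual_return
print(round(required_portfolio))  # 42857
```

So at those assumed numbers, earning a month's salary per year passively takes a portfolio of roughly 14x one monthly salary.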

[–] [email protected] 4 points 13 hours ago

The issue with giving up territory beyond the current front lines is that it would leave Ukraine's fortified positions behind enemy lines, and if Russia continues the war, there's nothing stopping them from advancing. That's also why Russia doesn't want a ceasefire: Ukraine would just use that time to further build up its current defensive positions, making it even harder for Russia to advance in the future.

[–] [email protected] 5 points 1 day ago* (last edited 1 day ago)

As a self-employed general contractor-handyman-plumber, I feel pretty secure about the future of my work prospects. If anything, an AI that could reliably deliver correct information would be immensely useful in my line of work, given how I run into technical questions on a daily basis.

[–] [email protected] -1 points 1 day ago

No, it doesn’t make it conscious - but you also can’t prove that it isn’t, and that’s the problem. The only evidence of consciousness outside our own minds is when something appears conscious. Once an LLM reaches the point where you genuinely can’t tell whether you’re talking to a real person or not, then insisting “it’s not actually conscious” becomes just a story you’re telling yourself, despite all the evidence pointing the other way.

I’d argue that at that point, you should treat it like another person - even if only as a precaution. And I’d go further: not treating something that seems that "alive" with even basic decency reflects quite poorly on a person and raises questions about their worldview.

[–] [email protected] 5 points 1 day ago (1 children)

The article states that it was caught on surveillance camera.

[–] [email protected] 7 points 1 day ago* (last edited 1 day ago) (8 children)

That's actually kind of clever. I wouldn't immediately know how to counter this map in a debate.

Edit:

ChatGPT: On this map, the distance from Australia to South America is absolutely enormous - thousands of kilometers longer than it is on a globe. Yet in reality, there are direct flights from Santiago to Sydney that take about 12–14 hours. On this map, those flights would be absurdly long or impossible. Airlines can't be faking that, because passengers time the flights, track them on GPS, and even do their own fuel calculations.
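For what it's worth, the great-circle distance behind that argument is easy to check yourself. The city coordinates below are approximate:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# Approximate coordinates: Santiago (-33.45, -70.67), Sydney (-33.87, 151.21)
d = haversine_km(-33.45, -70.67, -33.87, 151.21)
print(f"{d:.0f} km")  # roughly 11,300 km
print(f"{d / 900:.1f} h at ~900 km/h cruise")  # lands in the 12-14 h range
```

On the globe that comes out around 11,000 km, which matches the real 12–14 hour flight times; on the "flat" map the same route would be far longer.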

[–] [email protected] 11 points 1 day ago

There's dozens of us

[–] [email protected] 19 points 1 day ago (3 children)

Seems justified to me. There's no way to tell a fake gun from a real one at a distance, and if you point it at the police while they're telling you to drop it, that's just asking to get shot.

 
387
submitted 2 weeks ago* (last edited 2 weeks ago) by [email protected] to c/[email protected]
 

Now how am I supposed to get this to my desk without either spilling it all over or burning my lips trying to slurp it down right here? I've been drinking coffee for at least 25 years, and I still do this to myself at least three times a week.

146
submitted 2 weeks ago* (last edited 2 weeks ago) by [email protected] to c/[email protected]
 

A kludge or kluge is a workaround or makeshift solution that is clumsy, inelegant, inefficient, difficult to extend, and hard to maintain. Its only benefit is that it rapidly solves an important problem using available resources.

 

I’m having a really odd issue with my e‑fatbike (Bafang M400 mid‑drive). When I’m on the two largest cassette cogs (lowest gears), the motor briefly cuts power ~~once per crank revolution~~ when the wheel magnet passes the speed sensor. It’s a clean on‑off “tick,” almost like the system thinks I stopped pedaling for a split second.

I first noticed this after switching from a 38T front chainring to a 30T. At that point it only happened on the largest cog, never on the others.

I figured it might be caused by the undersized chainring, so I put the original back in and swapped the original 1x10 drivetrain for a 1x11 and went from a 36T largest cog to a 51T. But no - the issue still persists. Now it happens on the largest two cogs. Whether I’m soft‑pedaling or pedaling hard against the brakes doesn’t seem to make any difference. It still “ticks” once per revolution.

I’m out of ideas at this point. Torque sensor, maybe? I have another identical bike with a 1x12 drivetrain and an 11–50T cassette, and it doesn’t do this, so I doubt it’s a compatibility issue. Must be something sensor‑related? With the assist turned off everything runs perfectly, so it’s not mechanical.

EDIT: Upon further inspection, the moment the power cuts out seems to sync perfectly with the wheel speed magnet passing the sensor on the chainstay, so I'm like 95% sure a faulty wheel speed sensor is the issue here. I've ordered a spare part, so I can't confirm yet - but unless there's a second update to this, assume that solved it.

EDIT2: I figured it out. It wasn't the wheel sensor itself but something related to it: I added a second spoke magnet on the opposite side of the wheel, and the problem went away. Apparently, at low speeds the time between pulses got too long and power to the motor was cut. I also used the Eggrider app to tweak the motor settings so the controller knows there are two magnets instead of one: under "Bafang basic settings", I changed "Speed meter signal" from 1 to 2.
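The numbers below are assumptions (a fatbike tire circumference of ~2.3 m and a hypothetical ~1.5 s controller timeout - the real Bafang timeout isn't documented here), but they sketch why the second magnet helps: it halves the time between sensor pulses at any given speed.

```python
# Sketch of the pulse-timing issue. All numbers are assumptions:
# a ~2.3 m fatbike tire circumference and a hypothetical ~1.5 s
# window before the controller decides the wheel has stopped.
circumference_m = 2.3
timeout_s = 1.5

def pulse_interval_s(speed_kmh, magnets):
    """Seconds between speed-sensor pulses at a given speed."""
    speed_ms = speed_kmh / 3.6
    return circumference_m / speed_ms / magnets

# Grinding up a hill in the lowest gear at, say, 5 km/h:
one = pulse_interval_s(5, magnets=1)  # ~1.66 s -> exceeds the timeout
two = pulse_interval_s(5, magnets=2)  # ~0.83 s -> stays well under it
print(one > timeout_s, two < timeout_s)
```

With one magnet, the gap between pulses at low climbing speeds can exceed the cutoff window, so the controller thinks the bike has stopped; the second magnet keeps the pulses coming fast enough.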

 

That would be useful information to have with the next elections in mind.

I'm also puzzled by how little Yle, at least, has reported on this. It almost feels like deliberate secrecy.

106
submitted 2 weeks ago* (last edited 2 weeks ago) by [email protected] to c/[email protected]
 

I figured I’d give this chisel knife a try, since it’s not like I use this particular knife for its intended purpose anyway but rather as a general purpose sharpish piece of steel. I’m already carrying a folding knife and a Leatherman, so I don’t need a third knife with a pointy tip.

 

I see a huge amount of confusion around terminology in discussions about Artificial Intelligence, so here’s my quick attempt to clear some of it up.

Artificial Intelligence is the broadest possible category. It includes everything from the chess opponent on the Atari to hypothetical superintelligent systems piloting spaceships in sci-fi. Both are forms of artificial intelligence - but drastically different.

That chess engine is an example of narrow AI: it may even be superhuman at chess, but it can’t do anything else. In contrast, the sci-fi systems like HAL 9000, JARVIS, Ava, Mother, Samantha, Skynet, or GERTY are imagined as generally intelligent - that is, capable of performing a wide range of cognitive tasks across domains. This is called Artificial General Intelligence (AGI).

One common misconception I keep running into is the claim that Large Language Models (LLMs) like ChatGPT are “not AI” or “not intelligent.” That’s simply false. The issue here is mostly about mismatched expectations. LLMs are not generally intelligent - but they are a form of narrow AI. They’re trained to do one thing very well: generate natural-sounding text based on patterns in language. And they do that with remarkable fluency.

What they’re not designed to do is give factual answers. That it often seems like they do is a side effect - a reflection of how much factual information was present in their training data. But fundamentally, they’re not knowledge databases - they’re statistical pattern machines trained to continue a given prompt with plausible text.
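A toy illustration of "continue a prompt with plausible text" - using bigram word counts instead of a neural network, which is a drastic simplification, but the generate-one-token-at-a-time loop is the same basic idea:

```python
from collections import defaultdict

# Toy "language model": count which word follows which in a tiny
# corpus, then greedily continue a prompt with the most frequent
# follower. Real LLMs use neural networks over subword tokens,
# but they likewise only predict a plausible next token.
corpus = "the cat sat on the mat and the cat ate the fish".split()

follows = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def continue_prompt(word, n=4):
    out = [word]
    for _ in range(n):
        candidates = follows[out[-1]]
        if not candidates:
            break
        # pick the most common continuation (ties: first seen)
        out.append(max(candidates, key=candidates.get))
    return " ".join(out)

print(continue_prompt("the"))  # "the cat sat on the"
```

Note that nothing here "knows" anything about cats or mats; the output merely reflects the statistics of the training text - which is exactly why fluent output and factual accuracy are separate properties.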

 

I was delivering an order for a customer and saw some guy messing with the bikes on a bike rack using a screwdriver. Then another guy showed up, so the first one stopped, slipped the screwdriver into his pocket, and started smoking a cigarette like nothing was going on. I was debating whether to report it or not - but then I noticed his jacket said "Russia" in big letters on the back, and that settled it for me.

That was only the second time in my life I’ve called the emergency number.
