Perspectivist

joined 1 month ago
[–] [email protected] 8 points 1 day ago* (last edited 1 day ago) (2 children)

But the title had "AI" in it.

Also to answer your question: https://lemvotes.org/post/programming.dev/post/36238264

[–] [email protected] 3 points 2 days ago

This is unrealistic. How is it moving forward when the rear wheel is in the air??

[–] [email protected] 2 points 2 days ago

I try to avoid taking jobs on Mondays and Fridays as much as possible, so 3–4-day weekends are a somewhat regular thing for me. Benefits of being self-employed.

[–] [email protected] 3 points 2 days ago

Every single person on earth. There’s nothing more inherently human than caring about what others think.

I get what you mean though - I also try not to let other people’s judgment affect what I do, but I’d be lying to myself if I claimed it doesn’t, and especially if I claimed I don’t care.

[–] [email protected] 12 points 2 days ago (1 children)

LLMs, as the name suggests, are language models - not knowledge machines. Answering questions correctly isn’t what they’re designed to do. The fact that they get anything right isn’t because they “know” things, but because they’ve been trained on a lot of correct information. That’s why they come off as more intelligent than they really are. At the end of the day, they were built to generate natural-sounding language - and that’s all. Just because something can speak doesn’t mean it knows what it’s talking about.

[–] [email protected] 4 points 3 days ago (1 children)

This works both ways: you can also always claim it's a deepfake even if it isn't.

If the government is after you they don't need excuses so I doubt gen-AI changes anything in that regard.

[–] [email protected] 4 points 3 days ago (1 children)

Well clearly, if everyone was genuinely fine with it then no harm done – but the fact there was a “men’s shed” group to begin with shows there was a need for one, and letting women in kind of defeats the purpose.

There’s no doubt the group dynamic changes when a woman is present. Virtually every man has experienced this firsthand. And while not a perfect equivalent, if a women’s gym started accepting men, I doubt anyone would call that progress. It was a women-only gym for a reason.

[–] [email protected] 7 points 3 days ago* (last edited 3 days ago) (2 children)

The fact that this is surprising to you only further highlights the need for men's spaces in the first place. I bet you didn't know that men cry too. And of course it's a blahaj user shaming them for it. Fuck men, right?

[–] [email protected] 3 points 3 days ago (7 children)

How to stop men from talking about their issues to each other.

[–] [email protected] 6 points 3 days ago

The guys in front of and behind you are thinking the exact same thing.

[–] [email protected] 10 points 3 days ago* (last edited 3 days ago)
  • I prefer to operate the clutch and shifting on my truck myself.
  • I'd rather do manual labor than any work that involves sitting at a computer.
  • I'm chronically online but without a smartphone addiction.
  • I prefer long-form media.
[–] [email protected] 8 points 3 days ago

Firefox

The fewer apps I have on my phone the better, and I mostly browse Lemmy on desktop anyway.

 

I knew the technique, but I had never tried this before, so this is the first piece I've ever made. A surprisingly meditative activity.

I think I'll try nettles next. Should produce an actually usable product.

 

Turns out the rattling was coming from the heat shield of the DPF, so I spent a day replacing a part which wasn't even the source of the issue. Well, at least I've got a new exhaust now and did some underbody rust prevention while I was at it.

 
388
submitted 3 weeks ago* (last edited 3 weeks ago) by [email protected] to c/[email protected]
 

Now how am I supposed to get this to my desk without either spilling it all over or burning my lips trying to slurp it down right here? I've been drinking coffee for at least 25 years and I still do this to myself at least 3 times a week.

146
submitted 3 weeks ago* (last edited 3 weeks ago) by [email protected] to c/[email protected]
 

A kludge or kluge is a workaround or makeshift solution that is clumsy, inelegant, inefficient, difficult to extend, and hard to maintain. Its only benefit is that it rapidly solves an important problem using available resources.
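A classic software kludge, sketched below in Python. Everything here is made up for illustration (the function names, the config file, the race condition being dodged); the point is the shape of the workaround: it solves the immediate problem with what's at hand, and it will haunt whoever maintains it.

```python
# Illustrative kludge: instead of properly synchronizing with the service
# that reads this config file, just write the file and sleep for a bit,
# hoping the reader catches up. Quick, works today, awful to maintain.
import time


def write_config(path: str, text: str) -> None:
    """Write the config file the (hypothetical) service reads."""
    with open(path, "w") as f:
        f.write(text)


def restart_service_kludge(path: str) -> str:
    write_config(path, "mode=fast\n")
    time.sleep(0.1)  # the kludge: a "works on my machine" delay
    return "restarted"
```

The honest fix would be an explicit handshake or file lock, but the kludge ships first.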

 

I’m having a really odd issue with my e‑fatbike (Bafang M400 mid‑drive). When I’m on the two largest cassette cogs (lowest gears), the motor briefly cuts power ~~once per crank revolution~~ when the wheel magnet passes the speed sensor. It’s a clean on‑off “tick,” almost like the system thinks I stopped pedaling for a split second.

I first noticed this after switching from a 38T front chainring to a 30T. At that point it only happened on the largest cog, never on the others.

I figured it might be caused by the undersized chainring, so I put the original back in and swapped the original 1x10 drivetrain for a 1x11 and went from a 36T largest cog to a 51T. But no - the issue still persists. Now it happens on the largest two cogs. Whether I’m soft‑pedaling or pedaling hard against the brakes doesn’t seem to make any difference. It still “ticks” once per revolution.

I’m out of ideas at this point. Torque sensor, maybe? I have another identical bike with a 1x12 drivetrain and an 11–50T cassette, and it doesn’t do this, so I doubt it’s a compatibility issue. Must be something sensor‑related? With the assist turned off everything runs perfectly, so it’s not mechanical.

EDIT: Upon further inspection, the moment the power cuts out seems to sync perfectly with the wheel speed magnet passing the sensor on the chainstay, so I'm like 95% sure a faulty wheel speed sensor is the issue here. I've ordered a spare part, so I can't confirm yet, but unless there's a second update to this, that solved the issue.

EDIT2: I figured it out. It wasn't the wheel sensor itself but related to it: I added a second spoke magnet for that sensor on the opposite side of the wheel and the problem went away. Apparently at low speeds the time between pulses got too long and the power to the motor was cut. In addition to this, I used my Eggrider app to tweak the motor settings so the controller knows there are two magnets and not just one: under "Bafang basic settings" I changed "Speed meter signal" from 1 to 2.
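The two-magnet fix makes sense if you assume the controller treats "too long since the last speed pulse" as "wheel stopped" and cuts power. Here's a rough sketch of that logic; the circumference and timeout values are my guesses for illustration, not Bafang's actual firmware numbers:

```python
# Sketch of the assumed controller behavior: if the interval between
# speed-sensor pulses exceeds some timeout, the motor cuts power.
# A second spoke magnet halves the interval at any given speed.

WHEEL_CIRCUMFERENCE_M = 2.2   # rough guess for a 26" fat tire
PULSE_TIMEOUT_S = 1.5         # assumed cutoff threshold, not a known value


def pulse_interval_s(speed_kmh: float, magnets: int) -> float:
    """Seconds between speed-sensor pulses at a given road speed."""
    speed_ms = speed_kmh / 3.6
    wheel_revs_per_s = speed_ms / WHEEL_CIRCUMFERENCE_M
    return 1.0 / (wheel_revs_per_s * magnets)


def power_cuts_out(speed_kmh: float, magnets: int) -> bool:
    return pulse_interval_s(speed_kmh, magnets) > PULSE_TIMEOUT_S
```

With these made-up numbers, grinding up a hill at 5 km/h on one magnet gives a pulse interval of about 1.58 s, past the timeout, while two magnets bring it down to about 0.79 s. That would also explain why it only showed up in the lowest gears: that's where the wheel turns slowest.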

 

That would be useful information to have ahead of the next election.

I'm also puzzled by how little Yle, at least, has reported on this. It almost feels like deliberate secrecy.

106
submitted 3 weeks ago* (last edited 3 weeks ago) by [email protected] to c/[email protected]
 

I figured I’d give this chisel knife a try, since it’s not like I use this particular knife for its intended purpose anyway but rather as a general purpose sharpish piece of steel. I’m already carrying a folding knife and a Leatherman, so I don’t need a third knife with a pointy tip.

 

I see a huge amount of confusion around terminology in discussions about Artificial Intelligence, so here’s my quick attempt to clear some of it up.

Artificial Intelligence is the broadest possible category. It includes everything from the chess opponent on the Atari to hypothetical superintelligent systems piloting spaceships in sci-fi. Both are forms of artificial intelligence - but drastically different.

That chess engine is an example of narrow AI: it may even be superhuman at chess, but it can’t do anything else. In contrast, the sci-fi systems like HAL 9000, JARVIS, Ava, Mother, Samantha, Skynet, or GERTY are imagined as generally intelligent - that is, capable of performing a wide range of cognitive tasks across domains. This is called Artificial General Intelligence (AGI).

One common misconception I keep running into is the claim that Large Language Models (LLMs) like ChatGPT are “not AI” or “not intelligent.” That’s simply false. The issue here is mostly about mismatched expectations. LLMs are not generally intelligent - but they are a form of narrow AI. They’re trained to do one thing very well: generate natural-sounding text based on patterns in language. And they do that with remarkable fluency.

What they’re not designed to do is give factual answers. That it often seems like they do is a side effect - a reflection of how much factual information was present in their training data. But fundamentally, they’re not knowledge databases - they’re statistical pattern machines trained to continue a given prompt with plausible text.
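The "statistical pattern machine" idea can be shown with a toy model. This is of course nothing like a real LLM (which uses a neural network over tokens, not word counts), but a bigram model makes the principle visible: it continues a prompt with whatever word most often followed the previous one in its training text. There is no knowledge store anywhere, only counted patterns.

```python
# Toy bigram "language model": counts which word follows which in a tiny
# training text, then continues a prompt greedily. Fluent-looking output,
# zero knowledge - the same principle as an LLM, at cartoon scale.
from collections import Counter, defaultdict

training_text = (
    "the cat sat on the mat the cat ate the fish "
    "the dog sat on the rug the dog ate the bone"
)

follows: defaultdict = defaultdict(Counter)
words = training_text.split()
for a, b in zip(words, words[1:]):
    follows[a][b] += 1


def continue_prompt(prompt: str, n_words: int = 4) -> str:
    """Append the most common continuation word, n_words times."""
    out = prompt.split()
    for _ in range(n_words):
        last = out[-1]
        if last not in follows:
            break  # never seen this word; nothing plausible to say
        out.append(follows[last].most_common(1)[0][0])
    return " ".join(out)


print(continue_prompt("the cat"))
```

The output is grammatical-sounding simply because the training text was; nothing in the model "knows" what a cat is. Scale the same idea up enormously and you get text that sounds authoritative for the same reason.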

 

I was delivering an order for a customer and saw some guy messing with the bikes on a bike rack using a screwdriver. Then another guy showed up, so the first one stopped, slipped the screwdriver into his pocket, and started smoking a cigarette like nothing was going on. I was debating whether to report it or not - but then I noticed his jacket said "Russia" in big letters on the back, and that settled it for me.

That was only the second time in my life I’ve called the emergency number.
