minoscopede

joined 11 months ago
[–] [email protected] 14 points 4 days ago

💯 we should all be very wary of voting machines. If it's not fully open source and cryptographically verifiable, it's not secure.

[–] [email protected] 3 points 1 week ago* (last edited 1 week ago)

That write-up is much more than just "don't vote." It's about fully withdrawing from the system and rejecting citizenship, including all of the things that come with it, like paying taxes and owning private property.

If someone pays taxes, legitimizes the government, and also doesn't vote... Then that's likely the worst of both worlds from the author's perspective.

[–] [email protected] 3 points 1 week ago* (last edited 1 week ago) (1 children)

I'd encourage you to dig into this space and learn more.

As it is, the statement "Markov chains are still the basis of inference" doesn't make sense, because Markov chains are a separate thing. You might be thinking of Markov decision processes, which are used in training RL agents, but that's also unrelated, because these models are not RL agents; they're supervised learning models. And even if they were RL agents, the MDP describes the training environment, not the model itself, so it's not really used for inference.
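To be concrete about the distinction: a Markov chain is just a set of states plus fixed transition probabilities, where the next state depends only on the current one. A toy sketch (the states and probabilities are made up for illustration):

```python
import random

# A two-state weather Markov chain: the next state depends only on the
# current state, via fixed transition probabilities (made-up numbers).
transitions = {
    "sunny": [("sunny", 0.9), ("rainy", 0.1)],
    "rainy": [("sunny", 0.5), ("rainy", 0.5)],
}

def step(state):
    """Sample the next state given only the current state."""
    states, probs = zip(*transitions[state])
    return random.choices(states, weights=probs)[0]

state = "sunny"
walk = [state]
for _ in range(10):
    state = step(state)
    walk.append(state)
print(walk)
```

Nothing here is learned or trained, which is why calling it "the basis of inference" for these models doesn't fit.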

I mean this just as an invitation to learn more, not as pushback for raising concerns. Many in the research community would be more than happy to welcome you into it. The world needs more people who are skeptical of AI doing research in this field.

[–] [email protected] 67 points 1 week ago* (last edited 1 week ago) (22 children)

I see a lot of misunderstandings in the comments 🫀

This is a pretty important finding for researchers, and it's not obvious by any means. This finding is not showing a problem with LLMs' abilities in general. The issue they discovered is specifically for so-called "reasoning models" that iterate on their answer before replying. It might indicate that the training process is not sufficient for true reasoning.

Most reasoning models are not incentivized to think correctly, and are only rewarded based on their final answer. This research might indicate that's a flaw that needs to be corrected before models can actually reason.

[–] [email protected] 19 points 1 week ago

Thank you for posting the article date! You are a treasure

[–] [email protected] 1 points 2 weeks ago

Beautiful! I'll definitely give this a go

[–] [email protected] 17 points 2 weeks ago (3 children)

`python -m http.server` is still my media server of choice. It's never let me down.
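If you ever want the same thing but scriptable (picking the directory or port programmatically), the same stdlib server can be driven from Python. A minimal sketch, with the directory name and test file made up for illustration:

```python
import os
import threading
import urllib.request
from functools import partial
from http.server import ThreadingHTTPServer, SimpleHTTPRequestHandler

# Rough equivalent of `python -m http.server --directory media`
# (the "media" directory and hello.txt are just examples).
os.makedirs("media", exist_ok=True)
with open("media/hello.txt", "w") as f:
    f.write("hi")

handler = partial(SimpleHTTPRequestHandler, directory="media")
server = ThreadingHTTPServer(("127.0.0.1", 0), handler)  # port 0 = pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

port = server.server_address[1]
body = urllib.request.urlopen(f"http://127.0.0.1:{port}/hello.txt").read()
print(body)  # b'hi'
server.shutdown()
```

For actually serving media on your LAN you'd bind to `0.0.0.0` and a fixed port, and just call `serve_forever()` in the foreground.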

[–] [email protected] 4 points 3 weeks ago

On any site with unverified signups (all of them) you can't.

If you want to talk to real people, you'd have to use a platform that has in-person ID verification. Like a pub, or a park.

Good luck finding a bot-free place on your phone. It'd have to involve zero-knowledge proofs and biometrics. And even then, you can't really be sure that person isn't using a bot to write for them unless you have full root access to their system and a live webcam feed.

[–] [email protected] 1 points 1 month ago

> I want clear evidence that the LLM will tell me if it doesn’t know something, and will never hallucinate or make something up.
>
> Every step of any deductive process needs to be citable and traceable.

I mostly agree, but "never" is too high a bar IMO. It's way, way higher than the bar we hold humans to. Maybe something like a 0.1% error rate would be reasonable?

Even Einstein misremembered things sometimes.

[–] [email protected] 7 points 1 month ago* (last edited 1 month ago) (1 children)

Eh, certain parts of LA are safe. But LA is actually pretty conservative in other areas, due to a large religious population, and a lot of first-gen immigrants.

[–] [email protected] 5 points 1 month ago* (last edited 1 month ago) (2 children)

I have to ask: would this story be so popular if they didn't mention that the four people who did this were Chinese?

Racism doesn't disappear just because the article doesn't say the quiet part out loud. We all know the thought process that led to this article's virality.

Let's do better, Lemmy. We all have an opportunity to make the world a more tolerant and empathetic place through what we post and upvote.
