Mniot

joined 3 months ago
[–] Mniot@programming.dev 23 points 1 day ago (2 children)

I think key context is that the guy is representing himself as having special knowledge about what Signal is doing internally and what they'll do next.

It's not "you bump into some rando on the street. Don't you know she's CEO of Signal??"

It's "you're giving a Ted Talk about Signal. The woman in the front row offers a correction and you're like, 'shut up, dummy.'"

[–] Mniot@programming.dev 23 points 1 day ago (2 children)

I'm downvoting because of your edit complaining about down-votes.

[–] Mniot@programming.dev 2 points 2 days ago

The current state of things is that they cover their faces and refuse to give any ID. Even fake ID.

I think if you followed the post suggestion and the result was that ICE would give fake names and fake badge-numbers, that would actually be positive, because "agents lie about their identity" is something new and interesting. The strategy would then need to change, but in the meantime it was useful.

[–] Mniot@programming.dev 17 points 6 days ago (3 children)

Iran's not shooting missiles in defense of Palestine, just in retaliation for Israel shooting at them.

But there's certainly a level of "oh, is blowing up an apartment building a bad thing? Then WTF have you been doing???"

[–] Mniot@programming.dev 1 points 1 week ago (1 children)

This is good advice for all tertiary sources such as encyclopedias, which are designed to introduce readers to a topic, not to be the final point of reference. Wikipedia, like other encyclopedias, provides overviews of a topic and indicates sources of more extensive information.

The whole paragraph is kinda FUD except for this. Normal research practice is to (get ready for a shock) do research and not just copy a high-level summary of what other people have done. If your professors were saying, "don't cite encyclopedias, which includes Wikipedia" then that's fine. But my experience was that Wikipedia was specifically called out as being especially unreliable and that's just nonsense.

I personally use ChatGPT like I would Wikipedia

Eesh. The value of a tertiary source is that it cites the secondary sources (which cite the primary). If you strip that out, how's it different from "some guy told me..."? I think your professors did a bad job of teaching you about how to read sources. Maybe because they didn't know themselves. :-(

[–] Mniot@programming.dev 2 points 1 week ago (1 children)

I think it was. When I think of Wikipedia, I'm thinking about how it was in ~2005 (20 years ago) and it was a pretty solid encyclopedia then.

There were (and still are) some articles that are very thin. And some that have errors. Both of these things are true of non-wiki encyclopedias. When I've seen a poorly-written article, it's usually on a subject that a standard encyclopedia wouldn't even cover. So I feel like that was still a giant win for Wikipedia.

[–] Mniot@programming.dev 8 points 1 week ago (6 children)

I think the academic advice about Wikipedia was sadly mistaken. It's true that Wikipedia contains errors, but so do other sources. The problem was that it was a new thing and the idea that someone could vandalize a page startled people. It turns out, though, that Wikipedia has pretty good controls for this over a reasonable time-window. And there's a history of edits. And most pages are accurate and free from vandalism.

Just as you shouldn't uncritically read any of your other sources, you shouldn't uncritically read Wikipedia. But if you are going to read uncritically, Wikipedia is far from the worst thing to blindly trust.

[–] Mniot@programming.dev 42 points 1 week ago

I don't think the article summarizes the research paper well. The researchers gave the AI models simple-but-large (which they confusingly called "complex") puzzles. Like Towers of Hanoi but with 25 discs.

The solution to these puzzles is nothing but patterns. You can write code that solves the Tower puzzle for any size n, and the whole program fits in less than a screen.

The problem the researchers see is that on these long, pattern-based solutions, the models follow a bad path and then just give up long before they hit their limit on tokens. The researchers don't have an answer for why this is, but they suspect that the reasoning doesn't scale.
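To illustrate the point about the puzzle being pure pattern, here's a minimal Python sketch of a Towers of Hanoi solver (my own illustration, not code from the paper): the standard recursion fits in a few lines and handles any disc count.

```python
def hanoi(n, src, dst, aux):
    """Yield the (from_peg, to_peg) moves that transfer n discs from src to dst."""
    if n == 0:
        return
    yield from hanoi(n - 1, src, aux, dst)  # move n-1 discs out of the way
    yield (src, dst)                        # move the largest disc
    yield from hanoi(n - 1, aux, dst, src)  # move the n-1 discs back on top

# The 25-disc instance from the paper takes 2**25 - 1 = 33,554,431 moves:
moves = sum(1 for _ in hanoi(25, "A", "C", "B"))
print(moves)  # → 33554431
```

So the "complexity" the researchers probe is really solution length, not conceptual difficulty: the move sequence is long but entirely regular.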

[–] Mniot@programming.dev 1 points 2 weeks ago

Can't even ignore the pain of getting a post deleted...

[–] Mniot@programming.dev 3 points 2 weeks ago

But delete-instead-of-downvote is how you drive out the trolls. If you give shitty people a platform labeled "I think this person is wrong" then you've still given them a platform.

[–] Mniot@programming.dev 7 points 2 weeks ago

They said "please stop donating". Returning funds or organizing what to do with them is a bunch of work. If they're shutting down because running the instance is too much work and they feel hassled then I wouldn't begrudge them just keeping the few thousand left over.

[–] Mniot@programming.dev 31 points 2 weeks ago (1 children)

The title of this post is disappointing. The given answer is sound and it seems safe to assume it was arrived at by thinking mathematically.

 

"I found an entirely new way to get out of 'what do you want to get for dinner?'"

 

As opposed to "interactivity". I saw this in a post from wpb@lemmy.world: https://programming.dev/post/26779367/15573661
