this post was submitted on 22 Jun 2025
108 points (73.9% liked)

Programming Humor

[–] [email protected] 3 points 5 days ago (8 children)

GPT-4o got wrecked. My AI-fan friend said o3 is their reasoning model, so this result means nothing. I don't agree, but I can't find proof.

Has anyone done this with o3?

[–] [email protected] 18 points 5 days ago (7 children)

It's a fundamental limitation of how LLMs work. They simply can't follow a set of rules the way a traditionally programmed computer or game engine does.

Imagine you have only long-term memory that you can't add to. You might get a few sentences of short-term memory before you've forgotten the context from the beginning of the conversation.

On top of that, chess is very much a forward-thinking game, and LLMs don't stand a chance against purpose-built search methods. It's the classic case of “when all you have is a hammer, everything looks like a nail.” LLMs can be a great tool, but they can't be your only tool.
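
To make “forward-thinking” concrete, here's a rough minimax sketch using the python-chess library (my own illustration, not anything from the post; assumes `pip install chess`, and the material values and depth are toy choices, not a serious engine):

```python
# A rough sketch of classical "forward-thinking": plain minimax search
# with the python-chess library. Toy evaluation, shallow depth.
import chess

# Naive material values -- a toy evaluation function.
VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
          chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}

def evaluate(board: chess.Board) -> int:
    """Material balance from White's point of view."""
    return sum(v * (len(board.pieces(p, chess.WHITE)) - len(board.pieces(p, chess.BLACK)))
               for p, v in VALUES.items())

def minimax(board: chess.Board, depth: int) -> int:
    """Look `depth` plies ahead; only rule-legal moves are ever considered."""
    if depth == 0 or board.is_game_over():
        return evaluate(board)
    scores = []
    for move in board.legal_moves:   # legality enforced by construction
        board.push(move)
        scores.append(minimax(board, depth - 1))
        board.pop()
    return max(scores) if board.turn == chess.WHITE else min(scores)

def best_move(board: chess.Board, depth: int = 2) -> chess.Move:
    """Pick the legal move whose look-ahead score is best for the side to move."""
    def score(move: chess.Move) -> int:
        board.push(move)
        s = minimax(board, depth - 1)
        board.pop()
        return s
    moves = list(board.legal_moves)
    return max(moves, key=score) if board.turn == chess.WHITE else min(moves, key=score)

board = chess.Board()
print(board.san(best_move(board)))  # always a legal move, by construction
```

The point is that legality is enforced by construction: the search can only ever consider moves the rules allow, and it evaluates every candidate line several plies deep. Next-token prediction gives you neither guarantee.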

[–] [email protected] 5 points 5 days ago (1 children)

Or: if a problem can be solved with a simple algorithm, that algorithm will always be infinitely more accurate than ML.

[–] [email protected] 9 points 5 days ago

That's because an algorithm has an expected output that can be tested and verified for accuracy, since it behaves consistently every time. If there appears to be inconsistency, that's a design flaw in the algorithm.
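
A tiny illustration of that (my example, not the commenter's): a deterministic algorithm has exactly one right answer per input, so a test is pass/fail rather than a statistical score:

```python
# A deterministic algorithm has an exact expected output, so tests either
# pass or fail -- there is no "mostly right".

def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: same inputs, same output, every time."""
    while b:
        a, b = b, a % b
    return a

# Exact verification: any deviation is a bug in the algorithm, not noise.
assert gcd(48, 18) == 6
assert gcd(17, 5) == 1
print("all checks passed")
```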
