What statistical method do you base that claim on? The results presented match expectations, given that Markov chains are still the basis of inference. What magic juice is added to "reasoning models" that allows them to break free of the inherent boundaries of the statistical methods they are based on?
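To make the Markov-chain framing concrete, here is a minimal toy sketch in plain Python, no real LLM involved: an n-gram model learns conditional next-token frequencies from a corpus and then samples from them at inference time. Transformers condition on much longer contexts through embeddings and attention, but the decoding loop has the same shape, sampling the next token from a learned conditional distribution. The corpus, order, and seed below are made up purely for illustration.

```python
import random
from collections import defaultdict, Counter

def build_markov_model(tokens, order=2):
    """Count how often each token follows each length-`order` context."""
    model = defaultdict(Counter)
    for i in range(len(tokens) - order):
        context = tuple(tokens[i:i + order])
        model[context][tokens[i + order]] += 1
    return model

def sample_next(model, context):
    """Sample the next token in proportion to observed counts,
    i.e. from the learned conditional distribution."""
    counts = model.get(tuple(context))
    if not counts:
        return None
    candidates, weights = zip(*counts.items())
    return random.choices(candidates, weights=weights)[0]

def generate(model, seed, order=2, max_tokens=20):
    out = list(seed)
    for _ in range(max_tokens):
        nxt = sample_next(model, out[-order:])
        if nxt is None:
            break
        out.append(nxt)
    return " ".join(out)

corpus = "the hero leaves home , the hero meets a mentor , the hero returns home".split()
model = build_markov_model(corpus, order=2)
print(generate(model, seed=["the", "hero"], order=2))
```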
Tobberone
joined 2 years ago
Apple just proved AI "reasoning" models like Claude, DeepSeek-R1, and o3-mini don't actually reason at all. They just memorize patterns really well.
"You can't just have Geralt for every single game" says his voice actor, and if you think The Witcher 4 making Ciri the protagonist is "woke," then "read the damn books"
A Swedish company deploying underwater tidal kites in the Faroe Islands says 500 of them would supply 100% of Alaska's electricity needs.
Applying 'extreme heat' to lithium-ion batteries reportedly restores their capacity, and I think it's the sustainable tech breakthrough of 2025
It shouldn't be a surprise that an LLM wants to get to the resolution of the plot quickly; all the literature it has been fed always leads to the resolution. That it is in fact the suspense, the road to the solution, that keeps the story interesting isn't something an LLM can understand, because it never analyses the stories, only what words are used.
A weird phrase is plaguing scientific papers – and we traced it back to a glitch in AI training data
Which method, then, is the inference built upon, if not the embeddings? And the question still stands: how does "AI" escape the inherent limits of statistical inference?
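For what it's worth, even with embeddings in the picture the last step of decoding is still statistical: hidden states are projected to vocabulary logits, softmaxed into a probability distribution, and a token is sampled. The sketch below uses random weights and mean-pooling as stand-ins for learned parameters and attention, so it shows the shape of the computation, not any particular model.

```python
import numpy as np

rng = np.random.default_rng(0)

vocab = ["the", "hero", "returns", "home", "."]
d_model = 8

# Stand-ins for learned parameters: token embeddings and an output projection.
embeddings = rng.normal(size=(len(vocab), d_model))
output_proj = rng.normal(size=(d_model, len(vocab)))

def next_token_distribution(context_ids):
    """Pool the context embeddings, project to vocabulary logits, softmax.

    Real transformers replace the mean-pooling with attention layers, but the
    final output is the same kind of object: a probability distribution over
    the vocabulary, from which the next token is sampled."""
    hidden = embeddings[context_ids].mean(axis=0)
    logits = hidden @ output_proj
    probs = np.exp(logits - logits.max())
    return probs / probs.sum()

context = [vocab.index("the"), vocab.index("hero")]
probs = next_token_distribution(context)
next_id = rng.choice(len(vocab), p=probs)
print(vocab[next_id], dict(zip(vocab, probs.round(3))))
```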