foundry27 9 hours ago

I just tried the same puzzle in o3 using the same image input, but tweaked the prompt to say “don’t use the search tool”. Very similar results!

It spent the first few minutes analyzing the image and cross-checking various slices of the image to make sure it understood the problem. Then it spent the next 6-7 minutes trying to work through various angles to the problem analytically. It decided this was likely a mate-in-two (part of the training data?), but went down the path that the key to solving the problem would be to convert the position to something more easily solvable first. At that point it started trying to pip install all sorts of chess-related packages, and when it couldn’t get that to work it started writing a simple chess solver in Python by hand (which didn’t work either). At one point it thought the script had found a mate-in-six that turned out to be due to a script bug, but I found it impressive that it didn’t just trust the script’s output - instead it analyzed the proposed solution and determined the nature of the bug in the script that caused it. Then it gave up and tried analyzing a bit more for five more minutes, at which point the thinking got cut off and displayed an internal error.

15 minutes total, didn’t solve the problem, but fascinating! There were several points where if the model were more “intelligent”, I absolutely could see it reasoning it out following the same steps.

Shorn 4 hours ago

I asked ChatGPT about playing chess: it says tests have shown it makes an illegal move within 10-15 moves, even if prompted to play carefully and not make any illegal moves. It'll fail within the first 3 or 4 if you ask it to play reasonably quickly.

That means it can literally never win a chess match, given that an illegal move is an immediate loss.

It can't beat a human who can't play chess. It literally can't even lose properly. It will disqualify itself every time.

--

> It shows clearly where current models shine (problem-solving)

Yeh - that's not what's happening.

I say that as someone that pays for and uses an LLM pretty much every day.

--

Also - I didn't fact check any of the above about playing chess. I choose to believe.

  • red369 an hour ago

    I have tried playing chess with ChatGPT a couple of times recently, and I found it was making illegal moves after about 4 or 5 moves.

    The first few could be resolved by asking it to check its moves. After a few more, I was having to explain that knights can jump and therefore can't be blocked. It was also trying to move pieces that weren't there, onto squares already occupied by its own pieces, and asking it to review was not getting anywhere. 10-15 moves is very optimistic, unless it's counting each move by either side, i.e., White moves 5-8 times and Black moves 5-8 times. Even that seems optimistic, but the lower end could be right.

  • simonw 4 hours ago

    Preventing an LLM from making illegal moves should be very simple: provide it with tool access to something that tells it if a move is legal or not, then watch it iterate in a loop until it finds a move that it is allowed to make.

    I expect this would dramatically improve the chess-playing abilities of competent tool-using models, such as o3.
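
    Something like this, roughly (a sketch using python-chess; the tool wiring and the candidate move strings are illustrative, not any real OpenAI API):

      import chess

      board = chess.Board()  # current game state, updated as the game goes on

      def is_legal(move_uci: str) -> bool:
          """The 'tool': report whether a proposed UCI move is legal here."""
          try:
              return chess.Move.from_uci(move_uci) in board.legal_moves
          except ValueError:  # malformed strings like "e9e4"
              return False

      # The model proposes candidate moves; loop until one passes the check.
      for proposal in ["e2e5", "e9e4", "e2e4"]:  # hypothetical model outputs
          if is_legal(proposal):
              board.push_uci(proposal)
              break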

    • toolslive 3 hours ago

      Or just present it with the list of legal moves and force it to pick from that list.

      • simonw 3 hours ago

        I imagine there are points in a chess game, especially early on, where that list could have hundreds of moves - could use up a fair amount of tokens.

        • toolslive 2 hours ago

          Nope. The list is very limited. For the starting position: a3, a4, b3, b4, ..., h3, h4, Na3, Nc3, Nf3, Nh3.

          That's 20 moves. The size grows a bit in the early middlegame, but then drops again in the endgame. There do exist rather artificial positions with more than 200 legal moves, but the average number of legal moves in a position is around 40.
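
          Easy to confirm with python-chess, for what it's worth (a quick sketch):

            import chess

            board = chess.Board()  # starting position
            sans = [board.san(m) for m in board.legal_moves]
            print(len(sans), sans)  # 20 moves: a3, a4, ..., Nf3, Nh3

          Twenty-odd SAN strings is a tiny amount of text, so even a 40-move list costs very few tokens.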

freediver 16 hours ago

On a similar note, I just updated LLM Chess Puzzles repo [1] yesterday.

The fact that gpt-4.5 solves 85% of them correctly is unexpected and somewhat scary (if the model wasn't trained on this).

[1] https://github.com/kagisearch/llm-chess-puzzles

  • Gimpei 13 hours ago

    Given that o3 is trained on the contents of the internet, and the answers to all these chess problems are almost certainly on the internet in multiple places, in a sense it has been weakly trained on this content. The question for me becomes: is the LLM doing better on these problems because it's improving at reasoning, or is it simply improving at information retrieval?

    • globnomulous 4 hours ago

      And then there's the further question of where we draw the line in ourselves. One of my teachers -- a philosopher -- once said that real, actual thought is incredibly rare. He's a world-renowned expert but says he can count on one hand the number of times in his life that he felt he was thinking rather than remembering and reorganizing what he already knew.

      That's not to say "are you remembering or reasoning" means the same thing when applied to humans vs when it's applied to LLMs.

  • Hammershaft 2 hours ago

    It's getting incredibly difficult to find anything on the internet that these models weren't trained on, which is why recent LLM tests have used so much secrecy and only show a few sample questions.

  • alexop 16 hours ago

    Oh cool, I wonder how good o3 will be. While using o3, I noticed something funny: sometimes I gave it a screenshot without any position data. It ended up using Python and spent 10 minutes just trying to figure out exactly where the pieces were.

aledalgrande 4 hours ago

On a side note, I tried to get Codex + o3 to make an existing sidebar toggleable with Tailwind CSS, and it made an abomination full of bugs. This is a classic "boilerplate" task I'd expect it to be able to do. Not sure if I'm doing it wrong, but... slightly more direct instructions to o4-mini and it managed. The cost was astronomical tho compared to Anthropic.

CSMastermind 6 hours ago

It's weird to me that the author says this behavior feels human because it's nothing like how I solve this puzzle.

At no point during my process would I be counting pixels in the image. It feels very clearly like a machine that mimics human behavior without understanding where that behavior comes from.

  • alexop 4 hours ago

    Yes, exactly. What I meant is that a human would also try every "tool" available. In the case of o3, the only tools it had were Python and Bing.

    But you are right. It does not actually understand anything. It is just a next-token predictor that happens to have access to Python and Bing.

Kapura 14 hours ago

So... it failed to solve the puzzle? That seems distinctly unimpressive, especially for a puzzle with a fixed start state and a limited set of possible moves.

  • IanCal 14 hours ago

    > That seems distinctly unimpressive

    I cannot overstate how impressive this is to me, having been involved in AI research projects and robotics in years gone by.

    This is a general-purpose model that, given an image and a human-written request, analyses the image step by step, iterates through various options, tries to write code to solve the problem, and then searches the internet for help. It reads multiple results, finds an answer, checks to validate it, and then comes back to the user.

    I had a robot that took ages to learn to play tic-tac-toe by example, and if the robot was moved there was a solid chance it thought the entire world had changed and would freak out because it thought it might punch through the table.

    This is also a chess puzzle marked as very hard that a person who is good at chess should give themselves fifteen minutes to solve. The author of the chess.com blog containing this puzzle only solved about half of them!

    This is not an image analysis bot, it's not a chess bot, it's a general system I can throw bad english at.

    • dmurray 2 hours ago

      > This is also a chess puzzle marked as very hard that a person who is good at chess should give themselves fifteen minutes to solve. The author of the chess.com blog containing this puzzle only solved about half of them!

      I am human and I solved this before opening the blog post, because I've seen this problem 100 times before with this exact description. I don't understand why an LLM wouldn't have done the same, because pattern matching off things you saw on the internet is IIUC the main way LLMs work.

      (I am good at chess, but not world class. This is not a difficult mate in 2 problem: if I hadn't seen it, it would take a minute or so to solve, some composed 2-movers might take me 5 minutes).

      • dmurray 2 hours ago

        I just tried ChatGPT free with the prompt "There's a mate-in-two composed by Paul Morphy. What's the key move?". It searches and finds it immediately. But if I ask it not to search the internet, its response is incoherent (syntactically valid English and knows the names of the chess pieces, but otherwise hallucinated).

    • nathell 3 hours ago

      > This is also a chess puzzle marked as very hard that a person who is good at chess should give themselves fifteen minutes to solve.

      Is it, though? I play at around 1000 Elo – I have a long-standing interest in chess, but my brain invariably turns on a fog of war that makes me not notice threats to my queen or something – and I solved it in something like one minute. It has very few moving parts, so the solution, while beautifully unobvious, can easily be brute-forced by a human.

    • alexop 14 hours ago

      Yes, I agree. Like I said, in the end it did what a human would do: google for the answer. Still, it was interesting to see how the reasoning unfolded. Normally, humans train on these kinds of puzzles until they become pure pattern recognition. That's why you can't become a grandmaster if you only start learning chess as an adult — you need to be a kid and see thousands of these problems early on, until recognizing them becomes second nature. It's something humans are naturally very good at.

      • kamranjon 12 hours ago

        I am a human and I figured this puzzle out in under a minute by just trying the small set of possible moves until I got it correct. I am not a serious chess player. I would have expected it to at least try the possible moves? I think this maybe lends credence to the idea that these models aren’t actually reasoning but are doing a great job of mimicking what we think humans do.

    • andoando 13 hours ago

      I'm a 1600-rated player and this took me 20 seconds to solve. Is this really considered a very hard puzzle?

      The obvious moves don't work. You can see white's pawn moving forward is mate, and you can see black is essentially trapped and has very limited moves, so immediately I thought the first move is a waiting move, and there are only two options there. Block the black pawn from moving, and if the bishop moves, rook takes is mate. So the rook has to block, and you can see the bishop either moves or captures, and pawn moving forward is mate.

      • IanCal 22 minutes ago

        I don't know; I didn't spot the answer, and it's from a list of hard puzzles from a chess coach. The model also wasn't told it was mate in 2 (or even that a mate was possible), just to solve it, with white to move.

        https://www.chess.com/blog/ThePawnSlayer/checkmate-in-two-pu...

        Although perhaps this is missing the point - the process and chain here, in response to an image and a sentence, is extremely impressive. You can argue it's not useful, or not useful for specific use cases, but it's impressive.

      • bubblyworld 12 hours ago

        Agreed, I'm of similar strength (no FIDE rating, but ~2k Lichess) and it took me a few seconds as well. Not a hard puzzle, for a regular chess player anyway.

    • otabdeveloper4 4 hours ago

      OpenAI is a commercial company and their product is to make anthropomorphic chat bots.

      Clever Hans at web scale, so to speak.

      So if you're impressed by a model that spent 10 minutes and single-digit dollars to not solve a problem that has been solved before, then I guess their model is working exactly as expected.

    • Kapura 13 hours ago

      I am sorry, but if this impresses you, you are a rube. If this were a machine with the smallest bit of actual intelligence it would, upon seeing it's a chess puzzle, remember "hey, I am a COMPUTER and a small set of fixed moves should take me about 300ms or so to fully solve out" and then do that. If the machine _literally has to cheat to solve the puzzle_ then we have made technology that is, in fact, less capable than what we created in the past.

      "Well, it's not a chess engine so its impressive it-" No. Stop. At best what we have here is an extremely computationally expensive way to just google a problem. We've been googling things since I was literally a child. We've had voice search with google for, idk, a decade+. A computer that can't even solve its own chess problems is an expensive regression.

      • currymj 12 hours ago

        > "hey, i am a COMPUTER and a small set of fixed moves should take me about 300ms or so to fully solve out"

        from the article:

        "3. Attempt to Use Python When pure reasoning was not enough, o3 tried programming its way out of the situation.

        “I should probably check using something like a chess engine to confirm.” (tries to import chess module, but fails: “ModuleNotFoundError”).

        It wanted to run a simulation, but of course, it had no real chess engine installed."

        This strategy failed, but if OpenAI were to add "pip install python-chess" to the environment, it very well might have worked. In any case, the machine did exactly the thing you claim it should have done.
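
        For what it's worth, the brute force it was reaching for is only a few lines with python-chess (a sketch; the FEN below is the commonly quoted one for this Morphy two-mover, so double-check it against the puzzle image before trusting the output):

          import chess

          # Commonly quoted FEN for the Morphy mate-in-two (verify before relying on it).
          board = chess.Board("kbK5/pp6/1P6/8/8/8/8/R7 w - - 0 1")

          def has_mating_move(b: chess.Board) -> bool:
              """True if the side to move can deliver checkmate immediately."""
              for move in b.legal_moves:
                  b.push(move)
                  mate = b.is_checkmate()
                  b.pop()
                  if mate:
                      return True
              return False

          def mate_in_two(b: chess.Board):
              """Return a key move that forces mate in at most two, else None."""
              # Materialize the generator, since we mutate the board while iterating.
              for first in list(b.legal_moves):
                  b.push(first)
                  if b.is_checkmate():  # mate in one also counts
                      b.pop()
                      return first
                  if b.is_game_over():  # stalemate/draw refutes this try
                      b.pop()
                      continue
                  forced = True
                  for reply in list(b.legal_moves):
                      b.push(reply)
                      if not has_mating_move(b):  # every defense must allow mate
                          forced = False
                      b.pop()
                      if not forced:
                          break
                  b.pop()
                  if forced:
                      return first
              return None

          key = mate_in_two(board)
          print(board.san(key) if key else "no mate in two")  # expect Ra6, if the FEN is right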

        Possibly scrolling down to read the full article makes you a rube, though.

      • mhh__ 13 hours ago

        If you mean write code to exhaustively search the solution space, then they can actually do that quite happily, provided you tell them you will execute the code for them.

      • jncfhnb 12 hours ago

        Looks to me like it would have simulated the steps using sensible tools but didn’t know it was sandboxed out of using those tools? I think that’s pretty reasonable.

        Suppose we removed its ability to google and it conceded to doing the tedium of writing a chess engine to simulate the steps. Is that “better” for you?

      • bobsmooth 13 hours ago

        A computer program that has the agency to google a problem, interpret the results, and respond to a human was science fiction just 10 years ago. The entire field of natural language processing has been solved and it's insane.

        • otabdeveloper4 4 hours ago

          OpenAI's whole business is impressing you with whiz-bang sci-fi sound and fury.

          This is a bad thing because it means they gave up on solving actual problems and entered the snake oil business.

        • dimatura 8 hours ago

          Honestly, I think that if in 2020 you had asked me whether we would be able to do this in 2025, I would've guessed no, with a fairly high confidence. And I was aware of GPT back then.

demirbey05 2 hours ago

Because it's trained on human data.

bitbasher 6 hours ago

Is this that impressive, considering these models have probably been trained on numerous books/texts analyzing thousands of games (including Morphy's)?

sMarsIntruder 16 hours ago

So we're talking about OpenAI's o3 model, right?

  • janaagaard 15 hours ago

    I was also confused. It looks like the article has been corrected, and now uses the familiar 'o3' name.

  • bcraven 16 hours ago

    >"When I gave OpenAI’s 03 model a tough chess puzzle..."

    Opening sentence

    • monktastic1 15 hours ago

      A little annoying that they use zero instead of o, but yeah.

bfung 13 hours ago

LLMs are not chess engines, similar to how they don’t really calculate arithmetic. What’s new? Carry on.

  • triyambakam 4 hours ago

    Yeah, it's rather annoying how people (maybe due to marketing) expect a generalized model to be an expert in every domain.

awestroke 16 hours ago

o3 is massively underwhelming and is obviously tuned to be sycophantic.

Claude reigns supreme.

  • tomduncalf 15 hours ago

    Depends on the task, I think. o3 is really effective at going off and doing research; try giving it a complex task that involves lots of browsing/searching and watch how it behaves. Claude cannot do anything like that right now. I do find o3's tone of voice a bit odd.

BXLE_1-1-BitIs1 13 hours ago

Nice puzzle with a twist of Zugzwang. Took me about 8 minutes, but it's been decades since I last played chess.

cess11 2 hours ago

"o3 does not just spit out an answer. It reasons. It struggles. It switches tools. It self-corrects. Sometimes it even cheats, but only after exhausting every other option. That feels very human."

I've never met a human player that suddenly says 'OK, I need Python to figure out my next move'.

I'm not a good player; usually I just do ten-minute matches against the weakest Stockfish settings so as not to be annoying to a human, and I figured this one out in a couple of minutes because there are very few options. Taking with the rook doesn't work, taking with the pawn also doesn't, so it has to be a non-taking move, and the king can't do anything useful, so it has to be the rook. Typically in these puzzles it's a sacrifice that unlocks the solution. And it was.

tgtweak 15 hours ago

I remember reading that gpt-3.5-turbo-instruct was oddly good at chess - would be curious what it outputs as the next two moves here.

ttoinou 15 hours ago

Where does this obsession with giving binary-logic tasks to LLMs come from? New LLM breakthroughs are about handling blurry logic and imprecise requirements, and producing vague, human-realistic outputs. Who cares how well it can add integers or solve chess puzzles? We have decades of computer science on those topics already.

  • Arainach 15 hours ago

    If we're going to call LLMs intelligent, they should be performant at these tasks as well.

    • ttoinou 14 hours ago

      We called our computers intelligent even when they couldn't do so many things LLMs can now do easily.

      But yeah, calling them intelligent is a marketing trick that is very effective.

tough 16 hours ago

I've committed the 03 (zero-three) and not o3 (o-three) typo too, but can we rename it in the title please?

  • dang 6 hours ago

    Fixed. Thanks!