ChatGPT has been a godsend, with people using it for everything from planning their day to building websites. But even with its vast knowledge, there are a few simple puzzles it just can't figure out.
Horse racing puzzle
You have six horses and want to race them to see which is the fastest. What is the best way to do this?
This is a simple logic question. What is the quickest way to find the fastest horse? Well, duh: race all six horses together and see which finishes first.
ChatGPT (yes, even the latest model) thinks otherwise. It confidently proposes dividing the horses into two groups of three, racing each group, and then racing the winners against each other. It insists this is the fastest way to identify the winner with the fewest races.
In a real-life scenario with a narrow horse track, ChatGPT's answer might make sense. But in this hypothetical, there is no limit on how many horses can run at once. ChatGPT pulls a constraint out of thin air and builds its logic on top of it.
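Even granting an invented track-capacity constraint, the arithmetic is simple: each heat of k horses eliminates at most k − 1 contenders, and finding the single fastest horse means eliminating all but one. A minimal sketch of that reasoning (the function name and `track_capacity` parameter are my own, for illustration):

```python
import math

def races_to_find_fastest(n_horses, track_capacity):
    """Minimum races needed to identify the single fastest horse,
    assuming finishing order in every heat is fully reliable."""
    if track_capacity >= n_horses:
        return 1  # no constraint: one race settles it
    # each heat of up to `track_capacity` horses eliminates at most
    # (track_capacity - 1) contenders; n_horses - 1 must be eliminated
    return math.ceil((n_horses - 1) / (track_capacity - 1))

print(races_to_find_fastest(6, 6))  # 1: all six run together
print(races_to_find_fastest(6, 3))  # 3: two heats of three, then a final
```

With no capacity limit, the answer is one race, exactly as the puzzle intends; ChatGPT's two-heats-plus-final plan only becomes optimal if at most three horses fit on the track.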
To me, this shows that ChatGPT is not really reasoning. It is a wordsmith that produces what looks like the most logical answer based on its training data. Here, we knew the answer in advance. If we hadn't, that confident answer could have blinded us to the obvious.
I tried all the prompts in this article with GPT-4o on a Plus subscription.
The farmer crosses the river
A farmer wants to cross a river and take with him a wolf, a goat, and a cabbage. He has a boat with three separate secure compartments. If the wolf and the goat are alone on a shore, the wolf will eat the goat. If the goat and the cabbage are alone, the goat will eat the cabbage. How can the farmer efficiently get everything across the river without anything being eaten?
The classic version of this riddle (without secure compartments) could stump a five-year-old, but with the compartments, the answer is a no-brainer. The farmer simply puts the wolf, the goat, and the cabbage in their own compartments and crosses the river in one trip. Simple.
ChatGPT, however, ignores the compartments entirely. It suggests the farmer make four trips back and forth to ferry everything across safely, as if the animals and the cabbage were still at risk. It's as if ChatGPT is stuck on the traditional form of the puzzle.
Because the classic version of this puzzle is so widespread online, the AI defaulted to it. It's a reminder that ChatGPT doesn't solve problems with human common sense. It uses patterns, not logic. As a result, ChatGPT can fail a simple riddle like this while building a web app from scratch.
Circular seating puzzle
Alan, Bob, Colin, Dave and Emily stood in a circle. Alan is to the immediate left of Bob. Bob is to the immediate left of Colin. Colin is to Dave's immediate left. Dave is to Emily's immediate left. Who is to the immediate right of Alan?
Another trick question, this time testing spatial reasoning. Except you don't need a diagram or any visualization. The first clue already contains the answer: if Alan is to the immediate left of Bob, then Bob must be to the immediate right of Alan. The answer is Bob.
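You can also check this mechanically. A minimal sketch, assuming the convention that "X is to the immediate left of Y" means Y comes next as you go around the circle (the helper function is my own):

```python
# The four clues chain everyone into a single ring:
# Alan -> Bob -> Colin -> Dave -> Emily -> (back to Alan)
order = ["Alan", "Bob", "Colin", "Dave", "Emily"]

def immediate_right(name):
    """The person on `name`'s immediate right, i.e. the next
    person around the ring under the convention above."""
    i = order.index(name)
    return order[(i + 1) % len(order)]

print(immediate_right("Alan"))  # Bob
print(immediate_right("Dave"))  # Emily
```

Five lines of code settle what ChatGPT's circle diagram got wrong: Bob, not Emily, sits to Alan's immediate right.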
ChatGPT struggles with spatial questions. It handles words and language well (math and programming are languages, too), but spatial problems trip it up. A question like this looks as if it requires visualization, but it doesn't, and it still overwhelms the AI.
In my case, ChatGPT produced a nice diagram of the circle, but it concluded that Emily was to Alan's immediate right. Even by its own diagram, that's wrong: Emily is to Dave's right, not Alan's.
Again, ChatGPT may simulate intelligence, but it is not really reasoning. Of course, you might get a correct answer if you try the prompt yourself. But should common sense come down to chance? How can you tell a hallucination from a legitimate answer if you don't already know the answer?
Russian Roulette
I'm playing Russian roulette with a six-shot revolver. My opponent loads five bullets, spins the cylinder, and fires at himself, but no bullet comes out. He gives me the choice of whether or not to spin the cylinder again before he fires at me. Should he spin again?
Yes! He should spin again. There is only one empty chamber, and the opponent just used it. That means the next chamber definitely holds a bullet. If the cylinder is spun again, there is a 1/6 chance it lands on the empty chamber.
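A quick Monte Carlo check backs up that arithmetic. This is a sketch under an assumed layout (chamber 0 is the lone empty one; the opponent just survived, so the hammer sits on it):

```python
import random

def survival_probability(respin, trials=200_000):
    """Estimate the victim's chance of surviving the next pull."""
    survivals = 0
    for _ in range(trials):
        if respin:
            chamber = random.randrange(6)  # fresh spin: any of six chambers
        else:
            chamber = 1  # cylinder advances one step to a loaded chamber
        if chamber == 0:  # the empty chamber means survival
            survivals += 1
    return survivals / trials

print(survival_probability(respin=False))  # 0.0: the next chamber is loaded
print(survival_probability(respin=True))   # ~0.167, i.e. about 1/6
```

Without a spin, survival is impossible; with a spin, it is about 1 in 6. The odds are emphatically not "the same regardless".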
ChatGPT starts off strong by suggesting the opponent spin again, but then it fumbles the math. It incorrectly states there is a 5/6 chance the next shot will be fatal without a spin, then claims the odds are the same whether or not the cylinder is spun. It ends up contradicting itself.
You can use ChatGPT as a data analyst to crunch numbers, but as these riddles show, it can stumble on even basic logic. Here, the AI's mistakes were easy to spot because we already knew the answers. ChatGPT is a master of words: its answers are so confident and well articulated that even a wrong one can be convincing. If you don't know better, you can fall victim to a hallucination.
ChatGPT is brilliant in many ways, but these examples remind us of its limits. It doesn't think like we do; it regurgitates patterns. When you ask it a question like the ones above, it falls back on those same patterns and can end up trapped in a cycle of overconfidence.
Use ChatGPT as a tool, not a crutch. It’s great for brainstorming and summarizing, but don’t rely on it as a substitute for human common sense.