Am I the only one who didn’t know that AI cannot figure out time? I mean, every day we hear about generative AI “revolutionizing” everything and replacing everyone. Pretty genius little things. So imagine my shock when I learned that multimodal AI models cannot tell time. How do I know, you ask?
To start with, researchers at the University of Edinburgh recently found that multimodal large language models (MLLMs) like GPT-4o, GPT-o1, Gemini-2.0, and Claude 3.5 Sonnet struggled to read the time from images of analog clock faces.
Things got worse with clocks that used Roman numerals, a colored dial, or a stylized hour hand; some also added a second hand on top of the minute and hour hands. Faced with those design touches, the models reportedly made even more errors.
The discovery came from a benchmark of today’s top MLLMs, and it sounds almost comical that Gemini-2.0 performed “best” at only 22.8% accuracy. GPT-4o and GPT-o1 trailed with exact-match accuracies of just 8.6% and 4.84%, respectively.
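For context, “exact match” is as strict as it sounds: the model’s answer must equal the ground-truth time character for character, so a reading that is off by a single minute scores zero. Here is a minimal sketch of how that kind of metric is typically computed (my own illustration, not the study’s actual evaluation harness):

```python
def exact_match_accuracy(predictions: list[str], ground_truth: list[str]) -> float:
    """Fraction of predictions that match the reference answer exactly.
    No partial credit: "3:31" against a true "3:30" counts as wrong."""
    assert len(predictions) == len(ground_truth)
    hits = sum(pred == truth for pred, truth in zip(predictions, ground_truth))
    return hits / len(ground_truth)

# Hypothetical model outputs scored against true clock readings
print(exact_match_accuracy(["3:30", "3:31", "12:00"], ["3:30", "3:30", "12:00"]))  # ~0.67
```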
Per the researchers, these models struggled with everything. Which hand is the hour hand? Which direction is it pointing? What angle corresponds to what time? What number is that? And the more a clock face deviated from the standard design, the more likely the model under test was to misread it.
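The frustrating part is that the geometry itself is trivial: each hand’s angle maps deterministically to a time. Here is a minimal Python sketch of that mapping (the function and angle inputs are my own illustration; the hard part for an MLLM is perceiving the angles in the first place):

```python
def time_from_angles(hour_deg: float, minute_deg: float) -> str:
    """Recover the displayed time from hand angles, measured clockwise
    from 12 o'clock. The minute hand sweeps 6 degrees per minute; the
    hour hand moves 30 degrees per hour (0.5 degrees per minute)."""
    minute = round(minute_deg / 6) % 60
    # The hour is simply whichever 30-degree sector the hour hand sits in.
    hour = int(hour_deg // 30) % 12 or 12
    return f"{hour}:{minute:02d}"

# At 3:30 the hour hand sits at 105 degrees, the minute hand at 180.
print(time_from_angles(105.0, 180.0))  # "3:30"
```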
These are basic skills for people. Most six- or seven-year-olds can already tell time. But for these models, it might as well be advanced astrophysics.
After the clock fiasco, the researchers tested the bots on yearly calendars. You know, the ones with all twelve months on one page. GPT-o1 performed the “best” here, reaching 80 percent accuracy. But that still means one out of every five answers was wrong, including answers to simple questions like “Which day of the week is New Year’s Day?” If my child got that wrong on a quiz, I would honestly be very worried.
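To put that in perspective, the question the models kept fumbling is a one-line lookup for conventional software (the sample years below are my own, not drawn from the study):

```python
from datetime import date

# Day of the week for New Year's Day: deterministic, no "reasoning" required.
for year in (2024, 2025, 2026):
    print(year, date(year, 1, 1).strftime("%A"))
# 2024 Monday
# 2025 Wednesday
# 2026 Thursday
```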
I never would have thought that AI models could get confused by a common calendar layout. But on reflection, it is not so surprising. It boils down to a long-standing gap in AI development: MLLMs recognize patterns they have already seen, and clocks, calendars, and anything else that demands spatial reasoning do not reduce to pattern matching.
Humans can look at a warped Dali clock and still figure out roughly what time it is meant to display. But AI models see a slightly thicker hour hand and kind of short-circuit.
It is easy (almost satisfying) to laugh at ChatGPT, Gemini, and their peers for failing a task you learned as a small child and now do without thinking. As someone who has been jilted by clients in favor of the free, albeit substandard, work these tools churn out, I admit I find it genuinely satisfying.
But as much as I want to laugh it off, there is a more serious angle here. These same MLLMs are being pushed into autonomous-driving perception, medical imaging, robotics, and accessibility tools. They are being used for scheduling and automation, and in real-time decision-making systems.
Now, clock-reading errors are funny. But medical errors? Navigation errors? Even scheduling errors? Not so funny.
If a model cannot reliably read a clock, trusting it blindly in high-stakes environments is too risky a gamble for me. It shows how far these systems still are from actual, grounded intelligence, and how much human common sense and nuance still matter. I am trying hard not to turn this into a humans-versus-AI case, and I won’t use it to preach “Why I Hate AI and You Should Too.” But there is a real problem here, and it needs attention.
As the study’s lead author, Rohit Saxena, put it, these weaknesses “must be addressed if AI systems are to be successfully integrated into time-sensitive real-world applications.”

