Tomorrow’s Doctors Must Learn to Think With A.I.

A new study shows that artificial intelligence can outperform physicians at diagnosis,
not because it’s smarter, but because doctors haven’t yet been trained to use it well.
When a recent randomized trial asked physicians to solve complex diagnostic cases with the help of a large language model (LLM), the results were surprising. The study involved 50 doctors, a mix of residents and attending physicians in internal, family, and emergency medicine. It found that physicians using conventional diagnostic resources scored an average of 74 percent on a reasoning rubric, while those given access to an A.I. chatbot alongside their usual tools scored 76 percent. The difference was not statistically significant.

The unexpected twist came when the chatbot alone was tested on the same cases. Without
human help, it scored about 92 percent. In other words, the model outperformed doctors by a wide margin when working independently, even though it failed to boost their performance when used as a support tool.
The researchers concluded that the issue was not with the technology but with how it was used. Doctors had access to the chatbot, but many treated it like a search engine, asking simple, direct questions rather than providing full case details and probing its reasoning. As a result, they failed to benefit from its capacity to analyze complex clinical information and weigh alternative diagnoses.
As a data scientist observing this field, I see a clear message: medical education must evolve so that physicians can engage with these systems effectively, not fear or ignore them. Tools like ChatGPT and other large-language models are not replacements for human expertise — they are amplifiers of it. But they only add value when users understand how to harness them.

Medicine’s New Literacy

Traditionally, medical training has emphasized memorization, recall, and mastery of vast bodies of knowledge. That approach made sense when access to information was limited. Today, however, medical knowledge is expanding at a rate no individual can keep up with; by one estimate, it doubles every 73 days. The focus should now shift from memorizing information to developing the skills needed to interpret, question, and collaborate with intelligent systems that can process it.

The study illustrates that gap. It wasn't that doctors lacked intelligence or experience; they lacked familiarity with how to think with A.I. The result mirrors what we've seen in other fields: when professionals don't understand how to use a tool, they underuse it. But once properly trained, their performance improves dramatically.

Teaching Future Physicians to Think With Technology

To close this gap, medical curricula should begin introducing foundational A.I. literacy. That
includes understanding how models generate outputs, what their limitations are, and how to evaluate their reasoning critically. Physicians should also learn principles of data science,
statistics, and human-machine collaboration — not to become programmers, but to remain
effective decision-makers in an era when digital reasoning will be part of every clinical workflow.
Equally important is training around cognitive bias. The study showed that many doctors disregarded the chatbot's suggestions when they contradicted their initial impressions. That same tendency, overconfidence in a first hypothesis, is a well-documented cause of diagnostic error. Teaching physicians to use A.I. as a second-opinion partner could help mitigate those biases rather than reinforce them.
The message from this research isn’t that A.I. will replace physicians, or that medicine has failed to keep up. It’s that modern medicine now depends on a new form of literacy — one that blends clinical reasoning with technological fluency. Just as past generations of doctors learned to read X-rays or interpret lab data, the next generation must learn to engage intelligently with artificial intelligence. A.I. isn’t here to challenge medical expertise; it’s here to extend it. The question is not whether we will trust machines, but whether we will teach our doctors to use them well.
