Dr. Nicholas Dodaro believes in healthcare innovation that delivers a high-quality, patient-centered experience. His career has focused on harnessing transformational developments in which clinical and non-clinical teams work together to make a meaningful impact on patients' quality of life. His experience bridging high-acuity bedside medicine and real-world financial and business decisions has given him insight into the machine we call healthcare. Dr. Dodaro believes in simplifying the healthcare experience to one that creates access for people within the context of their needs and comfort.
The world is rushing toward artificial intelligence with the kind of excitement usually reserved for miracle cures. Every day, new headlines promise that AI will revolutionize healthcare, eliminate diagnostic errors, and personalize every aspect of medicine. The problem is that in this rush, people are beginning to believe AI can solve everything.
That belief isn’t new. History shows that when a technology arrives with promise, we tend to “throw it” at every problem. Only after experiencing some bad outcomes and damaging our credibility do we learn to use it properly — within the boundaries that make it effective and safe. With AI, and particularly with large language models like ChatGPT, we are still in that early, exuberant phase.
Many users don’t understand how these systems actually work. They imagine intelligence and judgment where there is only language prediction. AI doesn’t “know” or “understand” in the human sense — it identifies patterns in data and produces words that are statistically likely to follow. That’s an extraordinary technical feat, but it’s not clinical reasoning. When people use ChatGPT as if it were a physician, they are mistaking fluent output for medical insight.
The Problem of Context
Medicine is built on context. Every patient brings a unique combination of age, genetics, geography, lifestyle, medications, and medical history that shapes their symptoms and risks. A clinician learns, through training and experience, what information matters and what doesn’t.
When someone without medical training asks an AI, “Why are my legs swollen?” they may not realize how much context is missing. The answer for a healthy 20-year-old is very different from that for an elderly patient with heart failure or kidney disease. The AI doesn’t know which person is asking, and the user doesn’t know which details matter. That absence of context turns a plausible answer into a potentially dangerous one.
The risk isn’t just that the AI might be “wrong” — it’s that the user has no way to recognize when it is.
The Illusion of Authority
Of all the risks, the illusion of authority may be the most dangerous. ChatGPT delivers its responses in confident, grammatically perfect prose. To the untrained reader, that confidence feels like competence. But beneath the polished sentences are probabilities, not certainties.
Ask the same medical question twice, and you may get two different answers. The system isn’t lying — it’s simply generating different statistically “reasonable” responses each time. Add to that the fact that ChatGPT is intentionally designed to sound polite, reassuring, and helpful. It’s built to please the user, not to challenge them. That means it rarely expresses uncertainty, even when uncertainty is the most responsible answer.
When an AI tool both flatters and informs, users walk away not just with information, but with misplaced confidence. That’s when well-intentioned curiosity becomes risk.
The Hidden Bias Beneath the Surface
Another problem lies deeper, in the training data itself. Each AI model is only as good as the information it was trained on — and those datasets differ widely among companies. No one outside the design teams truly knows what’s in them.
That makes AI a “black box” of bias. It may draw from limited or outdated data, or from sources that exclude certain populations, conditions, or treatments. Healthcare data, in particular, changes constantly — new studies, new drugs, new evidence emerge every day. No AI model can fully keep pace.
Most users, of course, assume the opposite: that an AI answer represents the latest, most complete, and most objective information available. But the reality is that every answer reflects invisible limitations. People are, in effect, flying blind — trusting a system they cannot audit.
Collaboration, Not Replacement
None of this means AI has no place in medicine. Quite the opposite — it can be a remarkable partner when used appropriately.
AI can help doctors and patients organize information, remember treatment plans, explain terms, or even generate question lists for upcoming visits. It can process years of wearable-device data or highlight trends that a human eye might miss. Used under medical supervision, AI can reduce errors of omission and extend a clinician’s cognitive reach.
The key is collaboration. AI should enhance, not replace, the human connection. The value of a physician is not only in the facts they recall but in the judgment, empathy, and accountability they bring to every decision. No algorithm, however advanced, can yet replicate that.
Guardrails and the Path Forward
There will be a future where AI safely helps patients more directly — but only after we build the systems to govern it. Unlike earlier technologies, AI is not linear; it is emergent. Its behavior changes as it learns and interacts, which makes traditional models of oversight and validation insufficient.
We need a new framework — one that blends clinical governance, data transparency, and continuous monitoring. Think of it as the equivalent of medical device regulation or pharmaceutical testing, but for algorithms. Until those structures exist, healthcare AI will remain ahead of our ability to manage it safely.
AI’s promise in medicine is enormous. But for now, caution is wisdom. The technology can inform, but it cannot yet care. And in healthcare, care — human, contextual, and accountable — will always matter most.