Where AI in Medicine Falls Short

It can help with diagnosis but not yet with helping physicians and patients decide what to do with the information

This article was published in Scientific American’s former blog network and reflects the views of the author, not necessarily those of Scientific American


I met Peter and his family, as I so often do with patients, at what I can only assume was one of the worst points in their lives. As an infant, Peter (whose details I have changed enough to obscure his identity) had always been a bit delayed. He spoke in understandable words a bit later than expected, walked a bit later, and was always a bit of a clumsy child. His parents never really worried. After all, he was their fourth, and the subtle delays were likely just a result of getting less attention from them as they divided their time up among the children. And he was such a sweet little boy, the clumsiness always just seemed to add to his winsome character.

But by his fourth birthday it was hard to ignore that something was off, and by that point his pediatrician had grown concerned. In fact, things seemed to be getting worse. Diagnostic tests remained stubbornly inconclusive. The best that anyone could guess was that he had some unidentified form of a metabolic or neurodegenerative disorder.

Over the course of the next two years Peter was frequently hospitalized, often in intensive care. He developed progressive difficulty swallowing, leading to repeated choking, aspiration, and respiratory infections. A feeding tube was placed, but the frequency of hospitalizations continued unabated. With Peter’s needs increasing, the needs of both parents and siblings increasingly fell by the wayside.


Which is when I met them in my capacity as a pediatric palliative care physician. By this point his respiratory status had become so tenuous that the primary team caring for him was considering placing a tracheostomy with the hope that it would reduce the frequency of hospitalizations. Of course, no medical intervention is risk-free, and given Peter’s unclear but likely poor long-term prognosis, the team wanted my help in exploring goals of care with his family.

I explored with Peter’s parents what sort of a boy he was: what was a good day, what was a bad day? What was life like for the family, and how were they coping? What were they hoping for? What were they most afraid of? It quickly became apparent that one of the things that most bothered them was the lack of clarity around diagnosis. They just couldn’t shake the feeling that perhaps someone would finally identify the underlying problem, and that there would then be a possibility of a solution. Faced with his suffering but clinging to this hope, they had a very difficult time evaluating which interventions might make sense and which might feel like too much.

With this in mind, the team sent off Peter’s DNA for a full exome analysis, at that point a relatively new technology. There were no major revelations, just two subtle abnormalities not associated with any known syndromes and not pointing toward any actual treatments.

But just having this information in hand helped Peter’s parents think more clearly about the future. As sad as they were, they expressed some relief at being able to set aside their hopes for a diagnosis and to explore more the “what ifs” that might unfold. Peter’s parents were able to clarify that if there was a reasonable chance that a tracheostomy would help reduce infections and hospitalizations, then that would be meaningful for them.

It was Peter’s story that came to mind when I read a recent article in Nature Medicine about new inroads in deploying artificial intelligence (AI) in pediatrics. In the article, researchers report their success in using AI to mine electronic health records as a diagnostic tool. As the authors point out, this is not just a matter of making clinicians’ lives easier: missed diagnoses and misdiagnoses occur with disturbing frequency, leading to increased morbidity and mortality and higher costs.

And, as they further point out, AI may prove to be especially useful when it comes to uncommon conditions. Peter’s story is a case in point, and one wonders how things might have unfolded had his family had more information earlier. Because of stories like Peter’s, I share the excitement that this report has generated. Any tool that results in faster and more accurate diagnosis is welcome news.

However, I must sound a note of caution. While AI may be helpful in diagnosis, unless a day comes when machines can fully replicate human thought and emotions, we should be wary of allowing AI to move beyond diagnosis and actually make medical management decisions for us. And this is not just speculation: the idea of AI engines taking over medical decision-making has been discussed in the scientific literature for decades, and it has now entered a phase where researchers are actually testing models.

Some surely will argue that we should embrace such technology. Just as AI allows for the dispassionate crunching of countless data points to arrive at a precise diagnosis, it is not hard to imagine that the next step would be the same engine producing a carefully calculated evidence-based recommendation for treatment. Rather than rely on a fallible clinician to consider an intervention, an AI engine could swiftly calculate risks, evaluate evidence and spit out an order.

Viewing this from the perspective of pediatric palliative care, though incorporating AI seems very tempting, I worry that radical integration could prove shortsighted. The key to navigating complex decisions with families is careful examination of hopes, fears, values and goals. When I sit quietly with families like Peter’s, almost always more is conveyed in silence, glances and body language than in words.

Often there are tears; always emotions. The impact of a human touch—literally a hand pressed comfortingly on an arm—cannot be replicated and should not be underestimated. These are not cold calculations based on data, but rather real, nuanced decisions arrived at through careful and ongoing exploration of values.

At a time when the number of new interventions, treatments and technologies in medicine seems to be growing at an exponential rate; at a time when our ability to do things seems to have outstripped our understanding of their implications; at a time when we struggle to keep the personhood of patients and families at the center of medical care, to remove human beings from that process seems backwards and even dangerous. Taking emotion out of diagnostics as a way of reducing error may make great sense; taking emotion and humanity out of management and decision-making seems dangerous and ill-advised.

Should AI be used to augment medical decision-making? Absolutely, and we’re just beginning to explore the ways in which that might unfold. But replacing human beings is a separate issue altogether.

Peter’s story illustrates the importance of nuanced medical decision-making, how critical it is to take into account things that are difficult to quantify, such as hopes, fears and values. Ultimately, though he was able to get home after the tracheostomy placement, Peter’s condition continued to worsen. During a particularly prolonged admission, Peter’s parents decided that removing life-prolonging therapies was in fact most in line with their goals, and he subsequently died. A sad outcome, but a meaningful one for Peter and his family nonetheless, and one that could only be achieved with the essential, albeit at times messy and time-consuming, human touch that no AI engine could ever replace.