Finding the Right Confidence Interval

"Stick to your guns." "Put your nickel down." "Stand your ground." If you're a medical student, there is an excellent chance you have heard one of these in the course of your training.

“Stick to your guns.” “Put your nickel down.” “Stand your ground.” If you’re a medical student, there is an excellent chance you have heard one of these in the course of your training.

Confidence is an entrenched element of medical culture. Say what you will about TV representations of medical training, but one thing Scrubs captures extremely well is the premium placed on confidence. When Elliot announces to her patient (who is also a physician) that she is making the calls, he is so impressed by her “cojones” that he hires her on the spot. A central theme in many episodes is a resident holding an opinion different from the attending’s; neither budges, and the two instead engage in a battle of wills, even to the point of betting on outcomes.

It’s spot-on; from Day 1, we’re both overtly and implicitly inculcated with the message that to progress as a medical trainee is to be certain in our opinions, assertive in voicing them, and tenacious in sticking by them. We are told, across many institutions, to “fake it until we make it.” We are advised to state our plans with certainty, whether we believe we are right or not. Speak with any medical student, at any university, and you’ll find many who have received feedback telling them to act more confident, even when they are wrong or unsure.


Confidence has its place, and there’s certainly a spectrum. Overconfidence, however, is a real problem. Have we tipped too far on the scale?

I’m currently taking a very interesting seminar course, led by Jerome Groopman and Pamela Hartzband, on cognitive biases in medicine. On our first day, we touched on Nobel Prize-winning psychologist Daniel Kahneman’s “thinking fast” versus “thinking slow” paradigms and how relying on the former in medicine can predispose us to cognitive biases that obscure correct diagnoses. These biases include anchoring, premature closure, and confirmation bias. Our discussion was grounded in real medical cases, and we examined how each bias shaped the way a case unfolded.

The cases were complex, and anyone could have missed the diagnoses. But what struck me throughout our conversation was how many of the biases seemed linked to a mentality that embraces confidence and rebuffs uncertainty. The doctors “stuck to their guns,” and the result was an environment prone to cognitive biases that led to errors.

There has been a lot of work on the relationship between confidence and competence. One especially compelling summary was a 2011 New York Times piece by Kahneman, adapted from his book. Remarkably, the bulk of the research shows almost no correlation between confidence and ability. “In general,” Kahneman wrote, “…you should not take assertive and confident people at their own evaluation unless you have independent reason to believe that they know what they are talking about.” A more recent piece in Fortune made the same point, noting that if we predicted competence based on confidence, we would be only 15% more accurate than if we guessed blindly.

If confidence tells us so little about competence, what’s wrong with being around a confident person?

There are, in fact, real downsides, and the chief one is that everyone else buys into the illusion. When we believe someone, we are less likely to disagree, to question opinions, or to propose different ways of thinking, even if the person is wrong.

This is demonstrated nicely by a phenomenon called group polarization. When individuals deliberate together in groups, they tend to reach decisions more extreme than each individual’s initial tendency. A prominent example is jury deliberations: after a group deliberates, its decision tends to be either harsher or more forgiving than the jurors’ initial individual beliefs.

Meanwhile, when individuals deliberate alone, the average of their beliefs tends to be less extreme. There’s a famous story of Francis Galton observing this, to his surprise, at a county fair. Visitors were asked to guess the weight of a slaughtered ox, and the average of the individual guesses came within a remarkable one pound of the true value, closer than any of the estimates made by cattle experts and closer than most of the individual guesses from the crowd.

One common hypothesis for why outcomes differ so much when groups deliberate together rather than individually is that dominant personalities hold disproportionate influence during group discussion and sway the outcome, while others, both consciously and subconsciously, try to fit in.

How much of medical problem-solving follows the same pattern?

Medicine is fundamentally a team sport, and we often mull over problems together. That setup, invaluable from a teaching and learning perspective, also leaves us vulnerable to these effects. Conversations on rounds can be reduced to the most confident person asserting opinions that others do not challenge. In medicine, especially, our teams are highly hierarchical. Those lower on the totem pole often accede to the opinions of those above them, either too intimidated to speak up or shot down when they do.

Basically, we work more like jurors than visitors at Galton’s county fair.

The impact may be particularly pronounced in medicine because it’s a field rife with uncertainty. It is well accepted that medical decision-making is both an art and a science, in which two doctors can look at the same situation, recommend different courses of action, and both be right. William Osler recognized it at the turn of the twentieth century, saying, “Medicine is a science of uncertainty and an art of probability.” In the past few decades, there has been a shift toward evidence-based medicine, emphasizing the up-to-date application of research findings to medical settings. Even so, decisions are often probabilistic, judgment calls frequent, and the best course of action unclear.

Group dynamics, hierarchy, and uncertainty? It’s a perfect storm for cognitive biases stemming from overconfidence.

And in fact, the research supports this. The lack of correlation between confidence and competence has been demonstrated in medicine, just as it has outside of it. One JAMA study presented doctors with simple and complex cases. Although diagnostic accuracy was much lower for the complex cases, the doctors’ confidence remained nearly as high as it was for the simple cases. Moreover, greater confidence was linked with fewer requests for additional diagnostic tests. And despite the difficulty of the cases, the doctors did not request more second opinions or referrals. The authors concluded that the “mismatch” between confidence and diagnostic accuracy “might prevent physicians from reexamining difficult cases where their diagnosis may be incorrect.” Prior research has shown a similar mismatch between confidence and accuracy in treatment decisions in cancer and critical care, along with a mismatch between dermatologists’ confidence and their correctness in recognizing melanoma.

So why do we work so hard to make “sticking to your guns” an ingrained component of medical culture?

Perhaps one reason is that it’s not just medicine. There’s a larger societal expectation that we should have opinions on everything and cling to them fiercely. Just look at politics. We denigrate those who change their stances as flip-floppers, rather than praising the ability to update a stance in the face of new information.

In the medical world specifically, I think some of the emphasis on confidence has to do with how we feel we should act in front of patients. There’s value in coming off as confident in front of patients so that trust can be built and an alliance developed. I’ve written before about feigning more confidence than we have while performing procedures at which we are still novices, partly so that we don’t needlessly frighten people. We want patients to feel secure.

Even so, there are many instances in which our patient interactions could benefit from less confidence. We know, for example, that doctors interrupt patients on average a mere 12 to 18 seconds after they begin speaking. Anecdotally, overly dominant personalities who dismiss alternative perspectives on rounds also tend to be the ones who enter patients’ rooms with proclamations, who interrupt more quickly, and who spend more time talking than listening. None of that makes for good care. I’ll never forget the words from one patient, whose diagnosis was delayed for years thanks in part to a physician who dismissed an unfamiliar lab abnormality: “I see it as a sign of strength to ask for help when you need it. No one should look down on that.” When it comes to medical decision-making, I think many patients appreciate our acknowledging fallibility more than super-confident doctors want to think.

Others might argue that there’s an important internal benefit to confidence, captured well by the Amy Cuddy TED talk we were shown during my orientation to the third year of medical school. Don’t fake it until you make it, the prominent psychologist advises; fake it until you become it. She speaks persuasively about using power poses and the power of belief to actually become the people we hope to be. The learning curve in medicine is huge, and we often have to act on incomplete information. Eventually, we will be fully accountable, and we need to feel comfortable making calls. One could argue that pushing confidence simply pushes trainees to do that. Here I’d agree: that’s a valuable goal.

But that’s not exactly what happens in practice. There’s a tricky balance to strike: displaying enough confidence to reassure patients (and ourselves), while embracing enough uncertainty to remain open to new ideas.

The practice of medicine could benefit from more humility. We embrace evidence to guide our decision-making in clinical problems; why not also apply what we know about human psychology? It’s time to erase the stigma around uncertainty and to stop treating excessive confidence as an end in itself. Sometimes, being less sure means being more open to seeking out alternative thinking. It means adjusting the hypothesis to fit the data, rather than the other way around. It means truly listening, rather than declaring.

Everyone has a story in which the third-year medical student clinched the diagnosis, the nurse found the critical exam finding, or simply listening to the patient’s history revealed the answer. Such things can happen, though, only in cultures where uncertainty is acknowledged and an honest exchange of ideas is encouraged.

I look forward to a time when the “hidden curriculum” of medical training evolves so that a healthy level of confidence and an embrace of uncertainty are not at odds, but are welcomed as complementary tools for figuring out what is really going on. I look forward to practicing medicine while acknowledging uncertainty, and admitting it.

Ilana Yurkiewicz, M.D., is a physician at Stanford University and a medical journalist. She is a former Scientific American Blog Network columnist and AAAS Mass Media Fellow. Her writing has also appeared in Aeon Magazine, Health Affairs, and STAT News, and has been featured in “The Best American Science and Nature Writing.”
