
COVID-19 Policy Must Take All Impacts into Account

Human health is obviously crucial, but epidemiological models should not ignore economic and ethical considerations

This article was published in Scientific American’s former blog network and reflects the views of the author, not necessarily those of Scientific American.


On March 16, 2020, the Imperial College COVID-19 Response Team in London made public a report that provided forecasts of the impact of alternative nonpharmaceutical interventions (NPIs) intended to cope with the COVID-19 pandemic in high-income countries, with a focus on the United Kingdom and the United States. The forecasts were made using a modified version of a simulation model previously developed to support pandemic influenza planning. The Response Team distinguished two broad policy alternatives, which they described as follows:

“Two fundamental strategies are possible: (a) mitigation, which focuses on slowing but not necessarily stopping epidemic spread—reducing peak healthcare demand while protecting those most at risk of severe disease from infection, and (b) suppression, which aims to reverse epidemic growth, reducing case numbers to low levels and maintaining that situation indefinitely.”

Drawing implications from their forecasts, they recommended suppression as the preferred policy option.
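To make the distinction concrete, here is a minimal sketch, assuming a textbook SIR model rather than the Imperial College simulation: mitigation slows transmission but leaves the reproduction number above one, so a large epidemic still occurs, while suppression pushes the reproduction number below one, so case numbers fall for as long as the controls hold. The parameter values below are hypothetical and chosen only to illustrate the qualitative contrast.

```python
# Illustrative only: a textbook SIR model, not the Imperial College simulation.
# "Mitigation" slows transmission but leaves R above 1; "suppression" pushes R
# below 1 so the epidemic shrinks. All parameter values are hypothetical.

def simulate_sir(r, days=365, infectious_period=7.0, i0=1e-4):
    """Discrete-time SIR; returns peak infectious fraction and cumulative infections."""
    gamma = 1.0 / infectious_period   # daily recovery rate
    beta = r * gamma                  # daily transmission rate implied by R
    s, i, recovered = 1.0 - i0, i0, 0.0
    peak = i
    for _ in range(days):
        new_infections = beta * s * i
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        recovered += new_recoveries
        peak = max(peak, i)
    return peak, recovered

for label, r in [("no intervention", 2.4), ("mitigation", 1.5), ("suppression", 0.8)]:
    peak, total = simulate_sir(r)
    print(f"{label:15s}  R={r:.1f}  peak infectious={peak:5.1%}  total infected={total:5.1%}")
```

Even this toy model shows the qualitative contrast the report draws: mitigation lowers the epidemic peak, while suppression keeps case numbers at low levels throughout the simulated period.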


Media coverage indicated that the Imperial College report immediately affected policy formation in the U.K. and the U.S., influencing both nations to shift sharply from mitigation strategies to suppression. Should this policy change have occurred? I would confidently say yes if there were reason to think that the Imperial College report provides a credible integrated assessment of the impacts of alternative policies—that is, an assessment of the full impacts on society of alternative policy options, taking economic and ethical factors, not just health, into account. Unfortunately, the report does not do that—and acknowledges this explicitly:

“We do not consider the ethical or economic implications of either strategy…. Instead we focus on feasibility, with a specific focus on what the likely healthcare system impact of the two approaches would be.”

Considering impacts on the health care system is obviously important. Nevertheless, it is difficult to understand how the Response Team can justify drawing policy conclusions based only on effects on health care. From the beginning of the pandemic, the public has sought to learn the broad impacts of policy on social welfare, which at a minimum requires joint consideration of health care and the economy.

While some believe that suppression is the best policy from both the health and economic perspectives, others argue the contrary. In the U.S., the potential tension between health and economic objectives has quickly become front-page news. A March 24 headline in the New York Times reads: “Trump Considers Reopening Economy, Over Health Experts’ Objections.” The article cites the views of a spectrum of economists.

Why didn’t the Imperial College Response Team perform an integrated assessment? The basic answer is that epidemiological modeling has, since its inception a century ago, mainly been performed by quantitative researchers with backgrounds in medicine and public health. Experts with these backgrounds have found it natural to focus on health concerns, viewing other aspects of social welfare as matters that may be important but lie beyond their purview. Thus, the team mentioned in passing that “Suppression … carries with it enormous social and economic costs which may themselves have significant impact on health and well-being in the short and longer-term.” Yet they made no attempt to quantify social and economic costs.

Moreover, there are two reasons to question the credibility of the forecasts that the report does offer. One is that the epidemiological model used by the team does not consider how a pandemic may generate behavioral responses within the population. The team acknowledges verbally that behavioral response may be an important determinant of outcomes, stating:

“the impact of many of the NPIs detailed here depends critically on how people respond to their introduction, which is highly likely to vary between countries and even communities. Last, it is highly likely that there would be significant spontaneous changes in population behaviour even in the absence of government-mandated interventions.”

This acknowledges that the dynamics of epidemics depend strongly on the decisions individuals make to protect themselves from infection or to ignore the danger. Nevertheless, the team did not model such responses. Instead, they invoked assumptions about the fractions of households that would comply with alternative policies, without providing justification. I should note that modeling and analysis of behavioral responses to epidemics have been a central concern of a separate literature on economic epidemiology, whose contributors are primarily health economists rather than researchers with backgrounds in medicine and public health.
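As a rough illustration of what modeling behavioral response could mean, here is a sketch, under my own hypothetical assumptions rather than anything taken from the report or the economic-epidemiology literature, in which the contact rate falls as prevalence rises, so that individuals reduce their exposure when perceived risk grows instead of complying at some fixed rate.

```python
# Illustrative sketch: an SIR model in which the contact rate responds to current
# prevalence, a crude stand-in for endogenous protective behavior. The functional
# form and the response strength k are hypothetical assumptions.

def simulate_with_response(k, r0=2.4, days=365, infectious_period=7.0, i0=1e-4):
    gamma = 1.0 / infectious_period
    beta0 = r0 * gamma
    s, i, recovered = 1.0 - i0, i0, 0.0
    peak = i
    for _ in range(days):
        beta = beta0 / (1.0 + k * i)   # contacts shrink as prevalence i rises
        new_infections = beta * s * i
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        recovered += new_recoveries
        peak = max(peak, i)
    return peak, recovered

for label, k in [("no response", 0.0), ("moderate response", 50.0), ("strong response", 200.0)]:
    peak, cumulative = simulate_with_response(k)
    print(f"{label:18s}  peak infectious={peak:5.1%}  infected in first year={cumulative:5.1%}")
```

Even this crude feedback changes the forecast substantially, flattening the peak and stretching the epidemic out in time, which is the kind of effect a fixed compliance fraction cannot capture.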

The second reason for questioning the credibility of the forecasts is that, even within the traditional epidemiological focus on modeling disease transmission, there is limited basis to assess the accuracy of the models that have been developed and studied. I have persistently argued for forthright communication of uncertainty in the findings of research that aims to inform public policy. I became acutely aware of the failure of epidemiological modeling to communicate uncertainty some time ago when I sought to learn the state of the art in the field, as I prepared to write an exploratory article on the problem of formulating vaccination policy against infectious diseases. 

The underlying problem is the dearth of empirical evidence available to specify realistic epidemiological models and estimate their parameters. In our modern interconnected society, the study of epidemics has largely been unable to perform the randomized trials considered the gold standard for medical research.

Modeling has necessarily relied on observational data, which are difficult to interpret. In attempting to cope with this dearth of evidence, epidemiologists have developed models that are sophisticated from mathematical and computational perspectives but that have little empirical grounding. Unfortunately, authors have typically provided little information that would enable one to assess the accuracy of the assumptions they make about individual behavior, social interactions and disease transmission. To put it bluntly, they take their models too seriously.
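One way to communicate such uncertainty is to report ranges rather than point forecasts. The sketch below, again using a toy SIR model and a hypothetical plausible range for the basic reproduction number rather than anything estimated from data, illustrates how wide the resulting spread of outcomes can be.

```python
# Illustrative only: sweep a weakly identified parameter (here the basic
# reproduction number R0) over a hypothetical plausible range and report the
# spread of forecasts instead of a single point estimate.

def peak_and_attack(r0, days=730, infectious_period=7.0, i0=1e-4):
    gamma = 1.0 / infectious_period
    beta = r0 * gamma
    s, i, recovered = 1.0 - i0, i0, 0.0
    peak = i
    for _ in range(days):
        new_infections = beta * s * i
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        recovered += new_recoveries
        peak = max(peak, i)
    return peak, recovered

results = [peak_and_attack(r0) for r0 in (1.8, 2.2, 2.6, 3.0, 3.5)]
print(f"peak infectious fraction: {min(p for p, _ in results):.0%} to {max(p for p, _ in results):.0%}")
print(f"final attack rate:        {min(a for _, a in results):.0%} to {max(a for _, a in results):.0%}")
```

Reporting the interval, rather than any single curve inside it, is one concrete form that forthright communication of uncertainty can take.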

I see lessons to be learned from research on climate policy. Climate research was at first a subject for earth scientists, who sought to forecast the impact of emissions on the atmosphere and oceans. Having backgrounds in the physical sciences, these researchers found it natural to focus on the physics of climate change rather than on behavioral responses and social impacts. Over the past 30 years, however, the study of climate policy has broadened with the development of integrated assessment models, with major contributions by economists.

As a result, we now have a reasonably sophisticated perspective on how our planet and our social systems interact with one another. However, this improvement in perspective has so far been more qualitative than quantitative. Existing integrated assessment models that are used to inform climate policy make quantitative forecasts, but the credibility of the models is still limited. Climate researchers and epidemiologists alike should work to improve the credibility of their modeling.
