With stunning rapidity, the novel coronavirus upended everyday life in every corner of the world. As the number of cases skyrocketed, hospitals were overwhelmed. Now, vast swaths of the economy remain hobbled, and the night air is eerily quiet. Few predicted either the magnitude or the extent of the disruption, and the response was almost uniformly insufficient.

Yet we can see in hindsight that indicators of vulnerability and other clues were everywhere. This phenomenon has a familiar feel. Warning signs for a succession of crises, ranging from the catastrophic events of 9/11 to the financial crisis of 2007–2009, stand out in stark relief when you look back at how the events unfolded.

Is it inevitable that disasters will continue to wreak global-scale havoc out of the blue? Perhaps not. Failures to predict catastrophes are invariably more dramatic and memorable than successes. But there are good examples of predictions that were heeded in time to largely solve a serious problem. The Y2K software issue, for example—which threatened to scramble electronic systems around the world as two-digit computer clocks ticked over from “99” to “00” and interpreted the date as 1900, not 2000—was a nonevent because it spurred a concerted, coordinated effort to avert the problem. The hole in the ozone layer is the smallest it’s been since 1988, the year before the Montreal Protocol’s ban on ozone-destroying chlorofluorocarbons took effect. By definition, a successful intervention mitigates a disaster, which then gives the false impression there was never any disaster to prepare for. With mitigation, we never see the so-called counterfactual case.

Successful interventions like these rely, however, on having accurate pictures of future states of the world—the world with, and the world without, the intervention. The complexity of the real world makes creating such accurate pictures difficult. Nonetheless, we do have reliable methods for generating good predictions that can inform disaster preparedness as well as support real-time decision-making once a crisis has struck.

Accurate predictions tend to draw on several concepts. Of foremost importance, predictions must come in gradients. Binary yes/no prognostications devoid of nuance are rarely useful. Every prediction involves a range of probabilities, and knowing that range, provided it is reliable, is an extremely useful piece of information. Understanding, for example, the probability of a major pandemic in any given year allows public health and government officials to devote appropriate resources to planning for one.
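To see why even a single annual probability is actionable, consider a back-of-the-envelope calculation. The 2 percent annual figure below is a hypothetical assumption for illustration, not a number from this article:

```python
# If a major pandemic has probability p in any given year (assumed,
# illustrative value), the chance of at least one occurring over an
# n-year planning horizon is 1 - (1 - p)**n.
p_annual = 0.02   # hypothetical 2% annual probability
horizon = 25      # planning horizon in years

p_at_least_one = 1 - (1 - p_annual) ** horizon
print(f"P(at least one pandemic in {horizon} years) = {p_at_least_one:.2f}")
# → P(at least one pandemic in 25 years) = 0.40
```

A seemingly small annual probability compounds into a substantial risk over a planning horizon, which is exactly the kind of figure an official allocating preparedness budgets needs.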

When a number of different people make predictions about a possible event, and those predictions are aggregated, what emerges is a probability distribution function—a graph that shows the various event outcomes along with the probability the predictors assigned to each. One such continuously updated graph, which aggregates hundreds of individual predictions, shows the odds of a variety of possible totals for the number of worldwide COVID-19 infections. The median prediction is for 622 million cases, with a 25 percent chance of fewer than 210 million cases—and, grimly, a 25 percent chance that more than 1.6 billion cases will develop. Governments, health care systems, individuals and families all need information of this type in order to plan and respond effectively.
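The quartile summary described above can be read straight off an aggregated set of forecasts. Here is a minimal sketch in Python; the forecasts are simulated with assumed parameters, not actual Metaculus data:

```python
import numpy as np

# Simulate hypothetical individual forecasts of total worldwide COVID-19
# cases, in millions. The lognormal shape and its parameters are assumed
# purely for illustration.
rng = np.random.default_rng(seed=0)
forecasts = rng.lognormal(mean=np.log(600), sigma=0.9, size=500)

# Aggregating the forecasts yields an empirical distribution; its
# quartiles summarize it the way the article's figures do.
q25, median, q75 = np.quantile(forecasts, [0.25, 0.50, 0.75])
print(f"25th percentile: {q25:.0f} million cases")
print(f"median:          {median:.0f} million cases")
print(f"75th percentile: {q75:.0f} million cases")
```

The spread between the 25th and 75th percentiles is itself information: a wide interquartile range tells a planner that hedging across scenarios matters more than optimizing for the single most likely outcome.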

These predictions should not be left to pundits, analysts and the credentialed. Instead, when events have uncertain outcomes arising from complex causes, research has shown that aggregating a range of perspectives produces the most statistically accurate forecasts. Moreover, in their book Superforecasting: The Art and Science of Prediction, Philip Tetlock and Dan Gardner argue convincingly that, like many mental capabilities, prediction is a talent that persists over time and a skill that can be developed. Given steady quantitative feedback and assessment, anyone with sufficient dedication and interest can improve their skill and accuracy and build a quantified track record. Thus, probabilities of future events can be reliably estimated by optimally aggregating predictions, while counting more heavily those predictors with domain expertise and strong prediction track records.
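One simple way to count strong predictors more heavily is a weighted average of their stated probabilities. This is a minimal sketch of the idea; the weights, probabilities, and the linear-pooling scheme itself are assumptions for illustration, and real platforms use more sophisticated scoring:

```python
import numpy as np

# Each predictor's stated probability that some event occurs
# (hypothetical values).
probabilities = np.array([0.20, 0.35, 0.50, 0.65])

# A hypothetical track-record score per predictor; higher means a
# better history of accurate forecasts.
track_record = np.array([0.5, 1.0, 2.0, 1.5])

# Normalize the scores into weights and pool the forecasts linearly.
weights = track_record / track_record.sum()
community_estimate = float(np.dot(weights, probabilities))
print(f"weighted community probability: {community_estimate:.3f}")
# → weighted community probability: 0.485
```

Note how the well-calibrated third and fourth predictors pull the pooled estimate above the unweighted mean of 0.425; that is the entire point of weighting by track record.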

A number of groups are developing tools and crowdsourcing predictions in order to get the best possible understanding of how the COVID-19 epidemic might unfold. Researchers at Johns Hopkins University have been running a collective disease forecasting effort for several years, which has now turned its focus to the novel coronavirus. A group of faculty and students at Carnegie Mellon University called Delphi develops technological capabilities for epidemiological forecasting and hopes to make this practice as common and reliable as weather forecasting is today. Tetlock and a number of collaborators have created Good Judgment Open, which crowdsources predictions on a variety of events of geopolitical importance and is hosting a challenge associated with the ongoing pandemic.

And, mirroring the exponential increase in the number of cases, Metaculus, a four-year-old prediction platform co-founded by two of us (Anthony Aguirre and Gregory Laughlin) and for which the third (Gaia Dempsey) is a consultant, has rapidly assembled a trustworthy network of thousands of predictors. Since late January it has hosted more than 18,000 new predictions on questions spanning all aspects of COVID-19: the number of confirmed cases, the efficacy of various containment and treatment regimes, and the social and economic impact around the world. The site has launched a dedicated COVID-19 prediction dashboard that aggregates the most important questions and their associated predictions to assist time-critical decision-making at every level, from the individual to the public policy maker. The track record for the platform is remarkably good and, importantly, is transparently available.

Anyone can use and join these platforms, and there is an especially great need for those in the public health sector and those who have data science expertise to weigh in. We all benefit from accurate information in the midst of this crisis, and we know that sharing our thought processes and sharing information results in higher-quality community predictions.

This is an opportunity to improve the way we coordinate information sharing while also safeguarding tomorrow for those whose well-being depends on having an accurate picture of the future. We cannot know with certainty how things will unfold, but with the best predictions, which focus on the probability of a full range of outcomes, we can face what’s next with a maximum of preparation and a maximum of resolve.

Read more about the coronavirus outbreak from Scientific American here, and read coverage from our international network of magazines here.