In 1907, Francis Galton asked participants at a county fair to estimate the weight of an ox. While most individuals guessed rather poorly, the median guess was within 1 percent of the ox’s true weight. The finding is not mysterious: some people guess high and others guess low, and as the group grows large these errors cancel out, leaving behind a better guess. Despite this simplicity, the implications are profound. Humans have the potential to make better choices when doing so collectively.

In the last few decades, experimental and theoretical work has shown that collective wisdom is by no means guaranteed. The simple model demonstrated by Galton breaks down as soon as individuals begin to interact and share information. Whether the crowd is wise or foolish depends on the type of information individuals have, and is very sensitive to the network structure on which they interact. Unlike in Galton’s experiment, guesses can spread. A few loud, wrong individuals can be amplified and come to dominate the decision-making process.

Will our social networks lead to collective wisdom? It’s a question that’s about as important as it is difficult to answer. Globally, the consequences of structures that promote poor decisions are difficult to overstate. We may fail to fix global warming, we may start a nuclear war, or we may damage our world in other ways that make it uninhabitable. At the same time, our understanding of how network structure impacts collective decision-making is largely limited to theory and small-scale experiments.

Despite our limited knowledge, the global social network is continually being altered in unpredictable ways by companies like Facebook and Twitter. There is no literature and there are no ethical guidelines for data scientists at those companies to consult in order to avoid catastrophe when implementing what might seem like a minor change. In essence, our global social network is much like our biosphere: we don’t fully understand it, it’s changing rapidly, and those changes could potentially lead to a global collapse.

In the 1980s, similar features led Michael Soulé, now a professor emeritus of environmental studies at the University of California, Santa Cruz, to label conservation biology a crisis discipline. We need to start viewing collective behavior the same way. Scientists need to work diligently to identify the types of data science decisions that could lead to collapse. Models need to be tested, evaluated and refined to develop ways of structuring our social networks that promote good decisions.

Unlike with conservation biology, it’s hard to know where to start. At the moment, privately held corporations completely control how we interact at large scales. While many worry about manipulation of these interactions for nefarious causes, it could be just as dangerous if they simply alter networks as a byproduct of optimizing ad revenue. It’s likewise completely unclear how susceptible these structures are to influence, such as the recent push by the Russian government to have Trump elected. We simply have no idea how technology is impacting our society, and it’s not clear if we will any time soon.

A major challenge to understanding these changes is that the choices made by tech companies are almost entirely unregulated, with no industry-wide ethical guidelines and zero transparency. No government or scientific body that I’m aware of has access to the network-altering decisions made in Silicon Valley. The best scientists can hope to do is guess at how these companies connect people, and use that to model human behavior. If our assumptions are wrong, our models will fail.

The secrecy at tech companies isn’t surprising, and there are very compelling financial reasons why it’s the norm. Secret algorithms give companies a competitive edge, and they live and die by them. In a way, secrecy is the currency of Silicon Valley. Opening these algorithms to the public would drastically alter the market in unpredictable ways. Even allowing scientists to peer inside could damage a company, if the findings led to regulation.

It’s hard not to draw parallels between these arguments against transparency and regulation and those put forth by the Trump administration in support of destroying climate data and extracting fossil fuels. In both cases, the profits of companies and shareholders are preventing scientists from asking questions that have the potential to save millions of lives. If we want to fix global problems, we need to be able to understand exactly what companies are doing, know how it shapes collective patterns, and issue legislation when necessary to promote effective decision-making.

Over the past century, scientists across many fields have fought for and won the ability to collect, analyze, and publish data crucial to understanding humanity’s greatest problems. They’ve stood up to many well-funded industries, including tobacco, agriculture and fossil fuels. All of that scientific progress is now threatened by an opaque and unregulated tech industry.

The road to safeguarding ourselves against collective folly is long, but it must start with transparency. Scientists, activists and politicians need to work together to open up these industries to the scientific community. Collective behavior is not only a crisis discipline; it is the key to making progress in all crisis disciplines. It may be the most important battle the scientific community fights. Even if conservation biologists, climate scientists, economists and political scientists devise a map toward a more sustainable future, we still need to be able to steer the ship.