In an earlier post, I described an ideal of the tribe of science: that the focus of scientific discourse should be squarely on the content — the hypotheses scientists are working with, the empirical data they have amassed, the experimental strategies they have developed for getting more information about our world — rather than on the particular details of the people involved in this discourse. This ideal is what sociologist of science Robert K. Merton* described as the “norm of universalism”.

Ideals, being ideals, can be hard to live up to. Anonymous peer review of scientific journal articles notwithstanding, there are conversations in the tribe of science where it seems to matter a lot who is talking, not just what she’s saying about the science. Some scientists were trained by pioneers in their fields, or hired to work in prestigious and well-funded university departments. Some have published surprising results that have set in motion major changes in the scientific understanding of a particular phenomenon, or have won Nobel Prizes.

The rest can feel like anonymous members of a sea of scientists, doing the day-to-day labor of advancing our knowledge without the benefit of any star power within the community. Indeed, many scientists probably prefer the task of making the knowledge itself, having no special need for their names to be widely known within their fields or piled with accolades.

But there’s a peculiar consequence of the idea that scientists are all in the knowledge-building trenches together, focused on the common task rather than on self-aggrandizement. When scientists are happily ensconced in the tribe of science, very few of them take themselves to be stars. But when the larger society, made up mostly of non-scientists, encounters a scientist — any scientist — that larger society might take him to be a star.

Merton touched on this issue when he described another norm of the tribe of science, disinterestedness. One way to think about the norm of disinterestedness is that scientists aren’t doing science primarily to get the big bucks, or fame, or attractive dates. Merton’s description of this community value is a bit more subtle. He notes that disinterestedness is different from altruism, and that scientists needn’t be saints.

The best way to understand disinterestedness might be to think of how a scientist working within her tribe is different from an expert out in the world dealing with laypeople. The expert, knowing more than the layperson, could exploit the layperson’s ignorance or his tendency to trust the judgment of the expert. The expert, in other words, could put one over on the layperson for her own benefit. This is how snake oil gets sold.

The scientist working within the tribe of science can expect no such advantage, so trying to put one over on other scientists is a strategy that shouldn’t get you far. Because you are accountable to the other scientists in the tribe, the knowledge claims you advance are going to be useful primarily in terms of what they add to the shared body of scientific knowledge; there is no value to be gained by using those claims to play your scientific peers for chumps.

Merton described situations in which the bona fides of the tribe of science were used in the service of non-scientific ends:

Science realizes its claims. However, its authority can be and is appropriated for interested purposes, precisely because the laity is often in no position to distinguish spurious from genuine claims to such authority. The presumably scientific pronouncements of totalitarian spokesmen on race or economy or history are for the uninstructed laity of the same order as newspaper reports of an expanding universe or wave mechanics. In both instances, they cannot be checked by the man-in-the-street and in both instances, they may run counter to common sense. If anything, the myths will seem more plausible and are certainly more comprehensible to the general public than accredited scientific theories, since they are closer to common-sense experience and to cultural bias. Partly as a result of scientific achievements, therefore, the population at large becomes susceptible to new mysticisms expressed in apparently scientific terms. The borrowed prestige of science bestows prestige on the unscientific doctrine. (p. 277)

(Bold emphasis added)

The success of science — the concentrated expertise of the tribe — means that those outside of it may take “scientific” claims at face value. Unable to make an independent evaluation of their credibility, lay people can easily fall prey to a wolf in scientist’s clothing: a huckster assumed to be committed first and foremost to the facts (as scientists try to be) who is actually distorting them to serve his own ends.

This presents a serious challenge for non-scientists — and for scientists, too.

If the non-scientist can’t determine whether a purportedly scientific claim is a good one — whether, for example, it is supported by the empirical evidence — the non-scientist has to choose between accepting that claim on the authority of someone who claims to be a scientist (which in itself raises another evaluative problem for the non-scientist — what kind of credentials do you need to see from the guy wearing the lab coat to believe that he’s a proper scientist?), or setting aside all putative scientific claims and remaining agnostic about them. You trust that the “Science” label on a claim tells you something about its quality, or you recognize that it conveys even less useful information to you than a label that says, “Now with Jojoba!”

If late-night infomercials and commercial websites are any indication, there are no strong labeling laws covering what can be sold as “Science”, at least in a pitch aimed at the public at large.** This leaves open the possibility that the claims the guy in the white lab coat says are backed by Science would not be recognized by other scientists as backed by science.

The problem this presents for scientists is two-fold.

On the one hand, scientists are trying to get along in a larger society where some of what they discover in their day jobs (building knowledge) could end up being relevant to how that larger society makes decisions. If we want our governments to set sensible policy as far as tackling disease outbreaks, or building infrastructure that won’t crumble in floods, or ensuring that natural resources are used sustainably, it would be good for that policy to be informed by the best relevant knowledge we have on the subject. Policy makers, in other words, want to be able to rely on science — something that scientists want, too (since usually they work as hard as they do precisely so the knowledge can be put to good use). But that can be hard to do if some members of the tribe of science go rogue, trading on their scientific credibility to sell something as science that is not.

Even if policy makers have some reasonable way to identify the people slapping the Science label on claims that aren’t scientific, there will be problems in a democratic society where the public at large can’t reliably tell scientists from purveyors of snake oil.

In such situations, the public at large may worry that anyone with scientific credentials could be playing them for suckers. Scientists they don’t already know by reputation may be presumed to be looking out for their own interests rather than advancing scientific knowledge.

A public distrustful of scientists’ good intentions or trustworthiness in interactions with non-scientists will convey that distrust to the people making policy for them.

This means that scientists have a strong interest in identifying the members of the tribe of science who go rogue and try to abuse the public’s trust. People presenting themselves as scientists while selling unscientific claims are diluting the brand of Science. They undermine the reputation science has for building reliable knowledge. They undercut the claim other scientists make that, in their capacity as scientists, they hold themselves accountable to the way the world really is — to the facts, no matter how inconvenient they may be.

Indeed, if the tribe of science can’t make the case that it is serious about the task of building reliable knowledge about the world and using that knowledge to achieve good things for the public, the larger public may decide that putting up public monies to support scientific research is a bad idea. This, in turn, could lead to a world where most of the scientific knowledge is built with private money, by private industry — in which case, we might have to get most of our scientific knowledge from companies that actually are trying to sell us something.

*Robert K. Merton, “The Normative Structure of Science,” in The Sociology of Science: Theoretical and Empirical Investigations. University of Chicago Press (1973), 267-278.

**There are, however, rules that require the sellers of certain kinds of products to state clearly when they are making claims that have not been evaluated by the Food and Drug Administration.