Doing Good Science


Building knowledge, training new scientists, sharing a world.

Scientific credibility: is it who you are, or how you do it?


Part of the appeal of science is that it's a methodical quest for a reliable picture of how our world works. Creativity and insight are crucial at various junctures in this quest, but careful work and clear reasoning do much of the heavy lifting. Among other things, this means that the grade-schooler's ambition to be a scientist someday is significantly more attainable than the ambition to be a Grammy-winning recording artist, a pro athlete, an astronaut, or the President of the United States.

Scientific methodology, rather than being a closely guarded trade secret, is a freely available resource.

Because of this, there is a sense that it doesn't matter too much who is using that scientific methodology. Rather, what matters is what scientists discover by way of the methodology.

This view is at the heart of the norm of universalism, one of the four norms of science described by sociologist Robert K. Merton in 1942*. As a sociologist, Merton studied science as a social group made up of its practitioners (what I call "the tribe of science"). The norms he described were the shared values of this social group, for which he found evidence in their practices -- the things that scientists in the tribe recognized that scientists ought to do, even if the actual behavior of particular scientists sometimes falls short of these oughts.

Universalism is the idea that the important issue for scientists is the content of claims about the world (or about the phenomena being studied), not the particulars about the people making those claims.

In other words, the tribe of science is committed to investigating knowledge claims made by graduate students as well as those made by Nobel Prize winners, those made by scientists at small colleges as well as those made at famous universities with huge endowments and buckets of grant money, those made by scientists in other countries as well as those made by scientists in one's own country.

Since the shared goal is building a reliable body of knowledge about the world we share, all the scientists engaged in that project are to be treated as capable of contributing. Disregarding another scientist's report because of who he is, then, is a breach of the norm of universalism.

We shouldn't assume that embracing the norm of universalism means that scientists think they ought uncritically to accept as credible any claim put forward by a member of the tribe of science. Indeed, there is another scientific norm, organized skepticism, that serves as a counterbalance to universalism. Everyone in the tribe of science can advance knowledge claims, but every such claim that is advanced is scrutinized, tested, tortured to see if it really holds up. The claims that do survive the skeptical scrutiny of the tribe get to take their place in the shared body of scientific knowledge. Presumably, between universalism and organized skepticism, members of the tribe of science understand that any member of the tribe might be a legitimate source of information that counts against someone else's knowledge claim.

Norms are ideals -- the standards up to which the tribe of science would like to live. In the real world, living up to ideals can be difficult.

Scientists (and other consumers of scientific claims) do take into account the identities of the scientists putting forward scientific claims when they assess the credibility of those claims. They are sometimes influenced by the quality of a scientist's published work to date. Someone with a track record of solid work may inspire more confidence than someone with a track record of shoddy work (especially if it is work that has required corrections or retractions). They may also be influenced by the scientist's professional pedigree: was she trained by a PI known for effectively mentoring trainees, or at a university recognized as doing "the best" work in a particular discipline, or by a PI known to take on more advisees than he could possibly mentor, or at a university without adequate resources to support cutting edge science?

Indeed, there are contexts in which these details are explicitly evaluated by scientists -- for example, when people are applying for grant money with which to launch a research project. Here, your educational pedigree, your access to personnel and facilities with reputations for excellence, and your track record of publications are taken as important in predicting the likelihood that you will be able to succeed in carrying out a proposed piece of research and thereby generating credible scientific claims.

None of this is to say that scientists think it's OK to assume that a Nobel Prize winner, or a Harvard professor, or a scientist with a decade or two of solid results under her belt could not be wrong, nor that her scientific claims should be exempt from careful scrutiny by other scientists. But scientists -- like the rest of us -- seem to find it practical to make judgments of credibility that take into account various clues about how reliable someone has been in the past (and about how much scrutiny from other sources he is likely to have encountered and survived). The alternative -- full-scale firsthand scrutiny of every knowledge claim made by anyone in the tribe of science before treating it as provisionally credible -- would leave little time for anything else, including generating scientific data and knowledge claims of one's own.

Still, this pragmatic approach to credibility is not without costs.

It can make it harder for scientists from smaller institutions, or who trained under less powerful or famous advisors, to get attention for their results in the ongoing discourse within their scientific field. It can also mark certain portions of the scientific literature as "more important" and others as "less important" when practitioners in a scientific field try to keep up with the literature. (The anointing of what some observers describe as ScienceGlamourMags -- the ones where the really important results get published -- doesn't just push other scientific journals further down on the scientist's to-read list. It also creates fierce competition to get published in the ScienceGlamourMags, competition that is sometimes decided on bases other than the solidity of the evidence backing up the scientific claims, and that results in a significant number of rejected manuscripts that may communicate robust and important results.)

And, a focus on who makes the science, or on where the science is made, rather than on how scientific knowledge is built, may make evaluation of science really tricky when scientists have conversations with people from outside the tribe of science. I'll take up this issue in my next post.

_____

*Robert K. Merton, "The Normative Structure of Science," in The Sociology of Science: Theoretical and Empirical Investigations (Chicago: University of Chicago Press, 1973), 267-278.

The views expressed are those of the author and are not necessarily those of Scientific American.
