Doing Good Science


Building knowledge, training new scientists, sharing a world.

Credibility, bias, and the perils of having too much fun.





If you’re a regular reader of this blog (or, you know, attentive at all to the world around you), you will have noticed that scientific knowledge is built by human beings, creatures that, even on the job, resemble other humans more closely than they do Mr. Spock or his Vulcan conspecifics. When an experiment yields really informative results, most human scientists don’t coolly raise an eyebrow and murmur “Fascinating.” Instead, you’re likely to see a reaction somewhere on the continuum between big smiles, shouts of delight, and a full-on end zone happy-dance. You can observe human scientists displaying similar emotional responses in other kinds of scientific situations, too — say, for example, when they find the fatal flaw in a competitor’s conclusion or experimental strategy.

Many scientists enjoy doing science. (If this weren’t so, the rest of us would have to feel pretty bad for making them do such thankless work to build knowledge that we’re not willing or able to build ourselves but from which we benefit nonetheless.) At least some scientists are enjoying more than just the careful work of forming hypotheses, making observations, comparing outcomes and predictions, and contributing to a more reliable account of the world and its workings. Sometimes the enjoyment comes from playing a particular kind of role in the scientific conversation.

Some scientists delight in the role of advancer or supporter of the new piece of knowledge that will change how we understand our world in some fundamental way. Other scientists delight in the role of curmudgeon, shooting down overly bold claims. Some scientists relish being contrarians. Others find comfort in being upholders of consensus.

In light of this, we should probably consider whether having one of these human predilections, like enjoying being a contrarian (or a consensus-supporter, for that matter), is a potential source of bias against which scientists should guard.

The basic problem is nothing new: what we observe, and how we interpret what we observe, can be influenced by what we expect to see — and, sometimes, by what we want to see. Obviously, scientists don’t always see what they want to see, else people’s grad school lab experiences would be deliriously happy rather than soul-crushingly frustrating. But sometimes what there is to see is ambiguous, and the person making the observation has to make a call. And frequently, with a finite set of data, there are multiple conclusions — not all of them compatible with each other — that can be drawn.

These are moments when our expectations and our ‘druthers might creep in as the tie-breaker.

At the scale of the larger community of science and the body of knowledge it produces, this may not be such a big deal. (As we’ve noted before, objectivity requires teamwork.) Given a sufficiently diverse scientific community, there will be loads of other scientists who are likely to have different expectations and ’druthers. The thought is that, when other scientists try to take someone else’s result and use it to build more knowledge, something like replication of the earlier result happens, and biases that may have colored the earlier result will be identified and corrected. (Especially since scientists are in competition for scarce goods like jobs, grants, and Nobel Prizes, you might start with the assumption that there’s no reason not to identify problems with the existing knowledge base. Of course, actual conditions on the ground for scientists can make things more complicated.)

But even given the rigorous assessment she can expect from the larger scientific community, each scientist would also like, individually, to be as unbiased as possible. One of the advantages of engaging with lots of other scientists, with different biases than your own, is that you get better at noticing your own biases and keeping them on a shorter leash — putting you in a better place to make objective knowledge.

So, what if you discover that you take a lot of pleasure in being a naysayer or contrarian? Is coming to such self-awareness the kind of thing that should make you extra careful in coming to contrarian conclusions about the data? If you actually come to the awareness that you dig being a contrarian, does it put you in a better position to take corrective action than you would be in if you enjoyed being a contrarian but didn’t realize that being contrarian was what was bringing you the enjoyment?

(That’s right, a philosopher of science just made something like an argument that scientists might benefit — as scientists, not just as human beings — from self-reflection. Go figure.)

What kind of corrective action do I have in mind for scientists who discover that they may have a tilt, whether towards contrarianism or consensus-supporting? I’m thinking of a kind of scientific buddy-system, for example, matching scientists with contrarian leanings to scientists who are made happier by consensus-supporting. Such a pairing would be useful to each scientist in the pair for vetting their evidence and conclusions: Here’s the scientist you have to convince! Here’s the colleague whose objections you need to understand and engage with before this goes any further!

After all, one of the things serious scientists are after is a good grip on how things actually are. An explanation that a scientist with different default assumptions than yours can’t easily dismiss is an explanation worth taking seriously. If, on the other hand, your “buddy” can dismiss your explanation, it would be good to know why so you can address its weaknesses (or even, if it is warranted, change your conclusions).

Such a buddy-system would probably only be workable with scientists who are serious about intellectual honesty and getting knowledge that is as objective as possible. Among other things, this means you wouldn’t want to be paired with a scientist for whom having an open mind would be at odds with the conditions of his employment.

_____
An ancestor version of this post was published on my other blog.

Janet D. Stemwedel About the Author: Janet D. Stemwedel is an Associate Professor of Philosophy at San José State University. Her explorations of ethics, scientific knowledge-building, and how they are intertwined are informed by her misspent scientific youth as a physical chemist. Follow on Twitter @docfreeride.

The views expressed are those of the author and are not necessarily those of Scientific American.






Comments (7)

  1. rkipling 5:01 pm 08/30/2013

    Interesting.

  2. Vincentrj 7:56 pm 08/30/2013

    This article by Janet Stemwedel resonates with my own thoughts on this subject, and I think many of the points she makes are particularly relevant to the controversy about the causes and the significance of the current global warming period.

    The fact seems incontrovertible that to be human is to be biased, and that applies to scientists too. However, the great strength of the scientific method is that the predictions of any theory on any subject can be tested again and again by different people with different biases, and in different circumstances over a period of time.

    If such a theory is in fact false, or at least inaccurate, eventually we should discover that to be the case, provided the subject lends itself to this process of falsification and verification. In other words, provided the subject being examined is not excessively complex with millions of interacting variables bordering on a state of chaos and producing a delayed effect which may not be apparent within a human lifespan.

  3. rkipling 12:24 am 08/31/2013

    Vincentrj,

    You know, I had that same thought when I read the previous DGS post. When I read your comment here, another bias effect occurred to me. The good doctor talks about objectivity. We each believe that we are, of course, objective to a fault. By agreeing with Dr. S, we validate our own objectivity.

    Oh, and there is another similarity in our comments. Since both of us brought up AGW, we may be seeking to engage Dr. Stemwedel, an objectivist colleague (not necessarily the Rand brand), in that discussion. She would obviously agree with our own position. There is probably a name for this psychological manifestation.

    You lost me on that last paragraph. I have no idea what you are saying or how that ties into this topic. I’m not being critical. Just saying you lost me.

  4. Janet D. Stemwedel in reply to Janet D. Stemwedel 12:32 am 08/31/2013

    For what it’s worth, when I hit “publish” on these posts, I am fully expecting commenters to tell me where I’m mistaken.

    So, I appreciate the agreement, but I don’t count on it, and I welcome the help people here give me identifying my own biases and bad assumptions.

  5. rkipling 1:11 am 08/31/2013

    I’m absolutely confident of that. Your blog is one of the few I have bookmarked. I don’t mean to be overly complimentary. After reading many blog posts influenced by political leanings, it’s encouraging to find an advocate for objectivity.

    What I thought was interesting is that Vincentrj and I reacted similarly to your writing. Seeing someone who is sometimes at odds with me write nearly the same thing prompted me to wonder if my objectivity is merely aspirational.

    I think I understand where you are coming from. Convincing people I actually do want their opinion has saved me from many a mistake.

  6. Vincentrj 8:46 am 08/31/2013

    “You lost me on that last paragraph. I have no idea what you are saying or how that ties into this topic. I’m not being critical. Just saying you lost me.”

    rkipling,
    If it’s my last paragraph you are referring to, rather than Janet’s, I’ll attempt to clarify.

    The process of scientific verification and falsification, as I understand it, requires the existence of a practical procedure and a realistic model of the particular circumstances which are being examined.

    For example, one could demonstrate that CO2 is a greenhouse gas by creating a number of model greenhouses from materials which are equally transparent to all frequencies of heat radiation. One could then inject different amounts of CO2 into each model greenhouse, expose all the greenhouses to the same degree of radiation from the sun, and measure the differences in temperature inside each of the greenhouses.

    As I understand it, the greenhouses with the higher levels of CO2, all else being equal, will become hotter than those with lower levels of CO2, as a result of a particular characteristic of CO2: it is less transparent to the lower frequencies of infrared heat being radiated in the opposite direction, from the greenhouse floor to the outside.

    However, when testing the effects of increased CO2 levels in such a manner, everything else should be the same, including the amount of water vapour in the greenhouses, because water vapour is also a greenhouse gas.

    The amount of radiation from the sun, that each greenhouse receives, also has to be the same. If a passing cloud blocks the radiation from reaching just one of the greenhouses for a short period of time, then that greenhouse ideally should be excluded from the experiment.

    Now, as I understand it, the climate system of our planet is so complex that it is not possible to duplicate all the influences and variables inside a model greenhouse. Just one example of the sort of thing that happens in the real world is sometimes referred to as negative feedback. An increase in CO2 levels may cause a slight warming, which in turn causes a slight increase in evaporation of water. Since water vapour is also a greenhouse gas, one might think that such initial warming would lead to a runaway effect with accelerated warming, but our planet has a way of restoring the balance. Increased water vapour leads to increased cloud formation, and such increased cloud cover blocks some of the heat from the sun reaching the ground, thus counteracting the warming.

    In summary, if a realistic model cannot be created for verification and falsification purposes, then the stage is set for scientific opinions that more strongly reflect individual or corporate biases. Those scientists who are funded by oil and coal companies will search for climate studies and evidence that cast doubt on the significance of the role of CO2 in the current warming.

    Conversely, those scientists who work in government-funded climate research centres understand quite well that the very existence of such centres, and the continuation of funding for such centres, is largely dependent upon a public perception that rising CO2 levels are a serious threat to humanity’s well-being. It therefore seems perfectly reasonable to me that such scientists will tend to be biased in favour of studies which highlight such a threat being directly attributable to CO2.

    However, I do not mean to imply that there is a conspiracy or a fraud taking place (although there might be on occasion, as there is in all types of organisations), but rather that there is a natural bias taking place which is likely to continue until some definitive experiments can prove the case one way or the other.

  7. rkipling 8:56 am 08/31/2013

    Oh okay.

