Cross-Check

Critical views of science in the news

How College Labs Might Sow Seeds of Science’s Replication Crisis




Could science’s replication crisis stem in part from how students are taught to perform experiments in college? That’s my suspicion after discussing the issue with students taking Introduction to Science Communication, a course I’m teaching for the first time at Stevens Institute of Technology.

Some professors might place too much emphasis on the results of students' laboratory assignments rather than on their methods. Photo courtesy Wikimedia Commons, http://en.wikipedia.org/wiki/File:StFX_Physical_Sciences_Lab.jpg

“Replication crisis” refers to the growing recognition that the bulk of peer-reviewed scientific claims are “false,” as statistician John Ioannidis bluntly puts it. Ioannidis has drawn attention to the problem in a series of papers, beginning with his 2005 blockbuster “Why Most Published Research Findings Are False.”

The crisis, which I flicked at in a November column, continues to generate headlines. My buddy George Johnson reflected on it in “New Truths That Only One Can See,” his terrific inaugural contribution to a new Science Times column, “Raw Data.” (Congrats to George and the New York Times on the column!)

“Replication, the ability of another lab to reproduce a finding, is the gold standard of science, reassurance that you have discovered something true,” George writes. “But that is getting harder all the time.” George and I also chewed on the issue in a new Bloggingheads.tv chat.

Last week I asked my 20 science communication students, almost all science or engineering majors, to read a recent Economist cover story, “Unreliable Research: Trouble at the Lab,” which we then discussed in class. My students were not exactly shocked by the Economist exposé. Far from it. One mentioned that when doing laboratory work for science classes, he had seen classmates fudge experimental results to align them with professors’ expectations. As he spoke, others nodded. When I asked if anyone else had witnessed such behavior, all but a couple of students raised a hand.

Two students, Anthony and Amira, elaborate on these problems in papers that they posted on our course blog. Both posts are worth reading in their entirety, but here are excerpts.

In “Exploring the Statistical Defamation Sweeping Across Science Through the Eyes of an Undergrad,” Anthony writes: “[I]n several of the classes students are expected to perform experiments in, data is not necessarily as important as the report itself. I have seen students essentially create data to fit into a range given by the professor regardless of what their data actually is. Change a number here, move a decimal point there and voila, you have desirable data. The motivation for these kinds of actions varies. The most obvious is the mentality that GPA is without a doubt the most important thing in college. Students with this kind of mindset will do just about anything to make sure that the letter they receive for the class is as high as possible… [Professors] may be at as much fault as their students. By not making it a priority to correct statistical errors and by pushing aside the importance of effective data collection and analysis, they pass these ideologies on to their students, many of whom eventually put them into practice. I have seen numerous cases where students, after struggling with an experiment for whatever reason, would simply be given a set of data points from their professor so they can move on with what had been planned for the class. By doing this, the professor implies that the data is not necessarily as important as the final result.”

In “Is Irreproducibility an Issue of Academic Negligence?” Amira writes: “There are two contributors to this fiasco, the laboratory instructor and the students.  Given an experiment to conduct, usually with ideal results easily predicted, most students’ goal is to get the ‘answer’ and leave.  With the experimental results rarely (if ever) holding true to the ideal, some students take it upon themselves to ‘correct’ the mistake by changing the data, rather than risk being told to repeat the experiment; they’re more concerned with the red number on the top corner of the page than with those in the excel spreadsheet boxes.”

Amira notes that some professors, fortunately, emphasize the process of an experiment rather than its final results. “[I]nstead of being graded on numerical results (given that the procedure was understood and followed),” she writes, “there is a focus on understanding the outcome of the experiment; if reasonable, then it is understood why, and if not, then there is an analysis of real world factors that may have contributed to experimental errors.”

Yes, that’s the way science should be taught. But why isn’t that method universal? The problem, Amira suggests, is the focus on “judging every individual based on the outcomes of exams, whether they are college entrance, graduate entrance, or occupational exams.  We’ve depleted inspiration from our work and made everything a competition. The pressure caused by these fixated evaluations drive scientists to one goal, produce and publish anything and everything just to stay in the public sphere, meaning to keep their job and position… [A] cultural transformation away from this uninformative means of evaluation will be the ultimate solution.”

Of course my students represent a minute sample, but I’ll wager that their experiences are not unusual. Ironically, listening to them and reading their papers left me in an oddly upbeat mood. Not to get all sentimental, but as long as science attracts intelligent, conscientious students like Anthony and Amira, there’s hope.

About the Author: Every week, hockey-playing science writer John Horgan takes a puckish, provocative look at breaking science. A teacher at Stevens Institute of Technology, Horgan is the author of four books, including The End of Science (Addison-Wesley, 1996) and The End of War (McSweeney's, 2012). Follow on Twitter @Horganism.

The views expressed are those of the author and are not necessarily those of Scientific American.






Comments (8)

  1. Spironis 1:22 pm 01/27/2014

    Budget, Enviro-whinerism, and “rights” squeezed science until it popped. Chem lab shrank from grams to milligrams to TLC spots. Bio lab does dissections on a Mac. Pseudo-theorists’ survival is contingent upon statistically pleasing their managers. Content and its quality are unquantifiable and therefore administratively irrelevant.

    How many papers did you publish? How many citations did you get? Crank the spreadsheet, then performance bonuses are awarded. Funding drives theory drives observation in a risk-free business environment defined by PERT charts. Do not go beyond or external to your funding. Empirical discovery is insubordination.

  2. David Cummings 2:11 pm 01/27/2014

    Good post, John. Very interesting.

  3. badger 3:46 pm 01/27/2014

    I’d be a bit wary of painting most undergraduate lab courses with such a broad brush.
    Based on reports of a lack of reproducibility of original research combined with a conversation with twenty students, the author tries to imply that undergraduate labs are perhaps the source for unreliable data… owing to pressures such as good grades and excess time and effort. Is the argument that these later manifest into pressures such as tenure, funding, and so on…?
    As long as we’re dealing with first-hand experiences, I would say that I didn’t really notice any such manipulation of data during my undergraduate or graduate-level labs. A lot of the professors stressed the importance of not manipulating data, and I can remember a few lab reports of mine that reported inconclusive results… as opposed to what was expected in the experiment.

  4. oldfartfox 5:17 pm 01/27/2014

    I once had a chemistry professor who was suspicious of any student’s lab work that did not have at least one or two outliers that were well off of the regression line, on the basis that the average student’s technique just wasn’t that damned good.

    Unfortunately, he appears to have been an outlier himself.

  5. rshoff2 8:51 pm 01/27/2014

    You mean students are only interested in passing tests and getting a degree, then landing a prestigious job somewhere? You mean they come out as lemmings that can only hop on command, but don’t even question the purpose or why they are doing it? I’m sure I paint with a broad brush, and generalizations are, well, generally incorrect or unfair. But (a big but), in my life I’ve seen many people educated beyond their intelligence. It’s rampant!

  6. Von Stupidtz 1:40 am 01/28/2014

    Unlike Kekulé, who had 17 years, and Newton, who had an entire vacation, students in the lab have only a few hours to perform the experiment. It would be unfair to put all the blame on the system. There is always a trade-off between exploring the breadth of a subject and exploring its depth.

    Why blame the professors? Are students ready to spend extra hours trying to troubleshoot the reasons for faulty readings? Do they have enough time apart from their already busy schedule?

    On a different note, even basic principles of science, like the scientific method and the different types of bias involved in taking measurements, are not taught in schools. I bet half of the graduates don’t know what a double-blind test is.

    On a totally irrelevant note, I don’t know how I ended up visiting your blog.

  7. chaotic2 5:54 am 01/28/2014

    When I studied Physics, over 60 years ago, the emphasis was on accuracy, analysis of ‘errors’, explanation of ‘errors’ and integrity. Thus in the standard determination of g by a compound pendulum there was an electromagnet on the other side of the lab wall so that the apparent g could be varied by the Prof. Woe betide any student getting the book value of 9.81.
    When I returned to academia many years later I would get the class to try to replicate tables and graphs from well-established textbooks or well-cited papers. Then, when they failed, this opened new levels of thinking and discussion – a step toward developing the key attitudes and skills essential for their individual development as scientists.
    Universities should aim for education – a lifetime way of personal thinking. All too often they seem to respond to the increasing commercial-finance and personal-tenure pressures by training – easy to specify, to examine and to apply in standard conditions. This is obviously useful for many tasks but seldom helps students as scientists – their continuing, lifetime self-development of the iconoclasm, critical analysis and self-confidence to challenge the present and create new futures.
    p.s. When people create results they flinch from using certain numbers. Count the frequencies of the last digit in the results and then use statistics, even a simple Student’s t-test, to see how probable their reported results are. You will often be surprised.
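
    A minimal sketch of that last-digit check, assuming Python with SciPy is available. It is only an illustration, not anything from the post or the comment: it swaps in a chi-square goodness-of-fit test (the usual test for comparing digit counts against a uniform expectation) for the t-test mentioned above, and the readings and function names are hypothetical.

```python
# Rough sketch of the last-digit check suggested above (illustrative only).
# Idea: fabricated readings often have non-uniform final digits; with enough
# values, a chi-square goodness-of-fit test can flag that. A small p-value is
# a hint worth following up, not proof of fabrication.
from collections import Counter
from scipy.stats import chisquare


def last_digit_counts(reported):
    """Count the final digit of each reported value (values given as strings)."""
    digits = [s[-1] for s in reported if s and s[-1].isdigit()]
    counts = Counter(digits)
    return [counts.get(str(d), 0) for d in range(10)]  # observed counts for 0-9


def uniformity_test(observed):
    """Chi-square test of the observed digit counts against a uniform expectation."""
    result = chisquare(observed)  # default expected frequencies are all equal
    return result.statistic, result.pvalue


if __name__ == "__main__":
    # Hypothetical lab-notebook readings of g, recorded as strings.
    readings = ["9.81", "9.81", "9.80", "9.81", "9.79", "9.81", "9.82", "9.81"]
    stat, p = uniformity_test(last_digit_counts(readings))
    print(f"chi-square = {stat:.2f}, p = {p:.3f}")
```

    With real lab reports, the test needs dozens of readings before the chi-square approximation says anything meaningful; a handful of values, as in the toy example above, is far too few.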

  8. rkipling 12:35 pm 01/29/2014

    chaotic2,

    Great story. Thanks.

    Sounds like ethics courses have been needed for some time. But, students would probably find a way to cheat in Ethics 101 as well.


