Since the whole South Korea stem-cell fiasco broke, there has been a lot of discussion about ethics in science. The discussion is certainly necessary, but it has a certain déjà vu quality to it. Every few years, a high-profile case of cheating comes to light and science goes through another round of soul-searching. People complain that co-authors are asleep at the wheel and mull proposals for mandatory ethics training. They rush to the defense of peer review, pointing out that it is not intended to catch outright fraud, and remind everyone that scientists are merely human, so there will always be some bad apples. Ultimately, researchers reassure themselves and the public that science's inherent self-corrective mechanisms will reassert themselves -- that mistakes, honest and otherwise, will eventually out.
That's all well and good. The trouble is that ethical lapses follow a power-law distribution: for every big one, there are several medium ones, and a lot of small ones. For every guy who holds people up at gunpoint, there are millions who go 60 in a 55 zone. Science is no different. The problems in biomedical research, where competition can be cutthroat and the dinging sound of corporate cash registers is never far off, have gotten some attention, but the pure sciences are hardly unaffected. I wrote about some of these issues a decade ago and have seen very little done since then to address them.
Do the small but daily compromises do as much damage to science, cumulatively, as the high-profile cases do? It seems entirely plausible. Precisely because these lapses are small, they seldom affect the final results, so they scoot right by science's self-corrective mechanisms. But that doesn't mean they don't distort science. I have seen people give up research projects or quit science when, for example, someone else appropriated their work. Moreover, this kind of scofflawing is tied up with broader issues that most people agree do distort science, such as publish-or-perish pressures and a nightmarish grant system that forces many researchers to spend more time getting money to do science than actually doing it.
The grant system, in particular, is ripe for reform. Competing for grants is, in general, a good thing. As unpleasant as it might be, grant-writing forces people to sharpen their ideas and clarify why they are doing what they are doing. Besides, what's the alternative? Keeping a lab funded would be even more of a time-consuming beauty contest if every researcher had to find and woo a rich patron.
But the benefits of competition come with an overhead cost. Apart from writing proposals, people are called on to sit on peer-review panels to judge them. These panels are often so flooded with proposals that out-of-the-box ideas, especially from lesser-known names, fall by the wayside. And researchers notoriously cut ethical corners to make sure they won't end up penniless. The tricks include writing grants for research you have, in effect, already done.
It beggars belief that the current balance of cost and benefit is optimal. I have no specific solutions; the problem is complicated. But I am troubled that few people are even talking about it. At the least, we need a serious dialogue about how to improve the grant system. Yet the professional societies in physics tell me they've never done a study of how much time people spend on grants. It is hard to know what to do about the system when so little is really known about its effects.
One helpful set of proposals came last fall from a National Academy of Sciences panel. Among other things, it argued for special grants to help young researchers jump-start their careers and for a chunk of money dedicated to "high-risk, high-payoff" research that peer-review panels tend to shunt aside. Such reforms wouldn't solve the problem of ethical lapses, but they would certainly be a step.
Update (January 16): The one ethical issue that does get some attention is gender discrimination. You can participate in the debate at Cosmic Variance.