A while back I covered a study called “From funding agencies to scientific agency,” by researchers from Indiana University’s Department of Information and Library Science (Bollen, Crandall, Junk, Ding & Börner, 2014), which suggested an alternative to today’s method of allocating research funds through peer review. The study proposed that each researcher be given a fixed sum for his or her research and send a fixed percentage of it to other researchers whose work they believe is worth funding.
The authors then used citation networks as a proxy for funding decisions in their model (the amount each researcher would allocate to another was proportional to the number of times they had cited that researcher during the five years before the simulation) and compared the results against funding agency data to see whether their predictions matched actual grant allocations. Based on their results, they suggested that their model could be an alternative to the current grant peer review system, saving time and resources (grant proposals!).
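The core mechanism is easy to sketch in code. The following is a minimal toy version, not the authors' actual implementation: every researcher receives the same base sum, keeps part of it, and passes the rest to peers in proportion to how often they cited each peer. All names, amounts, and the split fraction are hypothetical.

```python
# Toy sketch of the Bollen et al. funding-flow idea (all numbers and names
# hypothetical): each researcher receives the same base sum, keeps a share,
# and distributes the rest in proportion to past citations.

BASE_SUM = 100_000.0   # hypothetical fixed sum given to every researcher
GIVE_FRACTION = 0.5    # hypothetical fraction each researcher passes on

# citations[a][b] = times researcher a cited researcher b in the prior window
citations = {
    "alice": {"bob": 6, "carol": 2},
    "bob":   {"alice": 4},
    "carol": {"alice": 1, "bob": 1},
}

def allocate(citations, base=BASE_SUM, give=GIVE_FRACTION):
    # Everyone keeps the non-redistributed share of their base sum.
    funds = {r: base * (1 - give) for r in citations}
    for giver, cited in citations.items():
        total = sum(cited.values())
        for receiver, n in cited.items():
            # Receiver's share of the giver's outgoing pool is proportional
            # to how often the giver cited them.
            funds[receiver] += base * give * (n / total)
    return funds

funds = allocate(citations)
print(funds)  # alice: 125000.0, bob: 112500.0, carol: 62500.0
```

Note that the total budget is conserved: money only moves between researchers, which is what lets the authors compare the resulting distribution against real grant data.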
Naturally, this suggestion is controversial. Shahar Avin, a PhD candidate at the Department of History and Philosophy of Science, Cambridge, wrote a reply to this proposal, in which he noted what he considered serious flaws in the proposed mechanism:
1. Any funding system has to be effective (promote science) and reliable. Avin worries that the Bollen et al. system is neither, and will encourage bias: researchers would be able to allocate funds according to factors other than scientific merit. If this bias became apparent to the outside world, such as governments, it could discredit the research community and cause cuts in research funding. We might as well allocate funds by lottery.
2. Even the current peer review system is sometimes accused of hindering new, cutting-edge research. Avin is afraid that the proposed mechanism will end up funding mostly traditional rather than radical approaches, even more so than the current system does.
3. The creation of supernodes. Avin notes that network aggregation mechanisms, such as PageRank, are likely to create supernodes: scientists who receive very large amounts of funding. “It will be hard to avoid the scientific consensus drifting towards the views of “superstars”, which will in turn lower diversity, and ultimately may undermine the objectivity of scientific research.”
The study’s authors replied with a short letter, stating that “Avin is mistaken about a number of points.”
1. It’s true that a funding system has to be effective and reliable, but unfortunately the current system isn’t. The authors’ proposal is meant to correct the current less-than-ideal situation. The authors note that Google’s PageRank indeed ranks pages by merit, but does so without peer-reviewing every hyperlink.
2. The proposed mechanism protects innovators by giving them a fixed sum for their research, and also allows anyone who thinks their work has merit to send them funding, rather than leaving them dependent on review committees.
3. While the authors admit the funding distribution will be skewed, they suggest a “redistribution factor” that will correct unequal funding to, say, match the present funding distribution.
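The letter does not spell out how a “redistribution factor” would work, but one plausible reading is a compression step applied after allocation: raise each researcher's raw amount to an exponent between 0 and 1, then rescale so the total budget is unchanged. The exponent and the sample figures below are my own illustration, not the authors'.

```python
# Hypothetical "redistribution factor" (the letter gives no formula): compress
# the raw allocation with an exponent alpha in (0, 1], then rescale so the
# total budget is preserved. alpha = 1 leaves the distribution unchanged;
# smaller alpha flattens it toward equality.

def redistribute(funds, alpha=0.5):
    total = sum(funds.values())
    compressed = {r: amount ** alpha for r, amount in funds.items()}
    scale = total / sum(compressed.values())  # keep the overall budget fixed
    return {r: v * scale for r, v in compressed.items()}

raw = {"superstar": 900_000.0, "midcareer": 90_000.0, "newcomer": 10_000.0}
flattened = redistribute(raw, alpha=0.5)
print(flattened)  # same total budget, much smaller spread between extremes
```

With alpha = 0.5 the superstar-to-newcomer ratio drops from 90:1 to roughly 9.5:1, while the ranking of researchers is preserved, which is in the spirit of matching a chosen target distribution.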
The authors end their letter by calling for realistic tests of their model, “Rather than making philosophical deductions about what may or may not happen” (ouch!).
On the one hand, Bollen et al.’s model works, and if proven efficient and reliable in real-world experiments, it has the potential to cut costs and save time. On the other hand, the allocation of taxpayers’ money (most grants are public money, one way or another) is a sensitive issue; not because the system would necessarily be unreliable, as Avin suggests, but because the public will probably see it as such. The public’s reluctance to give large sums of taxpayers’ money to individuals to do with as they please (within the mechanism’s limits, of course) will be hard to overcome. There is also the matter of internationality. While we can cite whomever we want, should, say, an American researcher be allowed to allocate funds to a French researcher whose work he thinks is worth funding, at the American taxpayer’s expense?
Another challenge would be: how much to give? We cannot give biologists the same amounts we give philosophers, simply because biology requires much more funding. Should a physicist be allowed to send funding to a psychologist, for example?
While Bollen et al.’s model might be hard to put into practice, it represents out-of-the-box thinking and a readiness to challenge established practices, which I definitely appreciate.
Avin, S. (2014). Why we still need grant peer review. EMBO reports, 15(5), 465-466. DOI: 10.1002/embr.201438671
Bollen, J., Crandall, D., Junk, D., Ding, Y., & Börner, K. (2014). From funding agencies to scientific agency. EMBO reports, 15(2), 131-133.
Bollen, J., Crandall, D., Junk, D., Ding, Y., & Börner, K. (2014). Response: “Why we still need grant peer review”. EMBO reports, 15(5). PMID: 24795463