
Are We Measuring Research Success Wrong?

Universities tout how much they spend, but it’s no virtue if that spending is inefficient

This article was published in Scientific American’s former blog network and reflects the views of the author, not necessarily those of Scientific American


Academic researchers are, for the most part, competitive. These intellectual gladiators like to succeed—but more than that, they like to win. Historically, this “winning” was determined by solving problems no one else had ever solved before, thereby driving a particular scientific discipline forward. Quantifying such success was challenging, subtle and nuanced, and, except in the rare cases of genuine breakthroughs, could really only be appreciated by others breathing the rarefied air high in the ivory tower.

Recently, however, many universities have been overrun by administrators without sufficient academic qualifications to obtain tenure in their own disciplines. These administrators needed some relatively simple way to determine which academic researchers were winning. The metric that has gained traction among such administrators is “research expenditures.” As a metric, “research expenditures” enables administrators to compare individual faculty members on what appears to be a level playing field. It also boils down the research efforts of an entire university to a single number to be used for simpleminded ranking. Perfect! Or is it?

Many faculty members already know there is a price to pay for this type of success. The more grants you win, the more time you have to spend administering them: managing budgets, writing reports and meeting with grant administrators. This reduces the time and effort you can put into research. What if the collective effect of focusing on research expenditures is actually slowing science down?




Consider, for example, data from the National Science Foundation, which rank U.S. universities by total R&D expenditures. These data can be combined with easily accessible information in Scopus about the number of papers a given university published in a given year and the citations those papers subsequently received. Dividing R&D expenditures by the number of publications then gives the average cost per publication for each institution, a direct measure of the “efficiency of research spending.”

In order to gauge how useful that research is, however, we need to know whether other researchers are using it, which can be read from citations. The dollars spent per citation can therefore be calculated for a given year and used as a proxy for the “usefulness of research spending.” Using 2016 data, the dollars spent per publication and the dollars spent (in 2016) per citation in 2017 (to allow some time for citations to accrue) were calculated for the 20 universities that “spent” the most money on R&D. Some interesting results about the economic efficiency of these top 20 universities are shown in Table 1 and allow us to answer two questions.
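As a minimal sketch of this arithmetic (the institution names and figures below are placeholders, not the actual NSF or Scopus values reported in Table 1), the two ratios could be computed as follows:

```python
# Sketch of the two spending-efficiency ratios described above.
# All figures are illustrative placeholders, not real NSF/Scopus data.

universities = {
    # name: (2016 R&D expenditures in dollars, 2016 publications, 2017 citations to those papers)
    "University A": (2_500_000_000, 7_000, 36_000),
    "University B": (1_100_000_000, 11_000, 62_000),
}

for name, (expenditures, pubs, cites) in universities.items():
    cost_per_publication = expenditures / pubs   # "efficiency of research spending"
    cost_per_citation = expenditures / cites     # "usefulness of research spending"
    print(f"{name}: ${cost_per_publication:,.0f} per publication, "
          f"${cost_per_citation:,.0f} per citation")
```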

Do research expenditures correlate with efficiency of research spending?

As can be seen in Table 1, the answer appears to be “no.” The university with the highest expenditures (Johns Hopkins University) also provides the worst bargain on a dollar-per-publication basis, with more than $348,000 spent for every publication. The most efficient of the top 20 expenditure universities is 20th on the list: Columbia University, at the relative bargain of about $100,000 per publication. However, publishing something does not necessarily mean that it is useful.

Do research expenditures correlate with research utility?

Again, the answer appears to be “no” based on this preliminary evaluation. On the usefulness of research spending, Johns Hopkins University again comes in last among the top 20 R&D expenditure universities: each citation earned in 2017 required more than $69,000 in 2016 research expenditures. Stanford University and MIT were the two most efficient in this group, at only about $17,000 and $18,000 per citation, respectively.

A quick spot-check of some random universities further down on the NSF’s list shows that it is quite possible to obtain much better scores than the top 20 R&D expenditure universities.

Limitations

Smart critics will point out numerous deficiencies in these metrics of the efficiency and usefulness of research spending. First, for the former, some research inherently costs more and takes longer to produce (e.g., long-term medical studies), which may partly explain Johns Hopkins University’s lackluster performance. In addition, some disciplines favor numerous short, rapid review-and-publication cycles (e.g., engineering), while others favor long review and publication cycles and longer papers or even books (e.g., history). This would explain why schools focused on engineering research tended to do better by this metric. Second, for the latter metric of usefulness, citation rates differ by field and discipline, citations can be driven by fads or can be negative (e.g., citing work for getting something wrong), and good research can lack citations if it is “before its time.”

In addition, the way the metrics were calculated is itself open to error; for example, if the R&D expenditures or output of a given institution changed radically over the period studied (2016–2017), the values could be distorted. Lastly, both metrics are open to manipulation (e.g., driving up the rate of paper publication and self-citation), just as current research expenditure values are already highly dubious because of inflated overhead rates.

Should research expenditures be used as a metric at all?

With accountants and research staff at every university and government funding agency dutifully crunching these numbers, easily millions of dollars are diverted from real science simply to track the metric. Does this make sense?

There are clear cases where the metric fails. George Justice, a professor of English, has pointed out that this metric does not work for the humanities. It also makes no sense for the arts or for many computational, modeling or theoretical subdisciplines (e.g., theoretical physics). Small research expenditures in these areas do not reflect the value of the work to its field or to society. Theoreticians in the physical sciences, admittedly, must be backed up by experiments, which is where the large research expenditures occur. Yet even in those cases the metric is found lacking after only a preliminary logic audit and review of the available data.

It is intuitively obvious that research that can garner funding is not necessarily the best, the most ethical or even the most useful (think of the billions of dollars spent on erectile dysfunction R&D and government subsidies of the resultant products). However, as with any other reported university metric that may influence university revenue, measuring and ranking research expenditures encourages driving them up arbitrarily. This reduces the economic efficiency of research and is thus counterproductive to the goals of science and society. For researchers this is perhaps most clear in the absurd and increasingly onerous overhead rates that universities charge researchers and disingenuously count as research expenditures.

Unfortunately, overhead rates are primarily used to subsidize bloated administrative ranks, their salaries and building depreciation, none of which directly benefits research (and political scientist Benjamin Ginsberg and others have convincingly argued that such administrative growth is directly counterproductive to the mission of universities as a whole). Even ignoring these issues, however, the preliminary evidence above shows that the use of research expenditures as a metric is directly counterproductive.

Even with all of the limitations of this short inquiry taken into account, it is clear that using research expenditures as a metric for the quality of research is flawed. Expenditure is an input; what is of value, and what should be measured carefully for each university, discipline and researcher, is the output from those expenditures. It is time, in other words, to move research expenditures to the denominator in university metrics.

In general, researchers are frugal with their hard-earned research funds, but if one of the primary metrics of success is spending, then investments that stretch research funding are discouraged. Unfortunately, using research expenditures as a proxy for academic output is simplistic in the best cases and has become counterproductive to the scientific enterprise.

Joshua M. Pearce is a professor in the departments of Materials Science and Engineering and Electrical and Computer Engineering at Michigan Technological University, and also has an appointment with the Department of Electronics and Nanoengineering at Aalto University in Espoo, Finland.
