Each year, private and public organizations dedicate significant resources to research and development (R&D). The U.S. federal government alone sets aside billions for R&D in the federal budget, considering it to be a "vital investment in [the country's] future." (1) Under the American Recovery and Reinvestment Act of 2009 (ARRA), more than $10 billion was dedicated to R&D and basic science activities. But, are these billions enough? Or are they too much?
In the face of limited financial resources, research and development activities can quickly become the first cut. Some of the reasons behind the choice to eliminate R&D funding stem from the difficulty one encounters when trying to quantify the impacts of these programs. The combination of their relatively long timescales (many years or decades) and their indirect impacts (for examples, see "10 NASA Inventions That You Might Use Every Day") creates a challenge when one tries to establish a consistent methodology for quantifying the effectiveness of R&D programs. Further, attempts to compare R&D programs that differ vastly in topic and impact area can lead to philosophical discussions that further inhibit the quantitative process.
Despite these challenges, three main quantitative measures are still frequently used to assess the "success" or impact of R&D investments:
1. Return on Investment (ROI)
2. Patent Counts
3. Bibliometrics
While each of these can give a sense of the effectiveness of R&D investments, the system that surrounds research and development projects makes it difficult to consistently apply these methods in a useful manner. It is therefore quite challenging to make links between investment and resulting gains.
Option 1 - Return on Investment (ROI)
The Return on Investment (ROI) measure tries to link R&D activities with profits or other monetary gains. If this link can be identified effectively, ROI can provide a measure of investment vs. profits ($ vs. $). But, measuring the profits resulting from R&D investments is not a simple task, particularly in the public sector. Many factors inhibit the calculation of "returns," including the relatively long timescale associated with R&D projects compared to same-year capital projects, and the non-linear nature of the innovation process. Often, these innovations cannot be directly linked to a root R&D investment, which might have been funded several decades (and from several different pots) prior to the resulting profitable activity.
Further, the frequently complex pathway from lab to shelf can make it difficult to come to consensus on a methodology for adding up dollars to determine investments and profits. These problems increase in the public sector, where it becomes increasingly difficult to identify returns on investment (profits) - specifically, which returns should be included. A common example can be found in the U.S. national labs, where technology is frequently licensed to private entities for commercialization. In these scenarios, there can be much debate as to what profits should "count." Should returns be limited to the licensing fees, or should all profits by the private sector entity be included? In the 1970s and early 1980s, both the National Institute of Standards and Technology (NIST) and NASA abandoned ROI calculation projects in part because of serious methodological problems and disagreements.
Despite these limitations, many still attempt to use ROI to determine the "success" of their R&D programs. These attempts generally appear to be more successful in cases of private R&D, with shorter time scale projects (a few years). One older (1993) study of private (industrial) investment in research and development calculated that the annual ROI for these projects was between 20% and 30%.
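The basic arithmetic behind an ROI figure like the one above is straightforward; the hard part, as discussed, is deciding which dollars to count. As a minimal sketch with entirely hypothetical numbers, the calculation might look like this:

```python
def simple_roi(investment, returns):
    """Total ROI as a fraction: (returns - investment) / investment."""
    return (returns - investment) / investment

def annualized_roi(investment, returns, years):
    """Geometric annualization of a multi-year return."""
    total_growth = returns / investment
    return total_growth ** (1 / years) - 1

# Hypothetical example: a $1M R&D investment yielding $2.5M after 4 years.
total = simple_roi(1_000_000, 2_500_000)            # 1.5, i.e., 150% total ROI
annual = annualized_roi(1_000_000, 2_500_000, 4)    # ~0.257, i.e., ~25.7% per year
```

Note that these hypothetical inputs land in the 20-30% annual range reported by the 1993 study; in practice, the disputes described above are about what belongs in `investment` and `returns`, not about the formula.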
Option 2 - Patent Counts

The number of patents linked to an original R&D investment can be used as another indicator of the "success" of that investment. But, the number of patents issued varies greatly by topic area, making it difficult to compare the relative effectiveness of unrelated programs with any degree of granularity or, arguably, to establish causation. Further, patent counts do not generally distinguish between patents covering minor system improvements and those representing major leaps in innovation.
Option 3 - Bibliometrics

Bibliometrics, broadly speaking, refers to publication and citation counts, where one determines the total number of publications (for example, scientific journal articles or conference papers) that are released by a particular research group or research program. These counts are then linked to funding sources to determine the overall "success" of that funding based on its perceived impact in the field. Using bibliometrics, one can establish a quantitative measure that is arguably related to the impacts of R&D funding. But, as with patent counting, bibliometrics struggles to incorporate the value of these publications (for example, how much more or less is a journal paper "worth" compared to a conference paper? 2 times? 10 times?). In other words, it is difficult to interpret the level of research innovation and impact that has occurred as the result of a single R&D grant.
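The valuation problem raised above can be made concrete: any weighted publication score forces someone to pick the weights. The sketch below uses completely hypothetical weights (there is no agreed-upon scheme) just to show how sensitive the resulting "success" number is to that choice:

```python
# Hypothetical weights -- as the text notes, there is no consensus on how much
# a journal paper is "worth" relative to a conference paper.
WEIGHTS = {"journal": 1.0, "conference": 0.5, "workshop": 0.25}

def weighted_publication_score(publications, weights=WEIGHTS):
    """Sum weighted publication counts attributed to one funding source.

    `publications` is a list of (publication_type, count) pairs.
    """
    return sum(weights[pub_type] * count for pub_type, count in publications)

# A grant credited with 3 journal papers, 4 conference papers, 2 workshop papers:
score = weighted_publication_score([("journal", 3), ("conference", 4), ("workshop", 2)])
# 3*1.0 + 4*0.5 + 2*0.25 = 5.5
```

Doubling the conference weight to 1.0 raises the same grant's score from 5.5 to 7.5, which illustrates why two evaluators using bibliometrics can reach different conclusions about the same portfolio.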
Option 4 - Consulting the "Experts"
In response to the limitations of using these three measures - ROI, patents, and bibliometrics - to measure the "success" of R&D program investments, some groups have turned to subject matter experts and peer review. For example, in 2009 the U.S. Department of Energy published a report on its Geothermal Technologies Program that drew conclusions from feedback received from industry experts. Using a risk analysis process on their geothermal research, development, and demonstration (RD&D) portfolio, the authors gathered input from industry experts as well as national lab researchers and subcontractors to estimate RD&D impacts. Presumably, this study was conducted in order to assess the advisability of continued funding, and the appropriate funding level, for this group.
But, even when consulting the "experts," the authors of this 2009 report encountered limitations. (2) Specifically, these types of modified peer review processes, due to their reliance on individual judgments, can become subject to bias resulting from the expertise and opinions of the "experts" themselves. And, because of the labor intensity of this process, peer review can become quite expensive, which can result in fewer individuals being consulted, making it harder to tease out this bias in the system.
More Questions Than Answers
So, how can one measure the "success" of R&D investment? In the private sector, ROI might be a good option for shorter term projects. But, the bottom line appears to be that current methods aren't quite enough. Each has limitations, often rooted in the difficulty of consistently applying methodologies across programs and sectors. As a result, one is frequently left with more questions than answers.
(1) Nemet et al., "U.S. Energy Research and Development: Declining Investment, Increasing Need, and the Feasibility of Expansion," available online.
(2) Young and Augustine, "Report on the U.S. Geothermal Technologies Program's 2009 Risk Analysis," available online.
H/T to MG-H for his thoughtful comments in our discussion on this topic.