One of the challenges we face when using alternative metrics is interpreting what we measure. This is even trickier than interpreting traditional citation impact (which is challenging enough in itself) because “altmetrics” is an umbrella term for a wide range of activities. One can’t compare an article being bookmarked with its being reviewed by an expert, even though both are non-traditional metrics.

Taylor (2013) classified altmetric activity into five levels of engagement: social activity, which takes place on general social-media sites and is short and rapid (e.g. “likes”); component (e.g. data) re-use; scholarly commentary (in science blogs, F1000 reviews, etc.); scholarly activity on academic platforms (e.g. Mendeley bookmarking); and mass-media coverage.

In a new article, “Party papers or policy discussions: an examination of highly shared papers using altmetric data,” Taylor and Plume (2014) used data from Altmetric.com, which tracks four of the five categories (social activity, scholarly commentary, scholarly activity, and mass media; I think there are at least three other articles derived from their data). They collected data from the Altmetric API for four months, until January 17th, 2014, ending up with 13,793 scholarly articles that had at least one altmetric mention. They then looked at the top 0.5% of articles (69 overall) that received the most attention in the social-activity category (tweets, likes, etc.). Only 8 of the 69 articles are full-fledged original research articles. These eight have appealing titles (e.g. “Climate change: vast costs of Arctic change”) and come from elite journals (e.g. Nature).
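To make the selection step concrete, here is a minimal sketch of how one might pull the most-shared 0.5% out of a corpus like theirs. This is not the authors’ code, and the mention counts are simulated; only the corpus size (13,793 articles) and the 0.5% cutoff come from the paper.

```python
import math
import random

random.seed(1)

# Hypothetical corpus: 13,793 articles with heavy-tailed social-activity
# counts (altmetric data are typically very skewed).
corpus = {f"article_{i}": int(random.paretovariate(1.5)) for i in range(13_793)}

# 0.5% of 13,793, rounded up, gives the 69 articles the authors examined.
top_n = math.ceil(len(corpus) * 0.005)

# Rank articles by social-activity count and keep the top slice.
most_shared = sorted(corpus, key=corpus.get, reverse=True)[:top_n]
print(top_n)  # 69
```

The cutoff arithmetic is the only part anchored in the paper: 0.5% of 13,793 works out to 69 articles.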

The other 61 documents are news items/features from prestigious journals, and when I say “prestigious journals” I mean “Nature.” Almost all of them are either Nature News or Nature News Feature items. After scratching my head a few times, I tweeted at Mike Taylor, asking what happened to the rest of the journals. Euan Adie, founder of Altmetric, saw the Twitter conversation between Mr. Taylor and me and solved the mystery: Science, for example, is a major source of research news but doesn’t give its news items Digital Object Identifiers (DOIs), so Altmetric can’t collect their data. Nature, however, does give its news items DOIs, hence its altmetric dominance. This is a good reminder that alternative metrics (and traditional ones as well) are only as good as their data.

When Taylor and Plume analyzed the other altmetric categories, they found that only two articles appeared in all of them. One is a Nature article, “Cerebral organoids model human brain development and microcephaly,” which describes growing human brain models in cell culture and using them to model brain disorders. The second is a PNAS article, “Private traits and attributes are predictable from digital records of human behavior,” about predicting a person’s personal traits (e.g. sexual orientation, use of addictive substances) from his or her Facebook “likes.” The biggest overlap was between mass-media coverage and scholarly commentary (31 of the 69 articles; see Figure 1). This fits with results from my own research, which showed that most NEJM articles covered by blog posts aggregated in ResearchBlogging.org were also covered by the New York Times, the news agency Reuters, or both.
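The overlap analysis above is essentially set intersection: each altmetric category is a set of DOIs, and you ask which articles sit in several sets at once. A toy sketch, with invented DOIs and category memberships (the real study’s data are far larger):

```python
# Each category is a set of (made-up) DOIs of covered articles.
mass_media = {"doi:a", "doi:b", "doi:c"}
scholarly_commentary = {"doi:b", "doi:c", "doi:d"}
social_activity = {"doi:a", "doi:b", "doi:c", "doi:d"}
scholarly_activity = {"doi:b", "doi:e"}

# Articles appearing in every category (Taylor and Plume found only two
# such articles in their real dataset).
in_all = mass_media & scholarly_commentary & social_activity & scholarly_activity

# The pairwise overlap the authors highlighted: mass media vs. commentary.
media_and_commentary = mass_media & scholarly_commentary

print(sorted(in_all))               # ['doi:b']
print(sorted(media_and_commentary))  # ['doi:b', 'doi:c']
```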

Obvious limitations of the study are its short data-collection period (four months) and the relatively small number of articles. We’ve already discussed the biggest limitation: articles without a DOI can’t be taken into account. In general this isn’t a problem for research articles and reviews, which usually have a DOI, but it can hurt unpublished conference proceedings and other documents without one. The altmetric data collected and used in this study are skewed (normal for studies of this kind), with 15% of the articles receiving about 90% of the social activity. The authors wisely suggested that in the future, document types (e.g. editorial, review, research article) should be taken into account when analyzing altmetric data, as is done in traditional citation analysis. Overall, this is an interesting exercise in what altmetric data can and can’t do for us.
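The skew statistic (top 15% of articles capturing about 90% of the social activity) is a simple concentration measure, and it’s easy to compute for any set of counts. A sketch on simulated data; the ~90% figure is from the paper, not from this toy example:

```python
import random

random.seed(0)

# Simulated heavy-tailed mention counts for a corpus the size of the
# study's (13,793 articles), sorted from most to least active.
counts = sorted((int(random.paretovariate(1.2)) for _ in range(13_793)),
                reverse=True)

# Share of all social activity captured by the top 15% of articles.
top_15_pct = counts[: int(len(counts) * 0.15)]
share = sum(top_15_pct) / sum(counts)
print(f"top 15% of articles capture {share:.0%} of activity")
```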


Taylor, M., & Plume, A. (2014). Party papers or policy discussions: An examination of highly shared papers using altmetric data. Research Trends, (36), 17-20.

Taylor, M. (2013). Towards a common model of citation: Some thoughts on merging altmetrics and bibliometrics. Research Trends.