
Thoughts about altmetrics (an unorganized, overdue post)


This article was published in Scientific American’s former blog network and reflects the views of the author, not necessarily those of Scientific American.


I haven't written about altmetrics so far. Not because it’s not a worthwhile subject, but because there’s so much that I don’t know where to begin.

The term “altmetrics” was first suggested in a tweet by Jason Priem, a PhD student at the University of North Carolina at Chapel Hill and co-founder, with Heather Piwowar, of Impact Story (disclaimer: Jason is an occasional co-author). It is short for “alternative metrics,” though these metrics are better described as “complementary metrics” or “additional metrics.” So, altmetrics is the latest buzzword in a long line of something-metrics terms for Web-based indices.

Three years ago Jason Priem, Dario Taraborelli, Paul Groth and Cameron Neylon authored the Altmetrics Manifesto (2010). Its first line is a painful truth: “No one can read everything.” They wrote about the ways our traditional filters – peer-reviewed journals – are lacking. They listed problems with peer review, existing metrics like the h-index, and the Journal Impact Factor’s many flaws as factors in the decision to look for new indices. What kind of new indices? Well, mostly social-media based ones. Everything from Twitter to Instagram can be a potential source for altmetric data, as long as there’s scholarly material there. Of course, to me research blogs seem the most promising source, but you know I’m biased. Less than three years later, the sub-field is practically exploding with altmetric articles. It’s interesting to note that most of them, when trying to prove a source is valid, show its correlation with citations. Citations are still the standard for impact.
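To make that concrete, here is a minimal sketch of the kind of check those validation studies run: a Spearman rank correlation between counts from an altmetric source and citation counts. The per-article numbers below are invented for illustration only; real studies work with thousands of articles collected from publisher and altmetric aggregator APIs.

# Sketch: rank-correlating an altmetric source with citations.
# The counts below are made up for illustration.
from scipy.stats import spearmanr

# per-article counts: (tweets, citations)
articles = [
    (12, 3), (0, 1), (45, 20), (2, 0), (7, 5),
    (100, 8), (3, 2), (0, 0), (15, 11), (6, 4),
]

tweets = [a[0] for a in articles]
citations = [a[1] for a in articles]

# Spearman is usually preferred over Pearson here because both
# distributions are heavily skewed: a few articles get most of the attention.
rho, p_value = spearmanr(tweets, citations)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")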




The Public Library of Science (PLoS) journals are pioneers in presenting both traditional and alternative metrics for each article. The metrics page for a PLoS article shows everything from usage metrics to Mendeley bookmarks to journal citations. Nature has recently adopted the altmetric “donut” from the commercial firm altmetric.com, which collects data from various altmetric sources and presents them as, well, donuts.
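For the curious, here is a rough sketch of what pulling those article-level metrics programmatically could look like. The endpoint, the API-key parameter and the response fields are my assumptions based on the PLoS ALM service of that era, so treat all of them as placeholders and check the current documentation before relying on any of it.

# Sketch: pulling article-level metrics for one PLoS article.
# Endpoint, API key parameter and response shape are assumptions.
import requests

ALM_URL = "http://alm.plos.org/api/v3/articles"  # assumed endpoint
API_KEY = "YOUR_API_KEY"                          # placeholder
DOI = "10.1371/journal.pone.0000000"              # placeholder DOI

resp = requests.get(ALM_URL, params={"api_key": API_KEY, "ids": DOI})
resp.raise_for_status()

article = resp.json()[0]
for source in article.get("sources", []):
    # e.g. "counter" (usage), "mendeley" (bookmarks), "crossref" (citations)
    print(source["name"], source.get("metrics", {}).get("total"))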

Donuts tend to be pale blue because that is the color representing Twitter, and there are many more tweets than, say, blog posts. The dark blue is for Facebook, yellow for blogs, etc. Unfortunately, Nature’s use of the altmetric donut in every article has ruined my “altmetric” Google alert, and now I keep getting regular Nature articles instead of relevant altmetric ones.
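Just for intuition, here is a toy sketch of why the donuts skew pale blue: turn per-source counts into proportions and Twitter usually dominates the ring. The counts are invented, and altmetric.com’s actual weighting and palette are its own.

# Toy sketch: per-source counts turned into donut proportions.
# Counts are invented; altmetric.com uses its own weighting and palette.
counts = {"twitter": 180, "facebook": 25, "blogs": 6, "news": 3}

total = sum(counts.values())
for source, n in sorted(counts.items(), key=lambda kv: -kv[1]):
    share = n / total
    print(f"{source:<9} {n:>4}  {share:6.1%}")

# With numbers like these, roughly 84% of the ring would be the
# pale-blue Twitter segment, which is why most donuts look blue.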

Holbrook, Barr and Brown (2013) wrote a short letter to Nature in which they summed up many possibilities for alternative metrics; everything from “Provoking lawsuits” through “Textbooks authored” to “Coining of a phrase.” I’m not sure about all the metrics in the table – I doubt the authors are – but they represent fascinating possibilities.

Table 1: Other possible indicators of impact

Public engagement | Academic community | Media
Protests, demonstrations or arrests | Invitations to present, consult or review | Article downloads
Provoking lawsuits | Interdisciplinary achievements | Website hits
Angry letters from important people | Adviser appointments | Media mentions
Meetings with important people | Reputation of close collaborators | Quotes in media
Participation in public education | Reputation as a team member | Coining of a phrase
Mention by policy-makers | Textbooks authored | Trending in social media
Public research discussions | Citation in testimonials and surveys | Blog mentions
Muckraking | Audience size at talks and meetings | Book sales
Quotes in policy documents | Developing a useful metric | Buzzword invention
Rabble rousing | Curriculum input | Social-network contacts
Engagement with citizens abroad | Faculty recommendations, prizes | Television and radio interviews

New metrics come with new challenges. One challenge is the reliability of data. Take the problem of name variations: it has always been one of the banes of bibliometricians’ existence, and it has followed us to alternative metrics. That, of course, is why you should have an ORCID number. An ORCID number is to researchers what the DOI is to articles: no matter the name variation, your number stays the same, so all your products are credited to you, and you have the bibliometricians’ eternal gratitude as well. The new tools for aggregating altmetric data are sometimes less than reliable, and each shows different results(!). Stefanie Haustein, my co-author on two conference articles, showed this at the end of her presentation of the second article, and you can see the results for yourself:

Different metrics (taken from Haustein et al.'s presentation)

As Stefanie says, these tools are black boxes.
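
Coming back to ORCID for a moment, here is a minimal sketch of what resolving a researcher’s output through their identifier could look like, regardless of how their name is spelled on each paper. The API version, the example iD and the response fields are assumptions on my part; consult ORCID’s documentation for the real thing.

# Sketch: listing works attached to one ORCID iD, independent of
# name variations. API version and response fields are assumptions.
import requests

ORCID_ID = "0000-0002-1825-0097"  # placeholder iD used in ORCID examples
url = f"https://pub.orcid.org/v3.0/{ORCID_ID}/works"

resp = requests.get(url, headers={"Accept": "application/json"})
resp.raise_for_status()

for group in resp.json().get("group", []):
    summary = group["work-summary"][0]
    title = summary["title"]["title"]["value"]
    print(title)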

Another problem is the potential gaming of alternative metrics. Not that citations aren’t “gamed” regularly, but it takes a bit more effort, and you can’t buy citations (perhaps barter them, but usually not buy outright). On the other hand, today one can buy tweets, Facebook “likes” and Mendeley bookmarks. Once publishers, editors and researchers take notice of alternative metrics, an arms race will begin, and it will be much harder for bibliometricians to catch fraud, since there are so many data sources.

What do alternative metrics actually tell us? In my opinion, that is the biggest challenge. There are many theories about the meaning of citations, but the general assumption is that citations represent the influences on the citing authors (a gross over-simplification, I know). However, the sources for alternative metrics vary significantly and (we assume) represent different kinds of impact. You can’t compare tweets to bookmarks to an F1000 expert opinion to a blog post. Another issue is that relevant sources change. Will Twitter be a relevant source in five years? What about Mendeley? Alternative metrics are dependent on information sources that might not be sustainable.

When will alternative metrics stop being alternative? My answer is that they’ll continue to be “alternative” until you see promotion committees and funding agencies use them regularly.

Something completely different:

If you remember, I wrote a while back about the Leiden ranking. In the mathematics and computer science field there is a weird anomaly: the Turkish university Ege is in second place, after Stanford and before Harvard. When I first saw it I assumed it was a “Shelx” case, where one or two high-impact articles allowed Ege to reach that ranking. However, a Turkish reader, Tansu Kucukoncu, wrote to me that the Ege ranking might be based on fraud: out of Ege University’s 210 publications in the field, 65 were written by one person, Ahmet Yildirim, who has 279 publications overall. Yildirim has been accused of plagiarism by the Journal of Mathematical Physics, and his article was retracted. If you want to read more about the case, I recommend a post by Paul Wouters from Leiden, who is involved with their ranking.

Stefanie Haustein, Isabella Peters, Judit Bar-Ilan, Jason Priem, Hadas Shema, & Jens Terliesner (2013). Coverage and adoption of altmetrics sources in the bibliometric community. ISSI Conference. arXiv: 1304.7300v1

Jason Priem, Dario Taraborelli, Paul Groth, & Cameron Neylon (2010). Altmetrics: a manifesto.