
The Leiden University Ranking



The new Leiden Ranking (LR) has just been published, and I would like to talk a bit about its indicators, what it represents and, equally important, what it doesn't represent. The LR is a purely bibliometric ranking, based on data from Thomson Reuters' Web of Science database (there is another bibliometric ranking, Scimago, but it's based on Elsevier's Scopus). It ranks the 500 universities with the largest publication output and looks at two things: impact and collaboration. In this post I focus on the impact indicators.

According to the LR, the university with the highest impact is MIT. The first 20 places are dominated by American universities (the one exception is the École Polytechnique Fédérale de Lausanne in 13th place). The next non-American institutes on the list are the Weizmann Institute of Science (23rd) and the University of Cambridge (24th). The ranking covers reviews, research articles and letters, though letters carry less weight than the other two in the calculations. It's important to note that the LR doesn't deal with the Arts and Humanities, because the Web of Science doesn't cover these disciplines well, and the LR people acknowledge that limitation. If your university has a large, excellent philosophy department, it won't help its ranking one bit. Leiden works with several main impact indicators:

Mean Citation Score (MCS) – the average number of citations to the university’s publications.




Mean Normalized Citation Score (MNCS) – the average number of citations to the university’s publications, normalized for publication year, field and document type (reviews receive, on average, more citations than articles and letters).

Proportion of top 10% publications (PP(top 10%)) – publications from the same year, field and document type are compared, and the indicator is the proportion of the university’s publications that belong to the 10% most frequently cited (a small numerical sketch of all three indicators follows below).
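
To make the three indicators concrete, here is a minimal Python sketch with made-up publication data. The citation counts, expected citation values and top-10% flags are entirely hypothetical; they are only there to show how each indicator is computed from a university's publication list.

    # Minimal sketch of the three impact indicators, using made-up data.
    # Each publication carries its citation count, the expected number of
    # citations for publications of the same field, year and document type,
    # and whether it falls in the top 10% most cited of that group.

    pubs = [
        {"cites": 12, "expected": 6.0,  "top10": True},
        {"cites": 3,  "expected": 6.0,  "top10": False},
        {"cites": 0,  "expected": 2.5,  "top10": False},
        {"cites": 40, "expected": 10.0, "top10": True},
    ]

    mcs = sum(p["cites"] for p in pubs) / len(pubs)                   # plain average
    mncs = sum(p["cites"] / p["expected"] for p in pubs) / len(pubs)  # normalized average
    pp_top10 = sum(p["top10"] for p in pubs) / len(pubs)              # share in the top 10%

    print(f"MCS = {mcs:.2f}, MNCS = {mncs:.2f}, PP(top 10%) = {pp_top10:.0%}")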

That SHELX publication

In my first Impact Factor post I wrote about the case of “A short history of SHELX”. SHELX is, according to its website, “a set of programs for the determination of small (SM) and macromolecular (MM) crystal structures by single crystal X-ray and neutron diffraction.” That article skyrocketed the Impact Factor of the journal Acta Crystallographica Section A from about 2 to over 50. In the previous LR (2011/2012) it skewed the University of Göttingen’s MNCS. Normally, the PP(top 10%) and the MNCS are strongly correlated, but Göttingen was ranked 2nd by its MNCS and 238th by its PP(top 10%). The SHELX paper single-handedly increased Göttingen’s MNCS from 1.09 to 2.04. The PP(top 10%), on the other hand, was not influenced, because it isn’t sensitive to the exact number of citations: a paper that barely made it into the top 10% and the SHELX paper count the same.
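
To see why one extreme paper can move the MNCS but barely touch the PP(top 10%), here is a toy calculation in Python. The numbers are made up for illustration and are not Göttingen's actual data; the point is only that a mean-based indicator is sensitive to a single outlier, while a threshold-based proportion is not.

    # Toy demonstration (made-up numbers): 999 papers cited about as often as
    # expected, plus one SHELX-like outlier cited ~1000x the expected value.

    normal_papers = [1.0] * 999          # normalized citation scores around 1
    outlier = [1000.0]                   # one extreme outlier

    without = sum(normal_papers) / len(normal_papers)
    with_outlier = sum(normal_papers + outlier) / (len(normal_papers) + 1)

    print(f"MNCS without outlier: {without:.2f}")       # 1.00
    print(f"MNCS with outlier:    {with_outlier:.2f}")  # ~2.00

    # For PP(top 10%) the outlier is just one more top-10% paper, regardless of
    # whether it is cited 10x or 1000x the expected value, so the proportion
    # barely moves.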

Full and fractional counting

The LR counts publications in two ways. In the full counting method, all of the university’s publications (except for letters) have equal weight. In the fractional counting method, publications written in collaboration with other institutes receive less weight than those that weren’t: if an article has five authors and two of them are from the same university, the fraction given to that university is 0.4. The LR people prefer the fractional method, because this way they don’t count collaborative publications multiple times. The differences between the two methods are field-dependent. In clinical medicine, they can be substantial; in chemistry, engineering and mathematics they are rather weak (I assume that is because there are more collaborations in clinical medicine). Almost all universities have a higher PP(top 10%) with full counting than with fractional counting (high-impact publications are often products of collaborations). If we look at Mount Sinai School of Medicine, for example, we’ll find it goes from a PP(top 10%) of 19.2% in full counting to 15.4% in fractional counting (all the numbers are based on the 2011/12 ranking).
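
A quick sketch of the fractional weight, using the hypothetical five-author example from the paragraph above (the function name and data are mine, not the LR's code):

    # Full counting: every publication counts as 1 for the university.
    # Fractional counting: a publication counts as
    # (authors from the university) / (all authors).

    def fractional_weight(university_authors: int, total_authors: int) -> float:
        return university_authors / total_authors

    # The example from the text: 5 authors, 2 of them from the same university.
    print(fractional_weight(2, 5))   # 0.4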

The language bias

About 2% of the publications the LR is based on are not written in English; they are mostly in German, Chinese and French. Publications in languages other than English normally have low impact, a result of a smaller reader pool. For most universities, excluding non-English publications makes no difference to their ranking, but some French and German universities benefit significantly from the exclusion. Paris Descartes University goes from a PP(top 10%) of 9.9% to 11.9%, and the German University of Ulm goes from 9.9% to 11.1% (again, the numbers are from the 2011/12 ranking).

The Leiden Ranking is exactly what it says on the tin: a bibliometric ranking. Unlike the Shanghai Ranking, it’s not interested in the quality of teaching at an institute or in how many Nobel Laureates it has among its professors; unlike the Times Higher Education (THE) ranking, it isn’t partly based on a survey. It’s also not much help for prospective undergraduate students. It’s about publications, citations and co-authorships, and should be treated as such.

ETA: I was just corrected: there are two other bibliometric rankings. One is the "University Ranking by Academic Performance" and the other is the "Performance Ranking of Scientific Papers for World Universities" (Hat tip: Isidro Aguillo, @isidroaguillo).

Ludo Waltman, Clara Calero-Medina, Joost Kosten, Ed C. M. Noyons, Robert J. W. Tijssen, Nees Jan van Eck, Thed N. van Leeuwen, Anthony F. J. van Raan, Martijn S. Visser, & Paul Wouters (2012). The Leiden Ranking 2011/2012: Data collection, indicators, and interpretation. arXiv: 1202.3941v1