
Understanding the Journal Impact Factor Part One



The journals in which scientists publish can make or break their careers. A scientist must publish in "leading" journals with a high Journal Impact Factor (JIF), which you can see displayed proudly on high-impact journals' websites. The JIF has become popular partly because it gives an "objective" measure of a journal's quality and partly because it's a neat little number that is relatively easy to understand. It's widely used by academic librarians, authors, readers and promotion committees.

Raw citation counts emerged in the 1920s and were used mainly by science librarians who wanted to save money and shelf space by identifying which journals were the best investment in each field. The method had modest success, but it didn't gain much momentum until the 1960s, probably because those librarians had to count citations by hand.

In 1955, Eugene Garfield published a paper in Science in which he discussed, for the first time, the idea of an impact factor based on citations. By 1964, he and his partners had published the Science Citation Index (SCI). (Of course, this is a very short, simplistic account of events; Paul Wouters' PhD thesis, The Citation Culture, gives an excellent, detailed account of the creation of the SCI.) Around that time, Irving H. Sherman and Garfield created the JIF with the intention of using it to select journals for the SCI. The SCI was eventually bought by the Thomson Reuters giant (TR).




Eugene Garfield explains how to use the Science Citation Index, 1967.

To calculate the JIF, one takes the total number of citations the journal received in a given year to items published in the two previous years and divides it by the number of items published in those two years that the Journal Citation Reports (JCR) considers "citable." TR offers 5-year JIFs as well, but the 2-year JIF is the decisive one.

Example:

JIF = (2011 citations to 2009+2010 articles) / (no. of "citable" articles published in 2009+2010)
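To make the arithmetic concrete, here is a minimal sketch of the 2-year calculation in Python, using invented numbers (the JCR's actual counts for any given journal will of course differ):

```python
# A minimal sketch of the 2-year JIF arithmetic. All numbers are invented;
# the real JCR counts for any journal will differ.

def two_year_jif(citations_to_prev_two_years, citable_items_prev_two_years):
    """Citations received in year Y to items published in Y-1 and Y-2,
    divided by the number of 'citable' items published in Y-1 and Y-2."""
    return citations_to_prev_two_years / citable_items_prev_two_years

# Hypothetical journal: 1,200 citations in 2011 to its 2009+2010 articles,
# and 400 'citable' articles published in 2009+2010.
print(two_year_jif(1200, 400))  # -> 3.0, i.e. a 2011 JIF of 3.000
```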

The JIF wasn't meant for comparisons across disciplines, because disciplines differ in size and in citation behavior (e.g. mathematicians tend to cite less, biologists tend to cite more). The journal Cell has a 2010 JIF of 32.406, while Acta Mathematica, the journal with the highest 2010 JIF in the Mathematics category, has a JIF of 4.864.

Due to limited resources, the JCR covers about 8,000 science and technology journals and about 2,650 journals in the social sciences. It's a large database, but it still covers only a fraction of the world's research journals. If a journal is not in the JCR database, not only are all citations to it lost, but all the citations that articles in that journal give to journals in the database are lost as well. Another coverage problem is that, having been created in the US, the JCR has an American and English-language bias.

Manipulating the impact factor

Given the importance of the JIF for prestige and subscriptions, it was only to be expected that journals would try to influence it.

In 1997, the journal Leukemia was caught red-handed trying to boost its JIF by asking authors to cite more Leukemia articles. This is a very crude (but, had they not been caught, very effective) method of increasing the JIF. Journal self-citations can be completely legitimate: if one publishes in a certain journal, it makes sense that said journal has published other articles on the same subject. When done on purpose, however, it's less than kosher, and it messes with the data (if you want to stay on an information scientist's good side, do NOT mess with the data!). Part of the reason everyone has been trying to find alternatives to the JIF is that it's so susceptible to manipulation (and that finding alternatives has become our equivalent of a sport).

A subtler method of improving the JIF is to eliminate sections of the journal that publish items the JCR counts as "citable" but that are rarely cited. This way the number of citations (the numerator) remains almost the same, while the number of citable items (the denominator) goes down considerably. In 2010, the journal manager and the chair of the journal's steering committee of The Canadian Field-Naturalist sent a letter to Nature titled "Don't dismiss journals with low impact factor," in which they detailed how the journal's refusal to eliminate a rarely cited 'Notes' section lowered its JIF. Editors can also publish more review articles, which are better cited, or longer articles, which are usually better cited as well; if the journal is cyberspace-only, they don't even have to worry about the thickness of the issues. The JIF doesn't consider letters, editorials, etc. as citable items, but if they are cited, those citations count toward the journal's overall citation count while the number of citable items remains the same.
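As a hypothetical illustration of that denominator effect, the sketch below shows what happens when a journal drops a rarely cited 'Notes' section. The figures are invented and are not The Canadian Field-Naturalist's actual data:

```python
# Dropping rarely cited 'citable' items shrinks the denominator while barely
# touching the numerator. All figures are invented for illustration.

citations = 500   # citations received to the journal's previous two years
articles = 200    # well-cited 'citable' research articles in those two years
notes = 100       # rarely cited 'citable' notes (say 10 of the 500 citations)

jif_with_notes = citations / (articles + notes)
jif_without_notes = (citations - 10) / articles  # lose ~10 citations, but 100 items

print(round(jif_with_notes, 3))     # 1.667
print(round(jif_without_notes, 3))  # 2.45
```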

The JIF doesn't have to rise through deliberate manipulation, either. The journal Acta Crystallographica Section A had rather modest JIFs prior to 2009, when its JIF skyrocketed to 49.926, and it climbed even higher in 2010 (54.333). For comparison, Nature's 2010 JIF is 36.104. The rise happened after a paper called "A short history of SHELX" was published in the journal in January 2008; it has been cited 26,281 times since (all data are from Web of Knowledge and were retrieved in May 2012). The article abstract says: "This paper could serve as a general literature citation when one or more of the open-source SHELX programs (and the Bruker AXS version SHELXTL) are employed in the course of a crystal-structure determination."
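A rough back-of-the-envelope sketch, with invented figures loosely patterned on (but not taken from) the Acta Crystallographica Section A case, shows how a single heavily cited paper can dominate the JIF of a journal with relatively few citable items:

```python
# One enormously cited paper added to a modest citation base. The figures
# below are invented for illustration; they are not JCR data.

ordinary_citations = 400       # 'ordinary' citations to the two-year window
blockbuster_citations = 11000  # citations to a single review-style paper in that window
citable_items = 220            # 'citable' items published in the window

print(round(ordinary_citations / citable_items, 3))                            # 1.818
print(round((ordinary_citations + blockbuster_citations) / citable_items, 3))  # 51.818
```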

Acta Crystallographica Section A Journal Impact Factor, Years 2006-2010

All this doesn't mean that the JIF isn't a valid index, or that it has to be discarded, but it does mean it has to be used with caution and in combination with other indices as well as peer review.

Note: I assumed the writers of The Canadian Field-Naturalist letter were the journal's editors, which turned out to be a wrong assumption (see the comment below by Jay Fitzsimmons). I fixed the post accordingly.

Note 2: My professor, Judit Bar-Ilan, read through the post and noted two mistakes. First, the JIF, of course, is calculated by dividing the citations received in a given year to articles from the two previous years by the number of citable items published in those two years, not the way I originally wrote it. Second, while the first volumes of the SCI contained citations to 1961 articles, they were published in 1964, not in 1961. I apologize for the mistakes.

Posts about the JIF by Bora

The Impact Factor folly

Measuring scientific impact where it matters

Why does Impact Factor persist more strongly in smaller countries

References/further reading

Bar-Ilan, J. (2012). Journal report card. Scientometrics. DOI: 10.1007/s11192-012-0671-3

Fitzsimmons, J.M. & Skevington, J.H. (2010). Metrics: don't dismiss journals with a low impact factor. Nature, 466, 179.

Garfield, E. (2006). The history and meaning of the journal impact factor. JAMA-Journal of the American Medical Association, 295(1), 90-93.

Seglen, P.O. (1997). Why the impact factor of journals should not be used for evaluating research. British Medical Journal, 314, 498–502.

Wouters, P. (1999). The citation culture. Unpublished Ph.D. thesis, University of Amsterdam, Amsterdam.

http://thomsonreuters.com/products_services/science/science_products/a-z/journal_citation_reports/#tab2