Information Culture

Thoughts and analysis related to science information, data, publication and culture.

Understanding the Journal Impact Factor – Part One

The views expressed are those of the author and are not necessarily those of Scientific American.


The journals in which scientists publish can make or break their careers. A scientist must publish in “leading” journals with a high Journal Impact Factor (JIF) – a number that high-impact journals display proudly on their websites. The JIF has become popular partly because it offers an “objective” measure of a journal’s quality and partly because it’s a neat little number that is relatively easy to understand. It’s widely used by academic librarians, authors, readers and promotion committees.

Raw citation counts emerged in the 1920s and were used mainly by science librarians who wanted to save money and shelf space by discovering which journals made the best investment in each field. The method had modest success, but it didn’t gain much momentum until the 1960s – perhaps because those librarians had to count citations by hand.

In 1955, Eugene Garfield published a paper in Science in which he discussed, for the first time, the idea of an Impact Factor based on citations. By 1964, he and his partners had published the Science Citation Index (SCI). (This is, of course, a very short and simplistic account of events; Paul Wouters’ PhD thesis, The Citation Culture, gives an excellent, detailed account of the creation of the SCI.) Around that time, Irving H. Sherman and Garfield created the JIF with the intention of using it to select journals for the SCI. The SCI was eventually bought by the Thomson Reuters giant (TR).

Eugene Garfield explains how to use the Science Citation Index, 1967.

To calculate a journal’s JIF for a given year, one takes the number of citations the journal received that year to items published in the two previous years, and divides it by the number of items published in those two years that the Journal Citation Reports (JCR) considers “citable.” TR offers a 5-year JIF as well, but the 2-year JIF is the decisive one.

2011 JIF = (2011 citations to articles published in 2009 and 2010) / (number of “citable” articles published in 2009 and 2010)
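In code, the two-year calculation can be sketched as follows. This is a minimal illustration only: the function name and the citation and item counts are invented here, since the real figures come from the Journal Citation Reports.

```python
def journal_impact_factor(citations_to_prior_two_years, citable_items_prior_two_years):
    """Two-year JIF: citations received this year to items published in the
    two previous years, divided by the 'citable' items from those two years."""
    return citations_to_prior_two_years / citable_items_prior_two_years

# Hypothetical 2011 JIF: 1,200 citations in 2011 to 2009-2010 articles,
# 400 'citable' items published in 2009-2010.
print(journal_impact_factor(1200, 400))  # -> 3.0
```

Note that the numerator counts citations to *everything* the journal published in those two years, while the denominator counts only “citable” items – an asymmetry that matters later.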

The JIF wasn’t meant for comparisons across disciplines, because disciplines differ in size and citation behavior (mathematicians, for example, tend to cite less; biologists tend to cite more). The journal Cell has a 2010 JIF of 32.406, while Acta Mathematica, the journal with the highest 2010 JIF in the Mathematics category, has a JIF of only 4.864.

Due to limited resources, the JCR covers about 8,000 science and technology journals and about 2,650 journals in the social sciences. That is a large database, but it still covers only a fraction of the world’s research journals. If a journal is not in the JCR database, not only are all citations to it lost, but so are all the citations its articles give to journals that are in the database. Another coverage problem is that, having been created in the US, the JCR has an American and English-language bias.

Manipulating the impact factor

Given the importance of the JIF for prestige and subscriptions, it was only to be expected that journals would try to influence it.

In 1997, the journal Leukemia was caught red-handed trying to boost its JIF by asking authors to cite more Leukemia articles. This is a very crude (but, had they not been caught, very effective) way of increasing the JIF. Journal self-citations can be completely legitimate – if one publishes in a certain journal, it makes sense that said journal has published other articles on the same subject. When done on purpose, however, it’s less than kosher, and it messes with the data (if you want to stay on an information scientist’s good side, do NOT mess with the data!). Part of the reason everyone has been trying to find alternatives to the JIF is that it’s so susceptible to manipulation (and that finding alternatives has become our equivalent of sport).

A subtler method of improving the JIF is to eliminate sections of the journal that publish items the JCR counts as “citable” but that are rarely cited. This way the number of citations (the numerator) stays almost the same, while the number of citable items (the denominator) goes down considerably. In 2010, the journal manager and the chair of the steering committee of The Canadian Field-Naturalist sent a letter to Nature titled “Don’t dismiss journals with low impact factor,” in which they described how the journal’s refusal to eliminate a rarely cited ‘Notes’ section lowered its JIF. Editors can also publish more review articles, which are better cited, or longer articles, which are usually better cited as well; if the journal is online-only, they don’t even have to worry about the thickness of the issues. Finally, the JIF doesn’t count letters, editorials, etc. as citable items, but citations to them do count toward the journal’s overall citation total – the numerator grows while the denominator stays the same.
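The arithmetic behind the denominator trick is easy to see with invented numbers (a hypothetical journal – none of these figures appear in the post):

```python
# Cutting a rarely cited 'Notes' section removes many citable items but
# few citations, so the JIF rises. All numbers here are made up.
citations, citable_items = 500, 400      # whole journal
notes_citations, notes_items = 20, 150   # the rarely cited section

jif_before = citations / citable_items
jif_after = (citations - notes_citations) / (citable_items - notes_items)

print(round(jif_before, 2), round(jif_after, 2))  # -> 1.25 1.92
```

Dropping the section costs only 4% of the citations but 37.5% of the citable items, so the JIF jumps by more than half without a single extra citation being earned.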

The JIF doesn’t have to increase through deliberate manipulation. The journal Acta Crystallographica Section A had rather modest JIFs prior to 2009, when its JIF skyrocketed to 49.926, and then climbed even higher in 2010 (54.333). For comparison, Nature’s 2010 JIF is 36.104. The rise came after a paper called “A short history of SHELX” was published in the journal in January 2008; it has been cited 26,281 times since then (all data are from Web of Knowledge, retrieved in May 2012). The article’s abstract says: “This paper could serve as a general literature citation when one or more of the open-source SHELX programs (and the Bruker AXS version SHELXTL) are employed in the course of a crystal-structure determination.”

Acta Crystallographica Section A Journal Impact Factor, Years 2006-2010


All this doesn’t mean that the JIF isn’t a valid index, or that it has to be discarded, but it does mean it has to be used with caution and in combination with other indices as well as peer reviews.

Note: I assumed the writers of The Canadian Field-Naturalist letter were the journal’s editors, which turned out to be a wrong assumption (see the comment below by Jay Fitzsimmons). I fixed the post accordingly.

Note 2: My professor, Judit Bar-Ilan, read through the post and noted two mistakes – first, the JIF, of course, is calculated by dividing the citations for the two previous years by the items of the year after, and not the way I wrote it. Second, while the first volumes of the SCI contained citations to 1961 articles, they were published in 1964 and not in 1961. I apologize for the mistakes.

Posts about the JIF by Bora

The Impact Factor folly

Measuring scientific impact where it matters

Why does Impact Factor persist more strongly in smaller countries
References/further reading

Bar-Ilan, J. (2012). Journal report card. Scientometrics. DOI: 10.1007/s11192-012-0671-3

Fitzsimmons J.M. & Skevington, J.H. (2010). Metrics: don’t dismiss journals with a low impact factor. Nature, 466, 179.

Garfield, E. (2006). The history and meaning of the journal impact factor. JAMA-Journal of the American Medical Association, 295(1), 90-93.

Seglen, P.O. (1997). Why the Impact Factor of journals should not Be used for evaluating research. British Medical Journal 314, 498–502.

Wouters, P. (1999). The citation culture. Unpublished Ph.D. thesis, University of Amsterdam, Amsterdam.




Hadas Shema About the Author: Hadas Shema is an information specialist at the Israeli Inter-University Center for E-Learning (Hebrew acronym: MEITAL). She has a B.Sc. in the Life Sciences and an MA and a PhD in Library & Information Science from Bar-Ilan University, Israel. Hadas tweets at @Hadas_Shema.


Comments
  1. Bee 4:52 am 05/8/2012

    “The JIF doesn’t to increase through deliberate manipulation.” ?

  2. Hadas Shema 7:16 am 05/8/2012

    Fixed, thank you!

  3. jtdwyer 9:34 am 05/8/2012

    Very good report on a very interesting subject. While varying degrees of journal status has become apparent to me as a lay person encountering the business of science, I didn’t realize it had been so formally established. I should have guessed… Thanks!

  4. Hadas Shema 11:33 am 05/8/2012

    Thank you!

  5. jayfitzsimmons 11:55 am 05/8/2012

    Excellent review of the Impact Factor. And thank you for citing the letter I co-authored – I owe you a drink if you ever come to Ottawa, Canada :) . Just a small correction, neither my co-author (Jeff Skevington) nor I are the editor-in-chief of The Canadian Field-Naturalist. I am the Journal Manager, and Jeff was the chair of the steering committee that was looking for (and found) a new editor-in-chief after our previous editor retired. Our titles were not listed in the letter so you couldn’t have known we weren’t editors. We’re a volunteer-run non-profit journal competing against mega-publishers, but we’re doing all right. The past two years have been a major transition for us, and we have more improvements planned for our journal over the next year (e.g., I’m setting up a manuscript management portal so authors can track their manuscripts’ progress through the editorial & review process).

    The Impact Factor is annoying because it is flawed but influential. As a researcher myself, I try to publish in high-impact journals because that’s the only way I will get a job. But it annoys me to do so because I am well aware of the shortcomings of the Impact Factor, and how vulnerable it is to manipulation as you point out (also see Falagas & Alexiou 2008 for a good review). So I have to play the game even though I know it’s a stupid game. Sigh … back to my work (analysis for a paper I’m hoping to submit to a high-impact ecology journal).

    Jay Fitzsimmons

    Falagas, M. E. and V. G. Alexiou. 2008. The top-ten in journal impact factor manipulation. Archivum Immunologiae et Therapiae Experimentalis 56:223-226.

  6. Hadas Shema 8:35 pm 05/8/2012

    Thank you! It was my pleasure. Your letter is an excellent example of the influence the JIF has on journals. I agree it’s extremely hard to compete in a mega-publishers world, but on the up side, being not-for-profit allows organizations to make decisions that aren’t necessarily financially beneficial, unlike the commercial journals. The JIF is problematic mainly because people become slaves to it, rather than use it as a helpful tool, and we find ourselves playing the game, whether we want to or not.

    Also, thank you for the correction – I’ll fix things as soon as I’m done typing here.


