Information Culture

Thoughts and analysis related to science information, data, publication and culture.

Understanding the Journal Impact Factor – Part Two

The views expressed are those of the author and are not necessarily those of Scientific American.


Despite its many faults (see part I), the Journal Impact Factor (JIF) is considered an influential index of a journal’s quality, and publishing in high-impact journals is essential to a researcher’s academic career.

Reminder: to calculate, for example, the 2010 JIF for a journal:

JIF = (2010 citations to articles published in 2008 and 2009) / (number of “citable” articles published in 2008 and 2009)
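The arithmetic is simple enough to sketch in a few lines of Python; the journal and all numbers below are invented for illustration:

```python
def impact_factor(citations, citable_items):
    """JIF for year Y: citations received in year Y by articles
    published in years Y-1 and Y-2, divided by the number of
    "citable" items published in those two years."""
    return citations / citable_items

# Hypothetical journal: 420 citations in 2010 to its 2008-2009
# articles, of which 150 counted as "citable" items.
print(impact_factor(420, 150))  # 2.8
```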

The JIF did start as a tool to help librarians with subscription decisions, but its influence among authors, readers and editors has grown over time, and so has the scientific community’s interest in it. More and more papers have been written about the JIF over the last thirty years (graph 1).

Graph 1: Number of papers on the JIF indexed in Web of Science, 1963–2006. (Archambault & Lariviere, 2009).


Different field, different JIF

JIFs vary widely with the discipline. Journals dealing with specialized or applied areas will have, on average, lower JIFs than those in pure or fundamental areas (graph 2). The average number of references per article correlates with the citation impact of each field: biochemistry articles, for example, receive about twice as many citations as mathematics articles.

JIFs also correlate with the number of authors per article, because the more authors an article has, the better the chances it will be self-cited. A study of Lancet articles (Kostoff, 2007) found that even among articles published in the same journal, the most-cited articles had on average 3-5 times more authors than the least-cited ones. So the social sciences, with about two authors per article, have a lower citation impact than the fundamental life sciences, with more than four authors per article. The JIFs of arts and humanities journals are quite pitiful, because scholars in those fields rarely cite journal articles: the highest 2010 JIF in the JCR Cultural Studies category is 0.867.


Graph 2: Subject Variation in Impact Factors (Amin & Mabe, 2007)

Thomson Reuters recently launched a new product, the Book Citation Index. It currently covers 30,000 books from publication year 2005 onwards, and 10,000 new books will be added every year. Of course, this means that all book citations prior to 2005 will still go unnoticed, but it’s better than nothing, and we might finally see a bit of humanities coverage.

Drowning in a one-meter-deep (on average) pool.

When researchers publish in high-impact journals, they enjoy the journals’ prestige even if their own articles are rarely cited, or not at all. It also works the other way around: a well-cited article can raise a JIF considerably, especially in a small journal. A study of three biochemistry journals showed that 50% of the journals’ citations came from the top 15% most-cited articles, and that the top half of the most-cited articles were cited ten times as much as the bottom half. Articles published in the same journal can have a completely different scientific impact (as measured by citations, of course).
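A quick sketch shows how this kind of skew plays out: the citation counts below are invented for illustration, but they mimic the pattern where a handful of heavily cited papers supply most of a journal’s citations while the JIF averages over all of them.

```python
# Invented citation counts for 20 articles in one hypothetical journal:
# a few heavily cited papers and a long tail.
citations = [120, 85, 60, 15, 12, 10, 8, 7, 6, 5,
             4, 4, 3, 2, 2, 1, 1, 0, 0, 0]

total = sum(citations)
top_15_percent = sorted(citations, reverse=True)[:3]  # 3 of 20 articles
share = sum(top_15_percent) / total

# The JIF-style average hides the fact that most articles
# are cited far less than the mean.
print("Mean citations per article:", total / len(citations))
print(f"Top 15% of articles supply {share:.0%} of all citations")
```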

The two-year citation window

The standard citation window of the JIF is two years. It favors fast-moving fields, where articles are cited quickly but also obsolesce fast. Journals in slower-moving fields, where citations don’t accumulate quite as fast, will have higher JIFs over longer time frames. If we look at the average JIFs for 200 chemistry journals (graph 3), we see that the five-year JIF curve is smoother, while the two-year curve varies widely. This means that journals with different two-year JIFs might have more similar impact over time.

“Letters” journals, where articles are usually short, tend to receive more citations within the two-year window. On the other hand, citations for review journals accumulate more slowly. However, reviews tend to get so many citations that even the fraction they receive in the short citation window gives review journals relatively high JIFs. Campanario (2011) compared two-year and five-year JIFs and found that a longer citation window increased the JIFs of about 72% of the journals, but lowered them for about 27%.
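A toy sketch of the window effect, with invented yearly citation trajectories for two hypothetical journals:

```python
# Citations received in years 1..5 after publication (invented numbers):
fast_field = [40, 35, 15, 5, 5]    # cited quickly, obsolesces fast
slow_field = [10, 15, 25, 25, 25]  # citations accumulate gradually

items = 50  # assume both journals published 50 "citable" items

for name, trajectory in [("fast field", fast_field), ("slow field", slow_field)]:
    jif_2yr = sum(trajectory[:2]) / items  # only the first two years count
    jif_5yr = sum(trajectory) / items      # the five-year variant
    print(f"{name}: 2-year JIF {jif_2yr:.2f}, 5-year JIF {jif_5yr:.2f}")
```

Both journals end up with the same five-year impact (2.00), yet the two-year JIF makes the slow-field journal (0.50) look three times weaker than the fast-field one (1.50).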

Graph 3: JIF measurement window fluctuations, 200+ Chemistry Journals (source: Amin & Mabe, 2007)


Journal self-citations

The debate about whether journal self-citations should be included in calculations of the JIF is an old one. Currently, self-citations aren’t excluded from JIFs, but journals with an exceedingly high level of self-citations are sometimes “punished” and excluded from the index for a while. The rate of journal self-citation varies by discipline and journal, but in general it’s about 20%; for a specialized journal, the figure might be higher. This is why editors sometimes write editorials with dozens of self-citations…
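To see how much a 20% self-citation rate can move the number, here is a minimal sketch with invented counts for one hypothetical journal:

```python
# Hypothetical counts for one journal's 2010 JIF:
citations_total = 500  # 2010 citations to 2008-2009 articles
self_citations = 100   # of those, citations from the journal to itself (20%)
citable_items = 200    # "citable" articles published in 2008-2009

jif_with_self = citations_total / citable_items
jif_without_self = (citations_total - self_citations) / citable_items

print(jif_with_self)     # 2.5
print(jif_without_self)  # 2.0
```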

In conclusion

The JIF is a crude index of a journal’s impact (I won’t go as far as to say quality). It was devised at a certain time in history for certain uses and, well, might have been blown out of proportion. Corrections have been suggested over the years, but most of them stayed in bibliometric journals rather than influencing the general scientific community. Many researchers see the JIF as a definitive measure, but people have to remember that it is only one tool in the box of science-measuring indices, and a journal’s JIF says very little about the quality of a single article or researcher. As Seglen (1997) said, “Evaluating scientific quality is a notoriously difficult problem which has no standard solution.”



Amin, M., & Mabe, M. (2007). Impact factors: use and abuse. Perspectives in Publishing.

Archambault, E., & Lariviere, V. (2009). History of the journal impact factor: Contingencies and consequences. Scientometrics, 79, 635-649. DOI: 10.1007/s11192-007-2036-x

Seglen, P. O. (1997). Why the impact factor of journals should not be used for evaluating research. BMJ, 314. DOI: 10.1136/bmj.314.7079.497

Kostoff, R. N. (2007). The difference between highly and poorly cited medical articles in the journal Lancet. Scientometrics, 72(3), 513-520. DOI: 10.1007/s11192-007-1573-7

Campanario, J. M. (2011). Empirical study of journal impact factors obtained using the classical two-year citation window versus a five-year citation window. Scientometrics. DOI: 10.1007/s11192-010-0334-1

Vanclay, J. K. (2012). Impact factor: outdated artefact or stepping-stone to journal certification? Scientometrics. DOI: 10.1007/s11192-011-0561-0

Hadas Shema About the Author: Hadas Shema is an information specialist at the Israeli Inter-University Center for E-Learning (Hebrew acronym: MEITAL). She has a B.Sc. in the Life Sciences and an MA and a PhD in Library & Information Science from Bar-Ilan University, Israel. Hadas tweets at @Hadas_Shema.


Comments
  1. Jerzy v. 3.0. 7:34 am 06/26/2012

    Isn’t it ironic that scientists perfect experimental protocols to the smallest detail, but the biggest protocol of all – evaluation of the whole scientific research by impact factor – remains very general and skewed?
  3. bswoger 10:14 am 06/27/2012

    One of the things that always concerns me is how often I hear or see comments about the JIF as a measure of the author. It really isn’t, but there seems to be a lot of misunderstanding out there.
  4. Hadas Shema 1:19 pm 06/27/2012

    I agree. Bibliometricians are always warning about the limitations of the impact factor, but does anyone listen? Nooo.
  5. Jerzy v. 3.0. 4:24 pm 06/29/2012

    I noticed that impact factor, among other shortcomings, de facto promotes fashionable, controversial, quickly forgotten topics over discoveries which are cited for decades to come.

    Come on, the true value of Einstein’s theory of relativity was not that it provoked short-lived controversy within 2 years, but that it stayed for many decades and is increasingly accepted over time.
  6. Hadas Shema 5:25 pm 06/29/2012

    You’re right – that’s exactly the case. Fashionable fields move faster and get more citations in the two-year window, but classics can be cited for decades, without it having any effect on their publishing journal’s IF.
