June 24, 2012
Despite its many faults (see part I), the Journal Impact Factor (JIF) is considered an influential index to a journal’s quality, and publishing in high-impact journals is essential to a researcher’s academic career.
Reminder: to calculate, for example, the 2010 JIF for a journal:

JIF (2010) = (citations in 2010 to articles published in 2008–2009) / (number of “citable” articles published in 2008–2009)
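In code, the calculation is just a ratio. This minimal Python sketch (the citation and article counts are invented for illustration) makes the two inputs explicit:

```python
def journal_impact_factor(citations_to_prior_two_years, citable_articles_prior_two_years):
    """Compute a JIF for a given year: citations received that year to
    articles from the two preceding years, divided by the number of
    "citable" articles published in those two years."""
    return citations_to_prior_two_years / citable_articles_prior_two_years

# Hypothetical journal: its 2008+2009 articles drew 450 citations in 2010,
# and it published 300 citable articles across 2008-2009.
jif_2010 = journal_impact_factor(450, 300)
print(jif_2010)  # 1.5
```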
The JIF started out as a tool to help librarians with subscription decisions, but its influence among authors, readers and editors has grown over time, and so has the scientific community’s interest: more and more papers have been written about the JIF over the last thirty years (graph 1).
Different field, different JIF
JIFs vary widely with the discipline. Journals dealing with specialized or applied areas will have, on average, lower JIFs than those in pure or fundamental areas (graph 2). The average number of references per article correlates with the citation impact of each field: biochemistry articles, for example, receive about twice as many citations as mathematics articles.
JIFs also correlate with the number of authors per article, because the more authors an article has, the better its chances of being self-cited. A study of Lancet articles found that even among articles published in the same journal, the most-cited articles had, on average, three to five times more authors than the least-cited ones. So the social sciences, with about two authors per article, have less citation impact than the fundamental life sciences, with more than four authors per article. The arts and humanities journals’ JIFs are quite pitiful, because scholars in those fields rarely cite journal articles: the highest 2010 JIF in the JCR Cultural Studies category is 0.867.
Thomson Reuters (TR) recently launched a new product, the Book Citation Index. It currently covers 30,000 books from publication year 2005 onwards, and 10,000 new books will be added every year. Of course, that means all book citations prior to 2005 will still go unnoticed, but it’s better than nothing, and we might finally see a bit of humanities coverage.
Drowning in a one-meter-deep (on average) pool
When researchers publish in high-impact journals, they enjoy the journals’ prestige even if their own articles are rarely cited, or not cited at all. It also works the other way around: a single well-cited article can raise a JIF considerably, especially in a small journal. A study of three biochemistry journals showed that 50% of each journal’s citations came from its 15% most-cited articles, and that the top half of articles were cited ten times as often as the bottom half. Articles published in the same journal can have completely different scientific impact (as measured by citations, of course).
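The skewness that study describes can be illustrated with a toy citation distribution (all numbers invented), chosen so that the top 15% of articles collect half of the journal’s citations:

```python
# Hypothetical per-article citation counts for one journal, most-cited first.
citations = [50, 30, 20, 10, 10, 10, 9, 9, 8, 8, 7, 7, 6, 6, 5, 4, 1, 0, 0, 0]

top_k = max(1, int(0.15 * len(citations)))       # top 15% -> 3 of 20 articles
share = sum(citations[:top_k]) / sum(citations)  # their share of all citations
print(f"Top 15% of articles collect {share:.0%} of the citations")  # 50%
```

A mean JIF computed over such a distribution says little about where any individual article will land.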
The two-year citation window
The standard citation window of the JIF is two years. It favors fast-moving fields, where articles are cited quickly but also become obsolete fast. Journals in slower-moving fields, where citations don’t accumulate as quickly, will have higher JIFs over longer time frames. If we look at the average JIFs for 200 chemistry journals (graph 3), we see that the five-year JIF curve is smoother, while the two-year curve varies widely. This means that journals with different two-year JIFs might have much more similar impact over time.
“Letters” journals, where articles are usually short, tend to receive more citations within the two-year window. The accumulation of citations for review journals, on the other hand, is slower. However, reviews tend to attract so many citations that even the fraction received within the short window gives review journals relatively high JIFs. Campanario (2011) compared two-year and five-year JIFs and found that the longer citation window increased the JIFs of about 72% of journals but lowered them for about 27%.
Journal self-citations

The debate about whether journal self-citations should be included in the JIF calculation is an old one. Currently, self-citations aren’t excluded from JIFs, but journals with an exceedingly high level of self-citations are sometimes “punished” and excluded from the index for a while. The rate of journal self-citation varies by discipline and journal, but in general it’s about 20%; for a specialized journal, the number might be higher. This is why editors sometimes write editorials with dozens of self-citations…
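To see how much self-citations can matter, here is a sketch (with hypothetical numbers) comparing a JIF computed with and without journal self-citations in the numerator. Note that the official JIF keeps them in; the corrected figure is only a supplementary variant:

```python
def jif(total_citations, citable_articles, self_citations=0):
    # Variant JIF: optionally subtract the journal's self-citations
    # from the numerator before dividing.
    return (total_citations - self_citations) / citable_articles

# Hypothetical journal: 500 citations to its prior two years' articles,
# 250 citable articles, and the ~20% self-citation rate mentioned above.
with_self = jif(500, 250)           # 2.0
without_self = jif(500, 250, 100)   # 1.6
print(with_self, without_self)
```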
The JIF is a crude index of a journal’s impact (I won’t go as far as to say quality). It was devised at a certain time in history for certain uses and, well, might have been blown out of proportion. Corrections have been suggested over the years, but most of them stayed in bibliometric journals rather than influencing the general scientific community. Many researchers see the JIF as a definitive measure, but it is only one tool in the box of science-measuring indices, and a journal’s JIF says very little about the quality of a single article or researcher. As Seglen (1997) put it: “Evaluating scientific quality is a notoriously difficult problem which has no standard solution.”
References

Amin, M., & Mabe, M. (2007). Impact factors: use and abuse. Perspectives in Publishing.
Archambault, E., & Larivière, V. (2009). History of the journal impact factor: Contingencies and consequences. Scientometrics, 79, 635-649. DOI: 10.1007/s11192-007-2036-x
Seglen, P. O. (1997). Why the impact factor of journals should not be used for evaluating research. BMJ, 314(7079), 497. DOI: 10.1136/bmj.314.7079.497
Kostoff, R. N. (2007). The difference between highly and poorly cited medical articles in the journal Lancet. Scientometrics, 72(3), 513-520. DOI: 10.1007/s11192-007-1573-7
Campanario, J. M. (2011). Empirical study of journal impact factors obtained using the classical two-year citation window versus a five-year citation window. Scientometrics. DOI: 10.1007/s11192-010-0334-1
Vanclay, J. K. (2012). Impact factor: outdated artefact or stepping-stone to journal certification? Scientometrics. DOI: 10.1007/s11192-011-0561-0