
The pain of not getting cited: oversight, laziness, or malice?


This article was published in Scientific American’s former blog network and reflects the views of the author, not necessarily those of Scientific American.


It’s time to republish this classic article from Tet Zoo ver 2 (originally published in September 2009). The problem I’m concerned with certainly hasn’t gone away, and in fact is on my mind right now since I’ve seen a couple of recent, egregious examples.

Those of us who publish technical research papers like to see our work cited by our colleagues. Indeed, it’s integral to one’s success as a researcher (whatever ‘success’ means) that others cite your work, in whatever context. You might not like to see the publication of a stinging attack that demolishes your cherished hypothesis and shows how your approach and data analysis (and maybe your overall philosophy, intellect and ability to write) are flawed, but the fact is that someone has at least read, and is citing, your work… and that’s still a sort of success. These days – sad to say – the ‘impact factor’ of your work (that is, the number of times it gets cited, and how quickly it wins those citations) is seen as an important measure of how ‘good’ your science is. Speaking as someone who works in a field where century-old monographs are still among the most-cited and most important works, where the accruing of tiny bits of data can sometimes (years later) enable someone to piece together evidence for a high-impact, gee-whiz bit of science, and where ‘high-impact’ papers are all but useless and frequently contain hardly any information, I think we can question the notion that ‘impact factor culture’ helps our science… However, I’ll avoid that can of worms for the time being.

So, when you see a publication that’s very relevant to your own research, and find yourself not getting cited (or, perhaps, horrendously and obviously under-cited), what do you do? I have no idea, and – other than making sure that the offending party are aware of said research – I’m not sure what you can do, so I’m not about to provide an answer. Instead I’m going to ask a question: why do some authors or research groups fail to cite research that looks especially relevant? Having suffered from a few separate recent cases of this sort of thing, I think I have some answers.

Genuine oversight

Authors do sometimes honestly fail to become aware of key papers. I was surprised when an author failed to cite a very relevant paper (by me) – one of only about three ever published on the subject in question. When asked about the oversight, he apologised and said he “would have cited your paper had I known about it”. My fault for publishing in an obscure journal with no online presence, perhaps. In another case the author was embarrassed, as the paper he’d missed was essentially identical in layout, theme and conclusions to his own (and, for complex reasons that I can’t discuss without giving too much away, there was no suspicion of plagiarism or malice or anything like that). Alas, none of us is omniscient, and even the best-informed, cleverest and best-read person may still not be aware of every single paper and article relevant to their field of special interest.

However, by using such things as Google and personal communication with other workers, one can generally get up to speed and ensure that nothing crucial has been missed. Furthermore, the excuse of oversight is becoming less believable and forgivable as PDF archives and online resources have become available, and as online communication and discussion have improved in general. In fact, I would go so far as to say that if, these days, an author has failed to cite a paper that directly overlaps with the focus of their own, there is generally good reason to suspect deliberate action. They are not citing you for a reason. Read on.

Acts of laziness

Of course, some authors are just lazy. Back in 2009 (when I originally wrote the text you’re reading now), I was surprised to see that two recent papers on a given subject (cough azhdarchid pterosaurs cough) both failed to cite another, very relevant paper on that same subject – one that was high-impact, open access, and highly visible thanks to extensive coverage in the media and on blogs and discussion boards. Oh, OK, I’m referring to Witton & Naish (2008).

I presume that in these cases the authors were lazy and didn’t read around on the subject. There’s not much we can do about that, though you’d expect the reviewers or editors to have brought the citation in question to the authors’ attention. See below for more on ‘editor apathy’.

Choosing not to give credit

Sometimes, things are a little less innocuous and I feel confident that authors deliberately choose not to cite relevant works, specifically because they don’t want to give credit to them. Let’s make one thing clear: there’s no need to cite all 171 published papers on the mechanics of alethinophidian snake feeding (or whatever); you can get by with citing the three recent, comprehensive reviews of the subject. But if you’re publishing a conclusion that matches that of a previous study, it seems only right to cite that study, rather than pretend that it doesn’t exist.

Furthermore, you should give credit to previous workers if they provided the key data, or the key argument, that inspired your own work. In two recent cases (post-2010), I’ve seen authors write about a given animal without citing the recent review articles that (without any question whatsoever) inspired their own papers. I don’t consider this technically correct, nor is it fair or ethical.

One more thing. There are papers out there that are execrable or fatally flawed (to use Drugmonkey’s parlance, they suck). If they overlap with the subject you’re writing about, what then? My thinking on this is that some papers really should be ignored. But others should be cited if they are specifically relevant to your argument or data: if, say, you only chose to investigate a given subject because you’d been spurred into action after reading said execrable paper.

Acts of malice, and editor apathy

What to do about personal animosity and acts of malice? It’s well known that, within many fields of science, there are warring factions, and there are definitely some researchers and research groups that deliberately ignore the publications of other authors and research groups. There are also personal vendettas and so on. In such cases, one might argue that editors and reviewers should make an effort to get the offending party to at least credit the work of their ‘opponents’: given that editors and reviewers should (in theory) be familiar with the field in question, they certainly can’t use the excuse that they’re unaware of these areas of animosity. In my former life as a technical editor I occasionally suggested to group x that they cite the work of group y, and they usually did so once the request had been made. Maybe more of this sort of thing should occur, and when it doesn’t happen, do we have editor apathy, or ignorance? If ‘editor apathy’ IS a problem – what the hell are those people doing working as editors? Not sure what to do about this – suggestions?

It would be nice if we didn’t have to worry about the constant quest for citations. But we do. Like I said at the start, it does, unfortunately, make a difference as goes employability and ‘research impact’ assessments and so on. Failing to cite the appropriate work of your colleagues is also just not fair. Standard technical papers (i.e., not those destined for Nature, Science or PNAS) are not so constrained in length that a handful of extra references in the bibliography, or a few extra citations in the body of the text, make any difference, so – unless a good argument can be made that paper x was ignored because there’s something fundamentally wrong with it – working scientists have an ethical obligation to accurately reflect the state of knowledge in their field.

Needless to say, other bloggers have covered the ‘why aren’t I being cited?’ issue as well, sometimes in more depth than I have here. Check out...

Refs - -

Naish, D. 2000. 130 years of tree-climbing dinosaurs: Archaeopteryx, ‘arbrosaurs’ and the origin of avian flight. The Quarterly Journal of the Dinosaur Society 4 (1), 20-23.

Witton, M. P. & Naish, D. 2008. A reappraisal of azhdarchid pterosaur functional morphology and paleoecology. PLoS ONE 3 (5), e2271. doi:10.1371/journal.pone.0002271

Darren Naish is a science writer, technical editor and palaeozoologist (affiliated with the University of Southampton, UK). He mostly works on Cretaceous dinosaurs and pterosaurs but has an avid interest in all things tetrapod. His publications can be downloaded at darrennaish.wordpress.com. He has been blogging at Tetrapod Zoology since 2006. Check out the Tet Zoo podcast at tetzoo.com!
