Tetrapod Zoology



Amphibians, reptiles, birds and mammals - living and extinct

The pain of not getting cited: oversight, laziness, or malice?

The views expressed are those of the author and are not necessarily those of Scientific American.





The author laments (a photograph taken at the University of Portsmouth back in September 2009).

It’s time to republish this classic article from Tet Zoo ver 2 (originally published in September 2009). The problem I’m concerned with certainly hasn’t gone away, and in fact is on my mind right now since I’ve seen a couple of recent, egregious examples.

Those of us who publish technical research papers like to see our work cited by our colleagues. Indeed, it’s integral to one’s success as a researcher (whatever ‘success’ means) that others cite your work, in whatever context. You might not like to see the publication of a stinging attack that demolishes your cherished hypothesis and shows how your approach and data analysis (and maybe overall philosophy, intellect and ability to write) are flawed, but the fact is that someone has at least read, and is citing, your work… and that’s still a sort of success. These days – sad to say – the ‘impact’ of your work (that is, the number of times it gets cited, and how quickly it accrues those citations) is seen as an important measure of how ‘good’ your science is. Speaking as someone who works in a field where century-old monographs are still among the most-cited and most important works, where the accruing of tiny bits of data can sometimes (years later) enable someone to piece together evidence for a high-impact gee-whiz bit of science, and where ‘high-impact’ papers are all but useless and frequently contain hardly any information, I think we can question the notion that ‘impact factor culture’ helps our science… However, I’ll avoid that can of worms for the time being.

Citation data for my own work (from Google Scholar): as is typical, my citations have built slowly over time. Note that I am not doing too badly (in terms of citations and the various metrics) for someone who is not employed in academia.
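As an aside on ‘the various metrics’: one of the commonest, the h-index, is simple enough to compute by hand from a list of per-paper citation counts. A minimal sketch, with invented citation counts purely for illustration:

```python
def h_index(citation_counts):
    """Return the largest h such that the author has h papers
    with at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # this paper still clears the bar
        else:
            break  # counts are sorted descending, so no later paper can
    return h

# Invented example: eight papers with these citation counts.
print(h_index([25, 8, 5, 3, 3, 2, 1, 0]))  # prints 3
```

Note how the metric rewards a body of consistently cited papers rather than one spectacular hit: a single 1,000-citation paper alone still gives an h-index of 1.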

So, when you see a publication that’s very relevant to your own research, and find yourself not getting cited (or, perhaps, horrendously and obviously under-cited), what do you do? I have no idea, and – other than making sure that the offending party is aware of said research – I’m not sure what you can do, so I’m not about to provide an answer. Instead I’m going to ask a question: why do some authors or research groups fail to cite research that looks especially relevant? Having suffered from five six seven eight a few separate recent cases of this sort of thing, I think I have some answers.

Genuine oversight

My obscure articles on the history of ideas about tree-climbing in non-bird dinosaurs are... obscure. Illustration of hypothetical/speculative dinosaurs from Naish (2000).

Authors do sometimes honestly fail to become aware of key papers. I was surprised when an author failed to cite a very relevant paper (by me) – one of only about three ever published on the subject in question. When asked about the oversight he apologised, saying that he “would have cited your paper had I known about it”. My fault for publishing in an obscure journal with no online presence, perhaps. In another case the author was embarrassed because the paper he’d missed was essentially identical in layout, theme and conclusions to his own (and for complex reasons that I can’t discuss without giving too much away, there was no suspicion of plagiarism or malice or anything like that). Alas, none of us is omniscient, and even the best-informed, cleverest and best-read person may still not be aware of every single paper and article relevant to their field of special interest.

However, using such things as Google and personal communication with other workers, one can generally get up to speed and ensure that nothing crucial has been missed. Furthermore, the excuse of oversight is becoming less believable and forgivable as pdf archives and online resources have become available, and as online communication and discussion have improved in general. In fact, I would go so far as to say that if, these days, an author has failed to cite a paper that directly overlaps with the focus of their own, there is generally good reason to suspect deliberate action. They are not citing you for a reason. Read on.

Acts of laziness

Suppose you're publishing a paper on the behaviour and ecology of azhdarchid pterosaurs. Could it, perchance, be appropriate to cite a paper titled 'A Reappraisal of Azhdarchid Pterosaur Functional Morphology and Paleoecology', hm? Just sayin'. Image by Mark Witton.

Of course, some authors are just lazy. Back in 2009 (when I originally wrote the text you’re reading now), I was surprised to see that two recent papers on a given subject (cough azhdarchid pterosaurs cough) both failed to cite another, very relevant paper on that same given subject that was high-impact, open access, and had high visibility thanks to extensive coverage in the media and on blogs and discussion boards. Oh, ok, I’m referring to Witton & Naish (2008).

I presume in these cases that the authors were lazy, and didn’t read around on the subject. There's not much we can do about that, though you’d expect the reviewers or editors to have brought the citation in question to the authors’ attention. See below for more on ‘editor apathy’.

Choosing not to give credit

Sometimes, things are a little less innocuous and I feel confident that authors deliberately choose not to cite relevant works, specifically because they don’t want to give credit to them. Let’s make one thing clear: there’s no need to cite all 171 published papers on the mechanics of alethinophidian snake feeding (or whatever); you can get by with citing the three recent, comprehensive reviews of the subject. But if you’re publishing a conclusion that matches that of a previous study, it seems only right to cite that study, rather than pretend that it doesn’t exist.

Furthermore, you should give credit to previous workers if they provided the key data, or key argument, that inspired your own work. In two recent cases (post-2010), I’ve seen authors write about a given animal without citing the recent review articles that (without question whatsoever) inspired their own papers. I don’t consider this technically correct, nor is it fair or ethical.

Some papers really are terrible and should generally be ignored (this comment is in no way necessarily connected to the image it accompanies, but might be). However, even terrible papers should be cited if their existence was integral to your research.

One more thing. There are papers out there that are execrable or fatally flawed (to use Drugmonkey’s parlance, they suck). If they overlap with the subject you’re writing about, what then? My thinking on this is that some papers really should be ignored. But others should be cited if they are specifically relevant to your argument or data: if, say, you only chose to investigate a given subject because you’d been spurred into action after reading said execrable paper.

Editor apathy

What to do about personal animosity and acts of malice? It’s well known that, within many fields of science, there are warring factions, and there are definitely some researchers and research groups that deliberately ignore the publications of other authors and research groups. There are also personal vendettas and so on. In such cases, one might argue that editors and reviewers should make an effort to get the offending party to at least credit the work of their ‘opponents’: given that editors and reviewers should (in theory) be familiar with the field in question, they certainly can’t use the excuse that they’re unaware of these areas of animosity. In my former life as a technical editor I occasionally suggested to group x that they cite the work of group y, and they usually did so once the request had been made. Maybe more of this sort of thing should occur, and when it doesn’t happen do we have editor apathy or ignorance? If ‘editor apathy’ IS a problem – what the hell are those people doing working as editors? Not sure what to do about this – suggestions?

Dead cat on a railway line. It's a metaphor.

It would be nice if we didn’t have to worry about the constant quest for citations. But we do. Like I said at the start, it does, unfortunately, make a difference as regards employability and ‘research impact’ assessments and so on. Failing to cite the appropriate work of your colleagues is also just not fair. Standard technical papers (i.e., not those destined for Nature, Science or PNAS) are not so constrained in length that a handful of extra references in the bibliography, or a few extra citations in the body of the text, make any difference, so – unless a good argument can be made that paper x was ignored because there’s something fundamentally wrong with it – working scientists have an ethical obligation to accurately reflect the state of knowledge in their field.

Another of my oft-heard laments.

Needless to say, other bloggers have covered the ‘why aren’t I being cited?’ issue as well, sometimes in more depth than I have here. Check out…

Refs – -

Naish, D. 2000. 130 years of tree-climbing dinosaurs: Archaeopteryx, ‘arbrosaurs’ and the origin of avian flight. The Quarterly Journal of the Dinosaur Society 4 (1), 20-23.

Witton, M. P. & Naish, D. 2008. A reappraisal of azhdarchid pterosaur functional morphology and paleoecology. PLoS ONE 3 (5): e2271. doi:10.1371/journal.pone.0002271

Darren Naish About the Author: Darren Naish is a science writer, technical editor and palaeozoologist (affiliated with the University of Southampton, UK). He mostly works on Cretaceous dinosaurs and pterosaurs but has an avid interest in all things tetrapod. His publications can be downloaded at darrennaish.wordpress.com. He has been blogging at Tetrapod Zoology since 2006. Check out the Tet Zoo podcast at tetzoo.com! Follow on Twitter @TetZoo.

Comments (39)
  1. BilBy 6:22 am 02/10/2014

    “I think I’ve cited you…” Once heard at a conference as a lead up to an attempted (and failed) seduction.

  2. Andreas Johansson 6:55 am 02/10/2014

    The caterpillar hybridogenesis paper reminds me it’s been a while since Darren did a piece on … heterodox hypotheses. Caterpillars may have too many legs, but pieces like the one on initial bipedalism (to stick with the wrong-number-of-legs theme) are great fun.

    I can’t make out the labels in the cladogram on the blue monograph, but it seems to show classical theropods as inside Aves? Someone please enlighten me.

  3. naishd 7:01 am 02/10/2014

    Interest in ‘non-standard hypotheses’ duly noted :)

    The other paper in the bin is James & Pourtless (2009). You can read a fair bit about it in this ver 2 Tet Zoo article. Here are my specific thoughts, from the comment thread there…

    As for James & Pourtless (2009): these authors use cladistics to test the hypothesis that birds are deeply nested within coelurosaurian theropods, and argue that they use an unbiased approach where non-dinosaurian archosaurs and other reptiles are included too (they include Longisquama among archosaurs for some reason, and even imply that it’s a proto-bird [p. 37]). The paper is full of really weird claims (e.g., that theropods can only be diagnosed by their intramandibular joint) and does a lot of stuff that’s bound to skew the results: they coded all characters of disputed homology as ‘unknown’ (p. 14), for example (and, as usual among those disputing the theropod affinities of birds, they ignore evidence showing that the disputes about homology are erroneous anyway). This is wrong because it makes an a priori assumption about homology, and it introduces loads of new question marks in the matrix for character states where we do have data. Furthermore, the choice of taxa is weird: it’s wrong to analyse theropods and other archosaurs without including at least some non-theropod dinosaurs. Finally, the trees they generated are entirely uninformative (they are mostly polytomies) and don’t provide support for any hypothesis, so quite how the authors can say that they found weaknesses in the ‘birds are theropods’ hypothesis is really not apparent. As an impartial test of archosaur phylogeny, this study fails miserably.

  4. Andreas Johansson 7:13 am 02/10/2014

    Thanks, Darren. :)

  5. JoseD 12:47 pm 02/10/2014

    @Naishd

    Many thanks for re-posting this article as it reminds me of Roach &amp; Brinkman 2007. I’ve complained about said paper’s many problems elsewhere (go here), but it’s worth mentioning here that, when I originally read it, I couldn’t help but think of what Hone said: “Someone either wasn’t citing them deliberately (very poor practice) or hadn’t read them (poor scholarship) neither of which is a good thing to be” (In this case, “them” = relevant papers that contradict their arguments: go here). For example, it’s implied that lone adult Komodo dragons can kill prey 10x their size w/”only serrated teeth”, the logic being that lone adult Deinonychus would’ve done the same. However, it’s been known since 2005 that the former are venomous (go here), hence why they can kill prey 10x their size. Also, it’s claimed that, among extant archosaurs, “truly cooperative, coordinated hunting behavior is seen in only a few species of diurnal raptors”, ignoring ground hornbills (Fitz Simmons 1962), corvids (Maser 1975), &amp; shrikes (Frye &amp; Gerhardt 2001). There are plenty more where those came from (both in terms of “said paper’s many problems” &amp; “relevant papers that contradict their arguments”).

  6. David Marjanović 1:00 pm 02/10/2014

    I think we can question the notion that ‘impact factor culture’ helps our science…

    ~:-| It was never meant to be. It’s meant to make life easier for overworked hiring committees faced with 100 candidates for 1 job: the idea is that, instead of having to read 500 papers to determine how good each candidate is, they can use the impact factor as a proxy.

    Unfortunately, they often actually do that…

    If ‘editor apathy’ IS a problem – what the hell are those people doing working as editors? Not sure what to do about this – suggestions?

    I think editors are apathetic when they’re overworked. A contributing factor to this is that they’re not paid for being editors (or, in a few cases, paid a purely symbolic amount); they also have to publish and often to teach in order to “put some food on their family”, so they can’t devote enough time to working as editors. Perhaps, then, the solution is for publishers to pay editors a salary that’s enough for at least a part-time job.

    *waits 5 minutes till everyone has stopped laughing at full volume*

    Yeah, not gonna happen.

    “I think I’ve cited you…” Once heard at a conference as a lead up to an attempted (and failed) seduction.

    *giggle*

  7. ectodysplasin 4:04 pm 02/10/2014

    I think there’s a worthwhile discussion over whether editors of journals should get involved in editing content. A lot of the open access journals are moving away from aggressive content editing, both because it takes a lot of work and because it has the potential to impinge on an author’s ability to communicate their ideas more directly. I’m not sure this is necessarily a good thing but this is definitely something that’s happening.

  8. vdinets 4:14 pm 02/10/2014

    JoseD: I have a paper under review about fully cooperative, coordinated, collaborative etc. hunting by crocodiles. Phylogenetic bracketing suggests that tyrannosaur hands were used for sign language during hunting :-)

  9. Heteromeles 5:43 pm 02/10/2014

    Hmmm. Perhaps name and shame is a better way to go? Goodness knows I’ve missed citations. That’s why I use an online pseudonym–out of shame (actually that’s not the reason at all, but it’s a real embarrassment on my part). Still, if reputation is everything, don’t some deserve a reputation for not being as thorough as they could be?

  10. BilBy 7:13 pm 02/10/2014

    Akshully, I received an ms. to review on a subject on which I have published a biggish review. The editor, who I know, said in their email ‘they don’t cite you, though they should – you should slip that in’. Some editors can be helpful.

  11. ectodysplasin 7:18 pm 02/10/2014

    @heteromeles,

    Hmmm. Perhaps name and shame is a better way to go?

    Meh.

    Name and shame has its place, but I think that its place is in condemning actual bad behavior. Sometimes not citing a paper is the opposite of bad behavior, such as situations where a paper is not available due to paywalls and a scholar doesn’t have access to it, or situations where a scholar cannot read a paper in a language other than their own and doesn’t want to make false attributions.

    Ultimately, citations are a record of where the claims you make have come from. They exist so someone can look up your claims and find the basis for them. Citation has taken on a sense of professional courtesy to other workers, of saying “yes, I read your paper and I think it’s important”, but that’s not its primary purpose. Is it annoying when you write what you consider to be an important paper on a subject and are not cited? Sure. Is it the end of the world? No.

    And in cases where citations are left out intentionally, sometimes it is to avoid getting into complexities or arguments within a paper that distract from the general point of the research report. Remember, the context portions of the paper (intro and discussion) serve to give the reader context, but the purpose of the paper itself is the methods/results, which typically have limited citations because the data is mostly novel.

  12. naishd 7:22 pm 02/10/2014

    Name and shame? Much as I’d like to, I’d get myself into trouble.

  13. ectodysplasin 7:40 pm 02/10/2014

    I want to name Darren Naish and shame him for making the above post

  14. naishd 5:26 am 02/11/2014

    Thanks to all for interesting comments.

    ectodysplasin: re comment # 11, sure, we all know that it is sometimes not necessary to cite more than the bare minimum, nor is it wise to cite references that might distract the readers from your main point, etc etc etc. I think it’s pretty clear, however, that the article above concerns those cases where specifically relevant studies – even key studies that inspired the paper concerned – are not cited.

    And — you want to “shame” me for making this post? Seriously, is this something we shouldn’t be talking about? My impression has always been that it’s something that needs to be discussed more openly. We should be calling out bias and mispractice, not pretending that everything is ok because there are always other explanations.

  15. naishd 6:42 am 02/11/2014

    (cough cough… apologies if my sarcastic-o-meter is broken today!!).

  16. David Marjanović 9:19 am 02/11/2014

    Ah. Turns out I’m still signed in here in the museum. At home, SciAm logged me out 2 or 3 days ago – and pretends there’s no account associated with my e-mail address!

  17. ectodysplasin 12:08 pm 02/11/2014

    @Darren,

    ectodysplasin: re comment # 11, sure, we all know that it is sometimes not necessary to cite more than the bare minimum, nor is it wise to cite references that might distract the readers from your main point, etc etc etc. I think it’s pretty clear, however, that the article above concerns those cases where specifically relevant studies – even key studies that inspired the paper concerned – are not cited.

    In the case that the study itself inspired a research effort, then yes, it should be cited, and overlooking it is a problem. But in many cases, relevant studies are excluded for a whole range of reasons that do not necessarily indicate bad behavior. Personally, I’d prefer to assume that all my colleagues are generally decent and intellectually honest people who sometimes forget things or otherwise have good reason for doing what they do. I assume this is your approach as well. But when people mention social repercussions for such mistakes, I feel like that means people are going into this looking for evidence of misbehavior, rather than looking at quality of research, and that worries me.

    And…

    (cough cough… apologies if my sarcastic-o-meter is broken today!!).

    Oh no, I was totally serious, but was referring to comment #12.

    ;)

  18. naishd 12:19 pm 02/11/2014

    Thanks for the comment, ectodysplasin. I agree with your comments and will restate what I hoped was clear from the article: “there’s no need to cite all 171 published papers on the mechanics of alethinophidian snake feeding (or whatever); you can get by with citing the three recent, comprehensive reviews of the subject” (in other words, we should never expect colleagues to cite each of our cherished papers on every occasion), and it is those cases where authors have deliberately, knowingly excluded the citation of studies that inspired their own that are the problem.

    “Social repercussions”? Yeah, you better cite me or I won’t invite you to my parties!

  19. naishd 12:38 pm 02/11/2014

    Ok: for fear of looking like a douchebag (or, more of a douchebag, anyway), I’ve deleted the offending ‘social repercussions’ clause. It wasn’t needed.

  20. ectodysplasin 12:42 pm 02/11/2014

    @Darren,

    I agree with your comments and will restate what I hoped was clear from the article: “there’s no need to cite all 171 published papers on the mechanics of alethinophidian snake feeding (or whatever); you can get by with citing the three recent, comprehensive reviews of the subject” (in other words, we should never expect colleagues to cite each of our cherished papers on every occasion), and it is those cases where authors have deliberately, knowingly excluded the citation of studies that inspired their own that are the problem.

    Yes. I didn’t really have a problem with what you said. I was just checking in mid-comment train to remind folks that citations are missed for all sorts of legitimate reasons, including reasons that may not occur to someone.

    In addition, I’d generally prefer to see people cite classic and foundational papers rather than most recent reviews, because those foundational papers are the ones that contain the actual data, rather than interpretations of questionable source that have been repeated throughout the literature. The aforementioned citation on V. komodoensis feeding behavior, for example.

    As for this:

    “Social repercussions”? Yeah, you better cite me or I won’t invite you to my parties!

    Oh yeah? Then, as the internet advises, I’ll start my own parties. With blackjack and hooknosed snakes.

  21. Yodelling Cyclist 1:45 pm 02/11/2014

    Right, this is similar to a personal problem I’m having here, and I throw it to the scientific group. I’m currently writing something up involving the oxidation of nickel (a subject that has been studied incessantly since the 1920s), and I’m confronted by some authors (1970s vintage) citing work from the 1960s – to which I have no access without incurring undesired cost. Based on abstracts, the 60s results have subsequently been reproduced, expanded on, and higher-resolution data obtained and published. Reviews are not available.

    My options appear to be :
    1.) (Unappealing) Cite anyway without reading.
    2.) (Still pretty bad) Ignore references.
    3.) Significantly delay (and miss significant deadline) while I hunt further.

    Advice?

  22. ectodysplasin 1:48 pm 02/11/2014

    4) ask friend/colleague with library access to those papers for a copy of the PDF.

  23. Yodelling Cyclist 1:50 pm 02/11/2014

    Oh, I live by 4, that’s been tried. Unless anyone here has access to this:

    http://dx.doi.org/10.1063/1.1753788

  24. Yodelling Cyclist 1:53 pm 02/11/2014

    and this:

    http://dx.doi.org/10.1063/1.1753916

  25. ectodysplasin 2:11 pm 02/11/2014

    Done and done. What’s your email?

  26. Yodelling Cyclist 2:22 pm 02/11/2014

    I love the internet, and thank you.

    Without wishing to be any weirder, is there a secure way I can send my contact details?

  27. ectodysplasin 2:26 pm 02/11/2014

    Email it to Darren and have him pass it along to me, I guess.

  28. ectodysplasin 2:31 pm 02/11/2014

    Also I note that one of the authors on the second paper is an H. Farnsworth. That’s now not one but two Futurama references in this comment.

  29. Yodelling Cyclist 2:33 pm 02/11/2014

    Yes, I must admit to a frisson of joy when I realised I could cite a real H. Farnsworth.

    Emailing Darren.

  30. naishd 3:08 pm 02/11/2014

    Aaaaaand… thank you for taking us comfortably over 23 comments. We move on.

    Though, feel free to continue the discussion here if you wish.

  31. ectodysplasin 3:20 pm 02/11/2014

    One thing I do have to wonder about is whether the citation index approach is actually harming basic science and data reporting. Go back 30-40 years, and you’ll see very short reference lists…authors cited what they needed to and nothing more….10-15 papers, generally, max. Nowadays, we cite about 3-4 times that many, sometimes. Does this actually encourage or discourage good scholarship? Do we need to look into alternative metrics a la PLoS (view counts, download counts, etc) or novel methods e.g. a crowdsourced rating system for usefulness? Or, in the case of review papers, perhaps what we need are separate “lit cited” and “bibliography” sections, the former to provide sources of information directly in the text and the latter to more or less comprehensively report on the papers published on a subject?

    Lots to mull over here.

  32. Heteromeles 7:08 pm 02/11/2014

    @31 ectodysplasin: It seems like one of those relative growth rate problems, in that the big foundational papers on a topic often don’t have a lot of other references, but as the field elaborates and topics diverge (not unlike this blog, for instance), you kind of need to reference more and more just to position your paper within the expanding publication list within the field. One could also argue (following in Joseph Tainter’s footsteps) that there’s a decreasing marginal return on research later in a field, where you have to do more work per publishable paper, the older the field gets and the more that the easy questions have been answered.

  33. BilBy 8:46 pm 02/11/2014

    On a related note to citations: I have just seen a published short note in an international journal which I reviewed twice for them. In both cases they buggered up citing authors in the reference list, moving letters round, getting one name completely wrong. Both times I pointedly corrected them. The published paper has the same fkn errors, which probably means that I, and the authors of at least two other papers, now don’t get those sweet, sweet citations. Should I go raise merry hell with the Journal or just go sulk? Or both.

  34. sharpelynda 3:59 am 02/12/2014

    There are other reasons for deliberately excluding a relevant citation. I’ve been advised that if there’s someone who you’d prefer not to review your paper (e.g. they have a reputation for being particularly critical or are personally opposed to your theory, etc.), don’t cite their work (or at least don’t include it in the first couple of paragraphs). Editors tend to select referees from these initial important citations.
    Oh ain’t science grand!

  35. Yodelling Cyclist 8:40 am 02/12/2014

    Not desperately relevant to the conversation, I’d just like to publicly thank ectodysplasin and Darren for being stand up chaps.

  36. David Marjanović 10:07 am 02/12/2014

    Oh no, I was totally serious, but was referring to comment #12.

    Oh, a nomenclatural misunderstanding! :-) This isn’t a forum, where everything people write is a post among equals; it’s a blog, with a post and comments. :-)

    Also I note that one of the authors on the second paper is an H. Farnsworth.

    I want to live on this planet again.

    Go back 30-40 years, and you’ll see very short reference lists…authors cited what they needed to and nothing more….10-15 papers, generally, max. Nowadays, we cite about 3-4 times that many, sometimes. Does this actually encourage or discourage good scholarship?

    I don’t think people often cite papers to do their authors a favor (self-citations excluded). I was taught to cite everything I need but no more, simply to save space. I’ve even done the “[author, year], and references therein” thing, which doesn’t do the readers a disservice, but doesn’t count towards the impact factor of those references or the journals they’re in.

    I do think reference lists generally become longer, and have to. The older a field, the more giants there are on whose shoulders you stand (or trample) – and their number increases exponentially over time as more and more universities and journals are founded. A few become irrelevant to a primary research paper because they’re only of historical interest anymore, so they drop out of the list of papers you should cite, but at the other end many more come in!

    A review paper, on the other hand, should ideally cite the entire field. On several occasions, review papers have made me aware of important literature I had completely overlooked (if only because it was in some journal I had never seen).

    The published paper has the same fkn errors, which probably means that I, and the authors of at least two other papers, now don’t get those sweet, sweet citations. Should I go raise merry hell with the Journal or just go sulk? Or both.

    Don’t journals routinely publish corrections for this kind of thing? Write to the authors that they should submit one to the journal; if they decline, write to the editor.

    Editors tend to select referees from these initial important citations.
    Oh ain’t science grand!

    Quite. Editors select referees from the experts in the field, and they don’t know most fields (nobody does), so they try to deduce from the manuscript itself who the greatest experts could be.

    That said, it is common practice to mention in your cover letter who you don’t want to see your manuscript.

  37. ectodysplasin 7:18 pm 02/12/2014

    @Heteromeles;

    It seems like one of those relative growth rate problems, in that the big foundational papers on a topic often don’t have a lot of other references, but as the field elaborates and topics diverge (not unlike this blog, for instance), you kind of need to reference more and more just to position your paper within the expanding publication list within the field.

    This generally corresponds to more lengthy and densely-referenced introductions, though. In many cases, this is not because there is any less information being presented, but rather because there’s a whole lot more expectation that prior work will be discussed in depth. So instead of saying “here is a paper that supports what I’m stating here” you get a list of 3-7 citations all in a row representing a rather thorough discussion of the problem in all its glory and detail.

    Not saying the latter is a bad thing, but it does indicate a change in practice over the past 30 years or so.

  38. Andrej35897561 7:07 am 02/13/2014

    Dear Darren

    I think you should relax a bit on this issue. Ignorance is not as bad a thing as it is portrayed in this post. “All knowing”, on the other hand, is not the best science strategy either. Some of the most acknowledged evolutionary biologists said that they are ignorant about a lot of important issues, yet their work proved their creativity (type “JOHN MAYNARD SMITH: Seven Wonders of the World” into YouTube and hear it from him) and produced enormous breakthroughs in science. So, think about this in evolutionary terms. Ignorance works as a randomizing agent, and as we know, the potential rate of evolution is proportional to the heritable variance of a population. It would be horrible and devastating for science if editors were as zealous as you propose them to be. Science needs ignorance and errors in order to ensure creativity. Though, of course, a balance should be kept between knowledge and the lack of it.

  39. naishd 7:09 am 02/13/2014

    Err, thanks… though I can’t help but feel that you’ve missed the point.

