Raising the standard high
July 8, 2010 § 2 Comments
Jagesh Shah pointed out this Retrospective in MBoC (Drubin DG, Oster G. 2010 Experimentalist meets theoretician: a tale of two scientific cultures. Mol Biol Cell. 21 2099-101 PMID: 20444974), which tells the tale of a collaboration between theorists and experimentalists to investigate the mechanical forces involved in vesicle formation in endocytosis, an effort that resulted in an interesting theory paper (Liu J, Sun Y, Drubin DG, Oster GF. 2009 The mechanochemistry of endocytosis. PLoS Biol. 7 e1000204. PMID: 19787029).
You should remember, as you read, that the piece is published in a journal aimed at cell biologists. Some of the messages it delivers — theory really can be helpful! You can learn to talk to theorists if you try! — are primarily aimed at this audience. Other messages — mutual education, mutual respect, learn each other’s languages, patience, caffeine helps — are in my view generally applicable to collaborations of all kinds, including marriage (with the possible exception of the point about caffeine). But none the worse for that.
The most interesting issue comes up at the end. The authors, like many of our readers, ran into the problem that publishing an interdisciplinary paper is hard, and that the standards for publishing theory in biology are quite unclear: “Several reviewers wanted us to ‘prove’ that our model was true by performing additional experiments. But this was a modeling paper, not an experimental paper. So, what should a theory paper accomplish and what should be the criteria for evaluating such an article?” It seems that the paper was rejected at least once for this kind of reason; the authors comment about the difficulty of pleasing a combination of reviewers from experimental and theoretical backgrounds. Many of you have been in the same situation and can sympathize.
The question of what the standard should be for theory papers in biology is an important, though complicated, issue for systems biologists to discuss. One point to be clear on, though it will make some people grumble, is that the standard is going to be different depending on how much recognition you are seeking for the work you have done. The paper in question ended up in PLoS Biology, a very fine journal that is aimed at a wide audience. This causes me to suspect that the journals that chose not to accept the paper were also general journals, most likely one or more of Nature, Science, or Cell. And reviewers for these journals have a strong tendency to ask for something extra in a paper before they are willing to recommend acceptance.
Note, please, that I said “reviewers” in that last sentence, not “editors”. I’m talking about you, the scientific community. The editors of these journals (I used to be one of them) undoubtedly commit many sins, but the main sin you, the scientific community, complain about when you are wearing your author’s hat instead of your reviewer’s hat is the sin of listening to reviewers who are wrong. (According to you.) So, let’s all agree to be better reviewers, shall we? Let’s think about what kind of standard we want to set, and do our best to decide whether a paper meets that standard, and politely explain to the editors (and through them, the authors) either that the paper does meet the standard, and should be published, or that it doesn’t, and why not.
[Excuse me for a second, I feel a rant coming on. Please, would you all remember that as reviewers it is just as much your job to tell the editors of a journal why they should accept a paper as it is to tell them why they should reject it? Many, probably most, reviewers don’t do this. I blame journal clubs and paper-reading courses. The focus of graduate education is so firmly on teaching the student how to be a critical reviewer that the issue of how to be an appreciative reviewer is ignored. In my editing days, I was always surprised at how harsh young reviewers were relative to their older colleagues. In the intervening years I’ve realized that the explanation is that it takes a while to forget your training, and to appreciate that your job as a reviewer is not (or not most crucially) to pick holes, but to understand whether something new and important has been learned.
Another frustration was the tendency of reviewers to play “spot the next experiment”. Please, I wanted to say, I am not asking you whether there are more experiments along the same lines that could be done. This is biology. I already know that this paper isn’t the very last word on the subject. I want to know whether the conclusion is really important, and whether it’s well supported by the data. If the paper is about the discovery of a chemical inhibitor of protein X, which is believed to be important in disease Y, I do not need to be told that the final test of whether the inhibitor will cure disease Y would be a Phase III clinical trial. (I am not making this up. In case you’re curious, I told the authors that they didn’t need to address that comment before publication.)
Of course it’s often reasonable, nay, essential, to point out that more — sometimes much more — should be done before the authors are justified in reaching the conclusion. But as you review a paper, consider how you want your own papers to be reviewed. The standards you set will eventually — after averaging with all the other reviewers in your field, then adding random fluctuation — be applied to you. So set your standards high, but stay reasonable. And be polite.
End of rant. For now. Thank you for your patience.]
Back to theory in biology — what should the standard be? Drubin and Oster propose that “a theoretical model should organize the experimental facts, clearly state the assumptions on which it is based, make testable predictions, and present a (hopefully) new conceptual framework for thinking about a biological phenomenon. A model is not meant to be the final word on a subject but a beginning that invites empirical tests of its validity.” This is not a bad start, although it may be a bit narrow. I might be interested in reading a paper describing a new conceptual framework, even if the predictions it makes aren’t immediately testable. A theory that helps you understand the significance of experimental results (Student’s t-test, anyone?) could also be interesting. But papers that meet the proposed criteria seem very likely to be appropriate to publish in a journal that biologists would read. So that’s a start.
If I am right that the authors wanted to publish in Nature/Science/Cell, though, then the question is not what makes a theory paper appropriate for publication in a biological journal. Like it or not, the question to answer for those journals is this: what lifts a theory paper above the run of the mill?
Here’s my suggestion, and then let’s argue about it. I see two possibilities. The first is that your theory (or model, or simulation) provides a novel explanation for experimental results that are already in the literature (see, for example, the Chakraborty/Walker collaboration). This could be a specific insight, or it could be a whole new way of framing a problem. The second is that your theory makes a counter-intuitive prediction, which you have tested. (Just one. I completely understand Drubin and Oster’s irritation at being asked to “prove” their model — this is, of course, impossible. But proving it’s useful is a different matter.) The Kishony lab’s demonstration that a pair of drugs that antagonize each other’s effects select against drug-resistant strains is an example of a situation where a theoretical approach led the authors to ask an experimental question that nobody had thought to ask before. There may be other ways to demonstrate that your theory can significantly move our understanding of biology forward, but these are the two most obvious. In both cases, the subject of your theory should be of broad interest — and here we come to the potential sins of Nature/Science/Cell editors, which, as I said above, can be legion. But getting to the point of being rejected because the editors don’t understand your reviewers’ arguments in your favor would be a big step forward, wouldn’t it?
(By the way, in case of misunderstandings: none of this is intended to imply that the Liu et al. paper is run of the mill. It’s an interesting and unusual paper, and I have no idea why it ran into trouble with the referees. The authors deserve our gratitude for talking about this issue in public.)
You will note that I have said nothing about the quality of the theory qua theory. I think it is possible for the theory to be so utterly brilliant that it lifts the paper above the run of the mill without any need to appeal to experimental results. But when this happens I assume that the theoretical reviewers will argue (remember, this is part of your job as a reviewer) that the paper is so good that it deserves publication whatever the experimentalists think of it. Such papers no longer fall into the tricky category of work that must be judged by interdisciplinary criteria: they stand or fall as theory papers, and the sensible author should submit them as such.
Am I right, half-right, or horribly wrong? Should the standards be different for different kinds of non-experimental work, e.g. should physics be judged differently from computer simulation? Perhaps this is a discussion we should have at Theory Lunch one day, what do you think?
I have to say that I am rather suspicious about the idea of a “standard” for theory papers, not because I don’t think there ought to be high standards but because I rather suspect that the set of standards has only a partial order on it, at best. There may be different ways to achieve a very high standard. One paper may fit a complex model to a large set of noisy data (Spencer et al., Nature 459:428-433 2009); another may abstract a large and complex molecular network into a simple minimal model (Mettetal et al., Science 319:482-4 2008); another might prove a theorem rather than construct a model (Shinar & Feinberg, Science 327:1389-91 2010); another may develop a conceptual framework for analysing complex networks (Feret et al., PNAS 106:6453-8 2009). What these papers have in common is that they all provide new insights.

The problem is further complicated by the fact that those of us who come to biology from elsewhere bring with us the peculiar disciplinary prejudices of our own fields. Experimental biologists might think that “physical scientists” are all essentially the same kind of organism, but you only have to stick a physicist and a computer scientist in a room together and ask them “what is a model?” to find out that this is far from the case (as the warning notice used to say about fireworks in England, “light the blue touch paper and retire fast”). It is easy to be convinced that one’s particular background and particular experience of bumping into biology and experimental practice are unique, but this is never the case. I think one should keep this in mind when reading experiences like that of Drubin and Oster, however interesting they may be. An ecumenical view of systems biology allows for many different perspectives as to what theory can or ought to do. The really important issue, I think, is what Becky says in her rant.
If we do not adopt good standards as reviewers (and in this case, I firmly believe there is a total order!), then we have only ourselves to blame for the publication mess in which we find ourselves.
Thanks for the comment. I agree that it’s not easy, probably not possible, to set a single standard against which all theory/biology papers can be judged. That’s not the same as saying that discussing possible standards is pointless; and it’s not even a special property of theory/biology papers. It wouldn’t be easy to set a single standard for any field. I believe we should still discuss and think about what kinds of insights theory can provide, and what makes one theory/biology paper more interesting/important than the next. If we don’t have this discussion, each individual reviewer will draw the line based on their own interests. Of course they will still do this even after a vigorous debate on the value of diverse types of theoretical approaches, but at least they will do it with more awareness of the existence of other points of view.
IMHO, Drubin and Oster have identified a real issue, which is that many of the people who end up reviewing these papers feel that models can, and should, be proved before publication. The question the referees really want answered is whether a model is saying something important about biological reality, but the way they are asking it is wrong. Another way this comes up is when reviewers assert that a given model is “too complex” or “too simple”.
Incidentally — 3 of the 4 papers you selected to highlight were published in Science or Nature. Does that mean that those journals are actually doing a pretty good job in selecting good stuff, despite all our complaining? [Or does it mean that you picked the easiest examples?]