Why boycott Elsevier?
February 23, 2012
Lots of people have suggested that I write something about the recent effort to boycott Elsevier. I don’t usually like to write about the politics of science, mostly because I don’t have much to say that hasn’t already been said. For this particular bit of science politics, though, the problem may be that I have too much to say. I’ve been in and out of science publishing for much of my working life. I worked for a company that was acquired by Elsevier, and left the company (and publishing) as an indirect consequence of the acquisition. Two of the journals I put blood, sweat and (occasionally) tears into getting started are now published by Elsevier. I have friends who still work for Elsevier. I have friends who left Elsevier-acquired journals for Open Access journals. Nothing I might say can be considered unbiased. At the same time, perhaps I know a bit more about the topic than some of the other people writing about it. Or at least I know different things. So be warned: this may be a long one.
For those who haven’t heard about the Boycott Elsevier movement, it started when the Fields medalist Timothy Gowers announced a policy of non-violent non-cooperation with Elsevier. The trigger, I think, was Elsevier’s support of the Research Works Act, which as I have mentioned before aims to prevent the US government from encouraging or requiring open access to government-funded research. Gowers publicly announced that he would not publish in Elsevier journals, nor would he serve on editorial boards or as a reviewer for them. In singling out Elsevier for this treatment, he pointed out that Elsevier is especially rapacious in the matter of charging for their journals, and uses a sharp business practice called “bundling” (forcing libraries to buy bundles of journals in order to get the most popular titles) to maximize the subscription fees they receive. Because Elsevier, through luck or good judgment or shrewd acquisition, has quite a number of popular journals in its stable, it can bully libraries in this way quite effectively. It is a very profitable company: in 2010 it had revenues of $3.2 billion, of which 36% was profit. That puts Elsevier way ahead of the average publishing company, which (in 2008) made an average return of 7.9%.
Gowers points out that it’s not useful to characterize Elsevier’s actions as immoral: they are a large company, they have shareholders, and in a capitalist world there is nothing unusually evil about doing your legal best to maximize your profits and protect your interests, including lobbying Congress to pass laws that work in your favor. But he is personally offended by Elsevier’s abuse of power, however legal and corporately appropriate it may be, and so he is, in essence, withdrawing his labor. At the time of writing, over 6700 people have promised to do the same; 1275 of these signatories identify themselves as mathematicians, and 994 as biologists.
Of course it’s absolutely reasonable and right to withdraw your support from any enterprise that morally offends you. I take my virtual hat off to Dr. Gowers and his fellow travellers. Considered as an effort to damage Elsevier and thus change the world of scientific publishing, though, this movement brings to mind a couple of questions. The first is: when is it OK for a publisher to make money on publishing scientific journals, and, when profit is OK, how much profit is reasonable? The second is: does this effort have a chance of working? Neither of these questions has a simple answer, to my mind. It may be best to take the second first. So, how many scientists would need to work to rule for Elsevier to be hurt?
Elsevier publishes journals in all areas of science, and the answer is probably different for each individual field. In biology, one question is how much community shunning it would take to hurt Elsevier’s top journal, Cell. Cell is generally felt to be one of the top three journals in biology (known variously as “the glamor mags”, SNC (for Science, Nature, Cell) or “the rejectionist journals”). I’ve never worked for Cell, though some of my friends have, but I did work for Nature once upon a long-ago time. At the time, Nature was receiving about 120 biology manuscripts a week. My guess is that probably ~30-40 of these papers could have been published in Nature without raising any eyebrows, but because of a strict page budget set by the publisher we were only able to accept about 12 of them (roughly 1 in 3 of the publishable papers, and only 1 in 10 of all submissions). Cell’s numbers probably aren’t too different, suggesting that even if ~1/2 the scientific community decided not to publish in Elsevier journals, Cell would still happily be able to fill its pages. There would eventually be an effect on quality, but it would take more effort than you might think to get to a point where it’s noticeable.
Another question is how much it would take to damage one of Elsevier’s less dominant publications, of which there are hundreds. In mathematics, the community decided that the journal Topology was ridiculously overpriced, and the editorial board for the journal resigned en masse — an apparently fatal blow. Could an effort like this be effective for biology journals?
Perhaps. We need to remember that mathematical journals in general are far less powerful than biological journals. The mathematics community is much smaller, and it’s easier to know for sure who contributed what and how important the contribution was. A preprint on arXiv is just as visible as a paper in a journal (if not more so). The mathematics community decided it could do without Topology, and was able to take coherent action to get rid of it and move on. It wouldn’t surprise me at all if there were Elsevier biology journals that the biology community could do without, but it seems to me that the current boycott effort is too diffuse to be likely to be successful.
If one wishes to send a message to Elsevier that many of their journals are overpriced, perhaps the most successful strategy would be to attempt to copy what happened to Topology for a selected few journals (~10) for which there seems to be little justification. For example, one might choose journals that compete with a perfectly good society journal or open access publication, or both, and that are noticeably overpriced given the quality of the journal. The goal would be to persuade the entire editorial board to resign and spread the word throughout the biological community that nobody should sign up to replace them, and nobody should submit or review papers. If enough of the community agrees that these specific journals are unnecessary and overpriced, there might then be a real chance of killing them. (If not, a new editorial board will sign right up, and many people won’t even notice the change. In a community as large as the biological community, with commensurately diverse opinions, this seems to me to be a real danger.)
I think a strategy like this might have some chance of success, and of delivering a message that will be heard. My guess is that the current boycott — even though it’s generating lots of buzz, and articles in the NYT, and opinion pieces in the Boston Globe comparing it, bizarrely, to the Arab Spring — has little chance of making any difference to Elsevier in biology, though the situation could be very different in mathematics. A targeted approach doesn’t sound as grand as boycotting the whole of Elsevier, and has the further disadvantage that it requires consultation and agreement about which journals are most egregious before anything can be done. But if it works, then one could go on to the next 10 most unnecessary and overpriced journals, until the only journals that are published are those whose prices don’t generate too much outrage.
Which leads me back to the first question: when is it OK for publishers to make a profit on a journal, and how much profit is it OK for them to make? What (if anything) does the community want from publishers, and how much are we willing to pay for it?
Lots of factors go into the answer to this question, and reasonable people will reasonably disagree about how to weigh these factors. One factor is the rising total cost of the journals an average library wants to buy. This is partly caused by inflation and/or rapacious publishers, but also partly caused by a growth in the number of papers being published. In 1996, the US published ~86,000 papers on medicine. In 2010, that number had grown to ~140,000. China’s medical publications jumped from ~2,000 to ~28,000 (data from SCImago); not all of these are in publications the West pays attention to, but some are. You’d expect an increase of 60% or more in medical library budgets just because of this growth in numbers, which puts serious downward pressure on the sustainable cost of any individual publication. To me, this is one of the strongest economic (as opposed to moral) arguments in favor of the “author pays” model of publishing: including the cost of publication in the cost of the research is perhaps the only way to make library budgets predictable. On the other hand, “author pays” shifts costs around in ways that may not be entirely beneficial to academia. For example, under this model many large pharmaceutical companies, which consume a lot of papers but produce relatively few of them, will pay far less than they do now.
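The budget arithmetic above is easy to check for yourself. A quick sketch, using the SCImago paper counts quoted in the paragraph (the assumption that library budgets scale roughly with the number of papers published is the simplification being made):

```python
# Back-of-the-envelope check of the publication-growth figures quoted above.
# Paper counts are the SCImago numbers from the text; everything else follows
# from simple arithmetic.
us_1996, us_2010 = 86_000, 140_000   # US medical papers
cn_1996, cn_2010 = 2_000, 28_000     # Chinese medical papers

us_growth = (us_2010 - us_1996) / us_1996   # ~0.63, i.e. a ~63% increase
cn_growth = (cn_2010 - cn_1996) / cn_1996   # a 13x increase (14-fold in total)

print(f"US medical papers grew {us_growth:.0%}")      # prints "US medical papers grew 63%"
print(f"China's medical papers grew {cn_growth:.0%}") # prints "China's medical papers grew 1300%"
```

If budgets really did track paper counts, US output alone would imply libraries spending over 60% more in 2010 than in 1996, before any price increases by publishers.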
Another factor is the internet, and the way it’s changing what we need from publishing. In the old days a publisher offered a couple of obviously important services: distribution (printing, mailing, etc.) and a quality stamp (peer review). Less obvious services were quality control (subeditors to check your spelling and make sure the figures you submitted are actually the ones you meant to submit) and enabling browsing, by collecting together papers likely to interest the same readers. The internet allows one to dispense with the distribution function of publishers, if you’re willing to do without paper copies, which increasingly people are; and peer review is done by your peers anyway, so why pay a publisher for it?
There is an irreducible minimum of quality control that is required to produce papers that are reasonably easy to read (to my eye, many internet-only journals and some print journals are pushing hard against that limit). Someone has to organize that. Someone also has to do the paper-shuffling (e-mail shuffling?) required for peer review to be efficient. Beyond this, what do we want to pay for?
In the end, I think we only want to pay for added value in the editorial process, and this is where the debate gets ugly. On the one hand, there is some sense in which SNC/the glamor mags/the rejectionist journals add value, in that people want to read them more than they want to read other journals on the same topic. On the other hand, this value is added by rejecting about 90% of the papers that are submitted to them, which is the kind of value that many people feel they could do without — at least, with their author’s hat on. This kind of rejection rate isn’t sustainable in an author-pays model, by the way, at least not for a stand-alone publication: you do all the work of evaluating 10 papers, but you only get income from 1 paper (the one you publish). A stable of journals such as the PLoS journals can do better, by passing already-reviewed papers from more selective to less selective journals and thereby capturing as much as possible of the benefit of the reviewing work that has already been done.
Personally, I do value Science, Nature and Cell, and I’d like to see them continue with their current reader-pays model and the motivation of doing their very best for the reader, even if this means that many authors suffer the pangs of rejection. (It would be nice if these rejections could be a bit less brutal, of course.) I feel the same way about a handful of other journals; you probably do too, and your list is probably different from mine. Is it crazy to think about a two-tier model where a very small set of journals (maybe 10% of the total) are reader-focused and paid for by subscriptions, and the rest are focused on giving authors the best possible service and use the author-pays model? All original research would be open-access after a maximum of a year, of course: if a journal is genuinely adding value, it should have little to fear from this.
One of the complications that comes up when you talk about the value added by SNC is that there is real unhappiness about the level of power these journals, and journals in general, have over the careers of individual scientists. Mike Eisen has argued that the power of SNC is overrated, and I would like to believe he’s right, but I don’t. There’s not much question that the perceived quality of the journals you publish in matters to your career, even if some people do well without publishing in glamor mags, and even if Nobel-prizewinning research is not uncommonly rejected by them. During my first week of “flying solo” as an editor at Nature (after a training period during which a more senior editor held my hand), an author called me to complain about my decision to reject a paper and told me, tearfully, that this decision would prevent her from getting tenure. I hope she was wrong. I fear that there’s a chance she was right. If so, shame on the people who were evaluating her. As a Nature editor, I tried hard to do a good job of evaluating papers (especially after that agonizing phone call), but there is no way to do a perfect job. You’re dependent on which referees agree to evaluate the paper, how much time they happen to have to give to the task of evaluating it, how honest they are about their own biases, how open they are to seeing the potential importance of a paper that has surprising findings, and whether they’re in a crabby mood or a good mood. On top of that, the decision is affected by the state of the competition — not just how good the other papers are, but how good the referees for the other papers are. And of course, though I hate to admit it, editors make mistakes too. (Although we try really really hard not to, and sometimes spend sleepless nights worrying about whether we have.)
The community knows, of course, that the process of selecting papers for SNC is imperfect. So why is so much attention paid to the decisions these harried editors have made? The only explanation I can think of is that the other methods for deciding who gets preferment are even worse. Most of the time (with rare but distressing exceptions) the process of deciding which papers get published in SNC is somewhat impartial (compared with, say, letting the author’s friends or enemies in their home department decide how significant a given paper might be), reasonably prompt, and offers some measure of how important the work is relative to work in other fields. Not a perfect measure by any means, and especially imperfect because not everyone wants to run the gauntlet of attempting to get their papers into SNC, even when they have a paper that they know is very important. Still, it seems that we’ve collectively decided that any such measure is better than nothing. The proposals that have been put forward as alternatives — which often center on the “wisdom of crowds” and post-publication review — haven’t yet turned out to be very effective except in isolated cases, and may always be too slow to be useful when a young person is applying for a job or a grant. An effective alternative would need to do better than SNC in terms of quality assessments, be at least as impartial and fair as SNC, scale to the very large number of papers published every year, and deliver a conclusion within a year or so. I’m not saying it’s impossible. I’m just saying I haven’t seen it yet.
All of this notwithstanding, the reliance on journal decisions has clearly gone too far. It should not matter this much whether a paper is published in glamor journal A or very respectable journal B. From a journal editor’s perspective, the power the community has ceded to the journals is a scary and burdensome responsibility; my impression is that some of my ex-colleagues have responded to the increasing pressure by trying harder and harder to be perfect (which is, of course, impossible), adding more referees and taking longer to decide, and in the process eroding the features that used to be among SNC’s main virtues: rapid decision-making, rapid publication, and papers that were thought-provoking and interesting whether or not they were right. There was a time when people used to joke “just because it’s in Cell doesn’t mean it’s wrong”. Perhaps I don’t quite want to go back to those days — though bringing back Ben Lewin would be interesting, if nothing else — but I do wish we could have a more thoughtful discussion about the role and importance of journals than “SNC/Elsevier bad, Open Access good”.
If you made it this far — thank you for reading. I did warn you. As Tom Lehrer says, or rather sings:
My tragic tale I won’t prolong
And if you did not enjoy my song
You’ve yourselves to blame if it’s too long
You should never have let me begin.