The past few years have been a period of significant turmoil — some of it quite constructive — for publishers and editors of science journals. Controversies regarding potential conflicts of interest have led some journals to reexamine their rules for revealing the financial relationships of published researchers. Competition from free online “open access” journals, such as the six new journals published by the nonprofit Public Library of Science, has led several mainstream print journals to beef up their online offerings. And some notable journals concerned about fraudulent research have reportedly improved the screening of manuscripts under consideration, in an attempt to catch those who would misrepresent or “beautify” their data. (“Let’s celebrate real data,” the editors of Nature Cell Biology recently wrote, “wrinkles, warts, and all.”)
The most interesting change stirring in the world of science and medical journals — and the change likely to have the most far-reaching impact — relates to peer review. Also known as “refereeing,” the peer review process is used by journal editors to help decide which papers are worth publishing. Some researchers may assume that peer review is a nuisance scientists have always had to tolerate in order to be published. In reality, peer review is a fairly recent innovation, not widespread until the middle of the twentieth century. In the nineteenth century, many science journals were dominated by what Ohio State University science historian John C. Burnham dubbed “crusading and colorful editors,” who made their publications “personal mouthpieces” for their individual views. There were often more journals than scientific and medical papers to publish; the last thing anyone needed was a process for weeding out articles.
In time, the specialization of science made it impossible for editors to competently evaluate every submission they received. About a century ago, Burnham notes, science journals began to direct papers to distinguished experts who would serve on affiliated editorial boards. Eventually — especially following the post-World War II research boom — the deluge of manuscripts and their increasing specialization made it difficult for even an editorial board of a dozen or so experts to handle the load. The peer review system developed to meet this need. Journal editors began to seek out experts capable of commenting on manuscripts — not only researchers in the same general field, but researchers familiar with the specific techniques and even the laboratory materials described in the papers under consideration. The transition from the editorial board model to the peer review model was eased by technological advances, such as the Xerox copier introduced in 1959, that reduced the hassle of sending manuscripts to experts scattered around the globe. There remained holdouts for a while — as Burnham notes, the Tennessee Medical Association Journal operated without peer review under one strong editor until 1971 — but all major scientific and medical journals have relied on peer review for decades.
In recent times, the term “peer reviewed” has come to serve as shorthand for “quality.” To say that an article appeared in a peer-reviewed scientific journal is to claim a kind of professional approbation; to say that a study hasn’t been peer reviewed is tantamount to calling it disreputable. Up to a point, this is reasonable. Reviewers and editors serve as gatekeepers in scientific publishing; they eliminate the most uninteresting or least worthy articles, saving the research community time and money.
But peer review is not simply synonymous with quality. Many landmark scientific papers (like that of Watson and Crick, published just five decades ago) were never subjected to peer review, and as David Shatz has pointed out, “many heavily cited papers, including some describing work which won a Nobel Prize, were originally rejected by peer review.” Shatz, a Yeshiva University philosophy professor, outlines some of the charges made against the referee process in his 2004 book Peer Review: A Critical Inquiry. Among them: reviewers are often not really “conversant with the published literature”; they are “biased toward papers that affirm their prior convictions”; and they “are biased against innovation and/or are poor judges of quality.” Reviewers also seem biased in favor of authors from prestigious institutions. Shatz describes a study in which “papers that had been published in journals by authors from prestigious institutions were retyped and resubmitted with a non-prestigious affiliation indicated for the author. Not only did referees mostly fail to recognize these previously published papers in their field, they recommended rejection.”
The Cochrane Collaboration, an international healthcare analysis group based in the U.K., published a report in 2003 concluding that there is “little empirical evidence to support the use of editorial peer review as a mechanism to ensure quality of biomedical research, despite its widespread use and costs.” The Royal Society has also studied the effects of peer review. As the chairman of the investigating committee told a British newspaper in 2003, “We are all aware that some referees’ reports are not worth the paper they are written on. It’s also hard for a journal editor when reports come back that are contradictory, and it’s often down to a question of a value judgment whether something is published or not.” He also pointed out that peer review has been criticized for being used by the scientific establishment “to prevent unorthodox ideas, methods, and views, regardless of their merit, from being made public” and for its secretiveness and anonymity. Some journals have started printing the names of each article’s referees; the British Medical Journal (BMJ), for instance, decided to discontinue anonymous peer reviews in 1999. The new system, called “open peer review,” allows for more transparency and accountability but may discourage junior scientists from critically reviewing the work of more senior researchers for fear of reprisal.
Perhaps the most powerful criticism of peer review is that it fails to achieve its core objective: quality control. Shatz describes a study in which “investigators deliberately inserted errors into a manuscript, and referees did a poor job of detecting them.” And critics of peer review need look no further than recent high-profile papers that turned out to be hoaxes — like the massive case of scientific fraud perpetrated by South Korean stem cell researcher Hwang Woo Suk in Science. Of course, no one should expect a perfect system, or condemn peer review as a whole for its occasional failures. Back in 2003, the editors of Nature Immunology lamented “the expectation in the popular press that peer review is a process by which fraudulent data is detected before publication.” Peer reviewers, they argued, cannot be expected “to ferret out cleverly concealed, deliberate deceptions.” But even granting this truth, the question remains: Is peer review the best process for promoting the highest quality science?
Beyond the many criticisms of peer review — some new, some perennial — two recent developments are especially intriguing. First, the open-access journals, which already use the Internet as their basic means of publication, are now finding ways to incorporate many so-called “Web 2.0” tools for collaboration, comment, and criticism. So, for example, a forthcoming multidisciplinary academic journal called Philica seeks to institute a peer review process that is “transparent” (meaning that “reviews can be seen publicly”) and “dynamic” (“because opinions can change over time, and this is reflected in the review process”). Instead of following the print-journal model of publishing articles after peer review, Philica will publish articles before peer review. “When somebody reviews your article, the impact of that review depends on the reviewer’s own reviews,” the Philica website says. “This means that the opinion of somebody whose work is highly regarded carries more weight than the opinion of somebody whose work is rated poorly. A person’s standing, and so their impact on other people’s ratings, changes constantly as part of the dynamic Philica world. Ideas and opinions change all the time — Philica lets us see this. This really is publishing like never before.”
Another new open-access journal is likely to have an even bigger impact on the scientific community. The Public Library of Science will launch its seventh journal, called PLoS ONE, in November 2006. In an implicit challenge to Nature and Science, PLoS ONE will be the first of the group’s journals to publish articles in all areas of science and medicine. Articles published in the new journal will undergo peer review, but some of the standard criteria that older journals use to screen out articles — like “degree of advance” or “interest to a general reader” — won’t be used by PLoS ONE reviewers; all papers of scientific merit will be posted to the public record. Only weeks (not months) will go by before a submitted article is published, since instead of coming out periodically, issue by issue, PLoS ONE will be in a state of continuous publication. A more public review process will continue after publication, as readers will be able to rate, annotate, and comment on papers, and authors will be able to respond to those comments. The original paper will remain unchanged, but comments, revisions, and updates will orbit nearby, a kind of electronic Talmud surrounding every article of significance.
It is easy to believe, in reading the plans for this new publication, that it truly represents “the first step” in a wonderful “revolution” (as the Public Library of Science puts it). But it is worth remembering that gates and gatekeepers serve the important function of keeping out barbarians; it would be regrettable if the world of science journals came to suffer the sort of “trolling” and “flaming” so common today in comments on blogs and Internet discussion boards. It would be unfortunate if the deliberate, measured character of scientific research and discourse were lost to a culture of speed, hype, and quick-hit comments.
The second major development is that traditional peer review is under reconsideration even within the heart of establishment scientific publishing. This summer, the journal Nature is experimenting with its own system of public review. Although the journal’s articles will continue to go through the standard closed peer review process, a public version of peer review will operate in parallel: certain submissions will be posted online to solicit reader feedback, in hopes that experts will voluntarily review the articles. If this experiment shows that posted “pre-prints” receive enough attention online, Nature will apparently consider altering its traditional peer review practices. The journal is meanwhile sponsoring an ongoing online debate about peer review, with articles about the pros, cons, and future of refereeing.
What to make of all this? Peer review will surely not disappear overnight, but there are clear indications that it will evolve in the next few years as the established journals come to terms with Internet publication. Already in some fields of science, like physics and astronomy, the print journals have receded in importance due to online repositories like arXiv (pronounced “archive”) that disseminate studies without the hassle of peer review. The last few decades of peer review may someday be remembered as a peculiar period in the history of science, an aberration produced by an explosion of researcher productivity and the constraints of print publication, eventually superseded by a fuller, nonstop scientific conversation. But we should not declare a revolution too soon or dismiss too easily the significant achievements of the current system, even as we acknowledge its many shortcomings and prepare to take full advantage of the new technologies of publishing.