Press "Enter" to skip to content

Peer Review and Open Access in the Headlines Again

Peer Review and Open Access on the Radio

The internal systems used in research and academia are not often the subject of public discussion. They are, after all, somewhat tedious and removed from the lives of the vast majority of people. Because of this, when I was listening to NPR on Friday morning and heard the words "peer review" and "open access", I immediately turned up the volume.

NPR was interviewing John Bohannon about a study he conducted in which he sent a deliberately flawed article to several hundred open access journals. Bohannon wrote about the study for Science as "Who's Afraid of Peer Review?" In the end, a majority of the journals he submitted to accepted the paper despite fundamental flaws that should have been obvious to anyone with a modicum of training in the field. NPR ran the story as "Some Online Journals Will Publish Fake Science, For A Fee" and described the study as a "sting". Bohannon is quoted as saying that the sting revealed "the contours of an emerging Wild West in academic publishing."

An Overview of Peer Review

Let's take a step back and talk about what peer review is. Journal articles are considered to be the gold standard for disseminating research in many fields. The work presented in journal articles is ostensibly mature enough to make a meaningful impact upon the field and the work of other researchers. Researchers send their manuscripts to journals, and journals need to decide what to publish as articles. The way journals decide this is through peer review.

When a manuscript is sent to a journal, an editor (briefly) reviews it. If it seems entirely unfit for publication in that journal, it is rejected outright. This type of "desk rejection" could be because the manuscript is of obviously poor quality or because it is not a good fit for the journal (e.g. sending a paper on the latest Alzheimer's treatments to a journal specializing in antebellum American history). If the paper seems like it may be a good fit, the editor sends it off to peer reviewers. Peer reviewers are members of the field who have the requisite knowledge to give informed criticism of the manuscript and decide whether it is fit for publication. The editor uses their connections to find reviewers for the paper, and then uses the reviewers' feedback to render a decision about whether to publish it.

(There are four basic decisions the editor can make: Accept, Accept with Revisions, Revise and Resubmit, and Reject. For a first-time submission, Revise and Resubmit (R&R) is by far the most likely outcome for a quality manuscript. Great work may get an Accept with Revisions, but even an R&R is still likely for great work.)
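To make the workflow above concrete, here is a minimal toy sketch in Python. The function, its parameters, and the triage logic are my own illustrative inventions, not any journal's actual system; only the four decision names come from the text above.

```python
from enum import Enum
from typing import Optional


class Decision(Enum):
    """The four basic editorial decisions."""
    ACCEPT = "Accept"
    ACCEPT_WITH_REVISIONS = "Accept with Revisions"
    REVISE_AND_RESUBMIT = "Revise and Resubmit"
    REJECT = "Reject"


def triage(in_scope: bool, obviously_poor: bool) -> Optional[Decision]:
    """The editor's first pass: desk-reject, or send out for peer review.

    Returns a Decision for a desk rejection, or None if the paper should
    go out to peer reviewers, whose feedback informs the final Decision.
    """
    if not in_scope or obviously_poor:
        return Decision.REJECT  # a "desk rejection": never reaches reviewers
    return None                 # send to 2-3 peer reviewers


# e.g. an Alzheimer's paper sent to an antebellum-history journal:
print(triage(in_scope=False, obviously_poor=False))  # -> Decision.REJECT
```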

The number of reviewers per paper isn't fixed, but two or three is typical. These reviewers are tasked with providing feedback to both the editor and the authors. Even a rejection can be helpful to authors if the reviewers' feedback is good. In the end, the purpose of peer review is to improve both the specific manuscript/study and the field as a whole by ensuring that only high-quality work is published. Work that isn't good enough is revised until it is.

Another quirk about peer review is that everyone involved is anonymous to the fullest extent possible. The reviewers receive anonymized copies of the manuscript, and the authors receive comments from the reviewers without ever knowing their names. The idea behind this is that reviewers don't have to be afraid of hurting feelings or spoiling relationships if they are critical of someone's work.

One last thing to note: by and large, neither the editors nor the peer reviewers make any money doing this work (it is all considered to be "Service" to the profession). I say by and large because maybe somewhere out there an editor is making some money doing this, but I've never heard of it. Publishing, however, takes time and money (even if it is electronic only), and somebody has to pay for it. In practice, either the readers (often through institutional subscriptions), the authors (in the form of a publication fee), or both bear the cost of publication.

An Overview of Open Access

There has been a movement lately (since the dawn of a ubiquitous Internet) toward Open Access (OA), meaning that published journal articles are made freely available to anyone. A commonly cited reason for this shift is that much research is taxpayer-funded, and charging taxpayers to read the research they already paid for seems unethical at best. Of course, there are many proponents of an even broader 'all-research-should-be-free' philosophy, but the former rationale is at least commonly agreed upon by researchers. OA journals generally charge the author a publication fee (often several hundred dollars).

The "traditional publishers" have been slower to respond to this call (why change the status quo when it favors you?), and in response many new OA journals have been made. Unfortunately, while many OA journals are highly-respected and comparable in quality to their traditional counterparts in every way except for their access policies, there has been a flood of low-quality "publications" that claim to be OA journals but are primarily interested in collecting publication fees. Because their primary interest is in making money, they want to publish as much as possible as quickly as possible, and subjecting each article to quality peer review doesn't fit this paradigm. (In fact, one doubts they could even find enough qualified reviewers.)

To help sort the wheat from the chaff, a librarian named Jeffrey Beall has created a list of journals and publishers suspected of being "predatory". While Beall's list is not without its flaws, it is an admirable undertaking that serves a valuable purpose in the current climate of OA journals. Furthermore, it has helped raise awareness of the problem of chaff among the wheat in both academia and the public. (I previously wrote about my experience dealing with one of the publishers on his list.)

Back to Bohannon and the "Sting"

Now back to Bohannon and the "sting" he conducted and published in Science (a traditional journal). Bohannon submitted the bogus article to hundreds of journals selected from two lists: the Directory of Open Access Journals (DOAJ) and Beall's list. While the DOAJ is a reasonable list to select from, since its aim is to provide what its name suggests, selecting from Beall's list seems unfair. The two lists overlap, and including journals that are on Beall's list but don't meet the DOAJ's standards amounts to deliberately including journals the community already suspects of being predatory or fake. Furthermore, no traditional journals were included in the "sting".

Responses from the OA Community

I've read three good responses to Bohannon's sting that specifically address this point: from the Open Access Scholarly Publishers Association (OASPA), from the DOAJ, and from Michael Eisen (co-founder of the Public Library of Science (PLOS)). Both the OASPA and Eisen challenge the results of Bohannon's study because of its failure to include traditional journals, and some OA advocates claim that similar results would be expected in the world of traditional journals. The OASPA response references two studies that compare OA journals to traditional journals, and Eisen brings up the high-profile case of Science itself publishing a paper that was later found to have substantial flaws that should have been caught by peer review. (The three above-linked responses are thorough - check them out.)

Implications for Reading and Citing Research

The media - and people in general, I suppose - like having a nice conclusion. Perhaps they were searching for something along the lines of "Open access is a nice idea, but traditional publishers are still needed". I reject that such a neat conclusion is possible (and strongly reject that hypothetical conclusion in particular). In the end, peer review isn't perfect. When reading a journal article - even one that has been through peer review - you must still actively engage with it to evaluate its strengths and weaknesses. If the article is strong in the areas relevant to you, cite away. If it has weaknesses, use caution and judgment. Viewing journal articles as less-than-gospel is certainly more work for the reader, but it is much closer to the way science actually works. If you as a researcher want to cite an article, you should feel confident doing so based on your own reading of it, not rely entirely on peer reviewers whom you do not know.

Furthermore, Bohannon's criticisms are neither new nor unique to OA journals. Criticisms of the quality of the peer review process in general are longstanding, with some critics going so far as to submit randomly generated papers for publication... and subsequently having them accepted. (See SCIGen and MathGen for some examples; a toy illustration of how such generators work follows.)
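For the curious, here is a minimal sketch in Python of the general technique behind such generators - recursive expansion of a context-free grammar. The grammar itself is a made-up toy of my own; SCIGen's actual rules are far more elaborate.

```python
import random

# A toy context-free grammar in the spirit of SCIGen (not its actual rules):
# uppercase symbols are nonterminals that expand to a randomly chosen production.
GRAMMAR = {
    "SENTENCE": [
        ["We", "VERB", "that", "NOUN", "is", "ADJ"],
        ["It is well known that", "NOUN", "is", "ADJ"],
    ],
    "VERB": [["demonstrate"], ["conjecture"], ["disconfirm"]],
    "NOUN": [["the partition table"], ["the lambda calculus"], ["Moore's Law"]],
    "ADJ": [["provably optimal."], ["NP-complete."], ["ubiquitous."]],
}


def expand(symbol: str) -> str:
    """Recursively expand a grammar symbol into a string of terminal words."""
    if symbol not in GRAMMAR:  # terminal: return the word(s) as-is
        return symbol
    production = random.choice(GRAMMAR[symbol])
    return " ".join(expand(s) for s in production)


print(expand("SENTENCE"))
# e.g. "We conjecture that Moore's Law is NP-complete."
```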

Possible Improvements to the System

Some of the weaknesses inherent in the current peer review system might be eliminated by eschewing anonymous peer reviewers. I've heard that non-anonymous peer review is becoming accepted in some areas of qualitative social science research. Extending this sort of attributable peer review to other sciences is not a new idea.

I would even propose going further than just signing one's name to one's reviews. I think a journal that published the reviewers' comments alongside each article and continued to accept feedback even after publication might be a worthwhile endeavor. I'm not sure if anyone's had that specific idea before, but I would like to draw on the collective wisdom of all who have read each article, if possible. It might be jarring at first to see a myriad of comments and criticisms along with each article, but these comments and criticisms would help define the appropriate role and use of the article in the field. Of course, particularly in social science research, one person's criticism may be entirely irrelevant within another's paradigm, so sifting through these comments might be messy. The technology to do this exists, at least, and it might be worth trying.
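As a rough sketch of what that might look like - purely hypothetical, with class and field names invented for illustration - each article could permanently carry its signed reviews and keep accumulating reader comments:

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class Review:
    """A signed, pre-publication review, published alongside the article."""
    reviewer: str        # real name: reviews here are attributable, not anonymous
    text: str
    recommendation: str  # e.g. "Accept with Revisions"


@dataclass
class Comment:
    """Post-publication feedback from any reader."""
    author: str
    text: str
    posted: date


@dataclass
class Article:
    title: str
    authors: list
    reviews: list = field(default_factory=list)   # fixed at publication time
    comments: list = field(default_factory=list)  # keeps growing afterward

    def add_comment(self, author: str, text: str) -> None:
        """Accept reader feedback even after publication."""
        self.comments.append(Comment(author, text, date.today()))


# An article carries its signed reviews and keeps collecting reader feedback:
paper = Article("A Study of Something", ["A. Author"])
paper.reviews.append(Review("R. Reviewer",
                            "Sound methods; the discussion overreaches.",
                            "Accept with Revisions"))
paper.add_comment("A. Reader", "The sample may not generalize beyond undergraduates.")
print(len(paper.comments))  # -> 1
```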

Final Thoughts

Open access has not been adopted equally across fields. In the field of Statistics Education, the two primary outlets for research - the Statistics Education Research Journal (SERJ) and the Journal of Statistics Education (JSE) - are both open access. Both SERJ and JSE do peer review and seem to maintain a good level of quality. OA journals may not have been the panacea that some expected, but they are here to stay and are already publishing high quality research in essentially every field. And while OA journals are not perfect, neither are traditional journals.
