The Wisdom of Crowd Review

Almost immediately after finishing ‘Time to publish then filter?’ (a post that highlighted a recent editorial in the BMJ outlining the need for an effective system of post-publication peer review), I came across this article by Millard in the Annals of Emergency Medicine:

This paper dovetails quite nicely with the editorial by Schriger and Altman. It is well worth reading if you are interested in the future of how clinical practice relates to the scientific process. Indeed, Millard quotes Schriger’s view that clinical practice is largely “disarticulated from science” and that “much clinical literature is published for reasons other than advancing knowledge”.

First, Millard reviews the peer review process:

The consensus emerging from “a handful of decent studies, less than half a dozen” is that peer review does help improve articles, but not enormously, and that its gatekeeping effect is overrated.

He notes that peer reviewers routinely miss errors in papers:

Even glaring errors in studies frequently slip through. Annals is one of several journals that have tested their reviewers by circulating fictitious articles with deliberately inserted flaws… finding that “basically, peer reviewers did dreadfully”.

He also emphasises that the gatekeeping function of peer review is overrated: papers rejected by more popular journals are simply shunted to less popular ones, eventually finding their way into print regardless.

One trouble is that despite this system, anyone who reads journals widely and critically is forced to realize that there are scarcely any bars to eventual publication. There seems to be no study too fragmented, no hypothesis too trivial, no literature citation too biased or too egotistical, no design too warped, no methodology too bungled, no presentation of results too inaccurate, too obscure, and too contradictory, no analysis too self-serving, no argument too circular, no conclusions too trifling or too unjustified, and no grammar and syntax too offensive for an article to end up in print.

Millard then discusses the possibility of a ‘publish then filter’ strategy, as championed by Richard Smith, former Editor of the BMJ. Previous experiments with open review by Nature and the MJA had mixed success. It will be interesting to see how the planned BMJ journal BMJ Open fares. This journal will be open access, both to read and for post-publication review, with publications funded by the authors and undergoing traditional peer review first.

But an open web-based model is hampered by the problem of selecting reviewers:

No one involved in the peer review debates is seriously proposing moving all the functions of medical editorial review wholly into public space, ie, to the free-fire zone of the Web’s comments sections or the original unmoderated Wikipedia, open not only to well-informed laypeople but to trolls, flamers, Astroturfers, the malicious, the misidentified, and the immature of all ages. Peer review migrating to the online environment needs to set a workable definition of a peer community.

And should the reviewers be blinded? Judging by its effect on review quality, attempting to blind authors and reviewers to one another could be abandoned, especially in highly specialised areas where there are only a small number of experts:

“blinding failed in 32% of cases, particularly when authors were well known, and had little effect on review quality… only fully open or fully closed review is justifiable and finding that the latter is infeasible… open systems alone are logical.”

But outcome may not be everything, if authors still do not feel that the process is fair:

“It does matter if it’s fair or not, but it also matters whether the author thinks it’s fair. And if you’re not blinded, a certain number of your authors are not going to think that that’s really fair.”

Perhaps, again, the sense that anonymity is essential for fairness is a cultural problem shared by medical and scientific researchers. Another problem with the current peer review process is elucidated by David Schriger:

“Instead of judging the methodological strengths and internal logic of an article, too many reviewers end up assessing results as if they were truth claims.”

To counter this, Schriger offers his view of an optimised peer review system:

“Truth… is better determined by the wider scientific community than by a small number of reviewers. An optimized review system… might gauge consensibility criteria such as study design, statistical power, complete data presentation, and clinical implications as aspects of prepublication review; having different reviewers examine methods and content, as Annals, NEJM, and certain other journals do, strengthens that assessment. Once peers deem an article consensible (that is, publishable), ‘it’s the job of the community to vet it.’”

Where will this all lead? Who knows. I sense that momentum is building, and that the current peer review model will eventually be revamped, reinvented, or simply scrapped. The future looks interesting… I think Web 2.0 will play a role in facilitating it.

Chris is an Intensivist and ECMO specialist at the Alfred ICU in Melbourne. He is also a Clinical Adjunct Associate Professor at Monash University. He is a co-founder of the Australia and New Zealand Clinician Educator Network (ANZCEN) and is the Lead for the ANZCEN Clinician Educator Incubator programme. He is on the Board of Directors for the Intensive Care Foundation and is a First Part Examiner for the College of Intensive Care Medicine. He is an internationally recognised Clinician Educator with a passion for helping clinicians learn and for improving the clinical performance of individuals and collectives.

After finishing his medical degree at the University of Auckland, he continued post-graduate training in New Zealand as well as Australia’s Northern Territory, Perth and Melbourne. He has completed fellowship training in both intensive care medicine and emergency medicine, as well as post-graduate training in biochemistry, clinical toxicology, clinical epidemiology, and health professional education.

He is actively involved in using translational simulation to improve patient care and the design of processes and systems at Alfred Health. He coordinates the Alfred ICU’s education and simulation programmes and runs the unit’s education website, INTENSIVE. He created the ‘Critically Ill Airway’ course and teaches on numerous courses around the world. He is one of the founders of the FOAM movement (Free Open-Access Medical education) and is co-creator of litfl.com, the RAGE podcast, the Resuscitology course, and the SMACC conference.

His one great achievement is being the father of three amazing children.

On Twitter, he is @precordialthump.

