
Saturday, 30 April 2016

Who's Afraid Of The Big Bad Reviewer ?


Peer review is something I've talked about before from time to time, but apparently I'm not making myself clear. I don't know why - I use plain, simple language, and it's not very hard to understand. But for the sake of having a go-to post, let me try and put things as briefly and as clearly as possible.

Peer review is not some forced, artificial method of enforcing dogma. It is an inherent and unavoidable part of the scientific method. It occurs at many different levels, from freewheeling discussions with colleagues, to the classical "some random experts read your paper" technique that is now synonymous with the term "peer review", right up to how other experts react when the findings are made public and/or the paper is published. While it's important to be aware that the journal-based peer review (JBPR) technique we all know and loathe today is a modern invention, it's also important to remember that science has never avoided some form of peer review entirely.

Skeptical inquiry demands that all ideas be subject to relentless attack, with a deliberate attempt to falsify them. The reason we do this is really quite simple : we want to establish the truth, be that for old (apparently secure) ideas or new, novel ones. If an idea stands up to at least one expert trying to disprove it, it's probably worth exploring further en masse. If it can't, it almost certainly isn't. JBPR is a way of restricting access to potential blind alleys before we get lost in them. In that sense, it is a fundamental part of the scientific process, not some forced product of ivory-tower academia.

Mind you I'd quite like to live in an ivory tower as long as it didn't hurt any elephants.

JBPR varies from journal to journal, but essentially it works like this. An author writes a paper and submits it for publication in a journal. The journal chooses one or more other scientists (usually recognized experts in the particular subject area) who decide whether the paper should be accepted, rejected, or re-reviewed after modifications. If the paper is rejected or modifications are requested, the author can argue their case with the reviewers and/or the journal editor, who provides oversight of the process. Normally the editor is known to both the author and the reviewers, but the author usually won't know who the reviewers are. Ultimately the author can request another referee if the editor agrees, or even submit the paper to another journal.

Different journals have different policies, but the role of JBPR is* not necessarily to establish whether a result is either novel or interesting - a result which agrees with an existing finding is still valuable, albeit usually less interesting if it fits established models. Nor does a journal article absolutely have to contain elaborate interpretation : it's entirely accepted, normal practice to publish papers which are nothing but catalogues of measurements. Sometimes that's literally all there is to it. Really. Honestly. I mean it, dammit.

* Or at least it shouldn't be.

Contrary to unpopular belief, it's fine to simply report results even if they fly in the face of accepted theory. Provided, that is, that you clearly explain how the experiment was done, how the measurements were taken, and don't go overboard with trying to explain the results. And of course the methods you use have to be rigorous : normally, saying, "we picked the data we liked best" (or reporting results which aren't statistically significant) will ensure a rejection letter.

If you're not a fan of JBPR, I implore you to think for a moment. What, exactly, is so unreasonable about asking someone to convince another expert that they have a publishable result if that doesn't even require any interpretation ?

JBPR is not supposed to be a method of proof or disproof. Absolute proof is very rare anyway, but widespread acceptance, which is much more common, almost never happens with the first publication of a result. For that to happen takes time - usually lots of time - for others to verify the findings. Alas, this very simple guideline of waiting to see whether the wider community can confirm or deny the initial result is almost entirely lost on the media, who think results are ludicrously black and white... but I digress.

They're also often very stupid.

Likewise, when a paper or proposal is rejected, that does not mean the result is disproven. It simply means it isn't good enough for a paper yet. In no way does that stop you from using other means of communicating with the scientific world : conferences, proceedings, arXiv, social media, press releases, whatever. But the chances are that if you couldn't persuade one anonymous expert that you had something worth investigating, you should either abandon your research (sometimes things are just wrong, deal with it) or get better data before you try again.

You might legitimately wonder why, if peer review doesn't actually disprove anything, scientists cry out for it like a flock of hungry seagulls who have just spotted a bunch of tourists eating bags of chips. The reasons are really very simple. Science is often an incredibly specialised process, and everyone makes mistakes - the reviewer is there both to criticize and to help. At least they are if they're doing their job properly.

Would that Bieber were being eaten by seagulls instead of this nice lady.

If you can convince someone who intimately understands the particular work you did, you're on much surer footing. You've reached a minimum level 0 standard worthy of further investigation : one of the few other people in the world who fully understands what you're doing is convinced you're not a crackpot, allowing others (who may not be so specialised) to have some (but by no means complete) confidence in what you've done. If you can't manage this, you're on very thin ice indeed, along with UFO believers and fans of Justin Bieber, probably. Remember, all you need to do is state your results and make it clear when you're speculating. You don't have to solve the entire mystery.

To be useful, JBPR has to be an exercise in skepticism, not denial. Where a paper does present interpretation, attacking weak points is not supposed to mean ripping it to shreds : i.e. the authors should probably say, "we think this is more likely" rather than, "we now know what the answer is". The reviewer should huff and puff and maybe try a chisel, but they aren't supposed to douse the thing in petrol and throw it to the piranhas - you can find faults with pretty much anything if you really want to. The reviewer's job is only to decide if the article is worth drawing to the attention of the wider community or not. It's not exactly verification or communication, just, "they haven't done anything obviously wrong - here, you take a look at it."

Of course, only a brain-dead gibbon would pretend that this process is perfect. A perfectly objective system run by inherently subjective creatures is fundamentally impossible. One guard against the inevitable bias of the reviewers is their anonymity (which they can discard if they so wish). Thus the reviewer's reputation is in no danger if they accept a publication that contravenes existing ideologies. Obviously that doesn't mean their own biases don't get in the way of being objective, but it greatly reduces the danger of a false consensus. Hence this is one area where transparency is undesirable.

EDIT : It's also worth noting that the journals generally don't pay the reviewers anything - it's just an expected part of any researcher's job. As well as ensuring the referees are free to speak their minds - a junior postdoc can contradict a distinguished professor - anonymity means there's no glory to be won as a reviewer. Refereeing is also, for most people, an extremely tedious chore that takes up weeks of time they could be spending on their own projects, so the direct tangible rewards of the process are essentially nil. Really, what more can you ask of the system ?

Not all reviewers are created equal. Some are pedantically anal idiotic twerps. Others are paragons of virtue and wisdom. Just like any other group of people, really.

That said, there's one aspect of the process I think would benefit immensely from transparency : the exchanges between the authors and the reviewers. This might have been technically difficult not so long ago, because paper costs money, but nowadays no-one reads the paper journals anyway. It would be easy enough to publish everything online. That way the review process itself could be reviewed, which would help everyone understand what the process is really about, not what some people (who've usually never experienced it for themselves) like to think it's about.

So no, JBPR isn't perfect, and it can't be. Is it better than not doing it at all ? Yes. The system includes many safeguards against idiotic referees - if you fail to convince two different reviewers and the journal editor that you even just measured something correctly, then the unfortunate truth is that you're probably just wrong. And there's absolutely nothing, err, wrong with that - getting things wrong is fundamental to the scientific method. But it's just not worth publishing fundamentally incorrect data if you can avoid it.

A very strange comment was raised that replication matters more than review. I suppose this might seem sensible if you've never heard of systemic bias, but... no. A thousand times, no, literally ! If you fail to convince an even larger number of reviewers of the validity of your result, the evidence against you has got stronger, not weaker. The only way replication could trump review is if there were mass bias or widespread incompetence among the experts, which is frankly just silly. Remember, there are far more idiots than experts, so it is entirely possible to get large numbers of people producing stupid results. And I repeat : all you have to do is report your result. You don't have to explain it. You just have to say, "we measured this" with some rigour (i.e. repetition, statistical significance, etc.). That's all. That is not an unreasonable level 0 requirement for publication.
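
Just to make "some rigour" a little more concrete, here's a minimal sketch - not from anything above, and with numbers invented purely for illustration - of the kind of level 0 check a referee would expect : repeat the measurement and test whether it actually deviates significantly from the expected value before claiming anything.

```python
# A minimal, purely illustrative "level 0" rigour check : repeated measurements
# compared against a reference value using a basic one-sample t-test.
# All numbers below are hypothetical.
import numpy as np
from scipy import stats

measurements = np.array([1.12, 1.08, 1.15, 1.11, 1.09, 1.13])  # hypothetical repeat measurements
reference = 1.00                                                # hypothetical expected value

t_stat, p_value = stats.ttest_1samp(measurements, reference)

# The usual (and much-argued-over) convention : report a deviation only if p < 0.05.
if p_value < 0.05:
    print(f"Deviation from the reference looks significant (p = {p_value:.3g}).")
else:
    print(f"No significant deviation (p = {p_value:.3g}) - not much to report yet.")
```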

If you insist on finding faults with journal-based science, here's one that's both real and serious : writing style. That's another topic, but in brief, it's god-awful. Certain authors seem to take a perverse delight in making their results as obfuscated as possible. It's the old axiom, "if you can't convince, then confuse", writ large. It's bad for science and bad for public communication. Refusing to allow contractions (e.g. isn't, don't, can't, etc.) or insisting on using "per cent" instead of % is just bloody stupid. But that's a rant for another day - or possibly not, because the Atlantic article linked is pretty comprehensive.


So, that's it. Journal-based peer review is not a big scary monster hell-bent on enforcing dogma, nor is it any kind of authority with a monopoly on truth. It's just a recognized minimum standard of quality. Where it goes beyond that - and inevitably sometimes it does - it's straying into dangerous territory. You may well point to particular flaws in particular journals or with particular reviewers. But there's nothing remotely wrong with the method itself. You simply cannot do science without skeptical inquiry - and absolutely no-one is competent or trustworthy enough to be allowed a free hand. Get someone else to have a stab at it, and if it doesn't bleed to death on the first attempt, let everyone else have a go. That's all there is to it.

