Peer review is something I've talked about before from time to time, but apparently I'm not making myself clear. I don't know why; I use plain, simple language, and it's not very hard to understand. But for the sake of having a go-to post, let me try to put things as briefly and as clearly as possible.
Peer review is not some forced, artificial method of enforcing dogma. It is an inherent and unavoidable part of the scientific method. It occurs at many different levels, from freewheeling discussions with colleagues, to the classical "some random experts read your paper" technique that is now synonymous with the term "peer review", right up to how other experts react when the findings are made public and/or the paper is published. While it's important to be aware that the journal-based peer review (JBPR) technique we all know and loathe today is a modern invention, it's also important to remember that science has never avoided some form of peer review entirely.
Skeptical inquiry demands that all ideas be subject to relentless attack, with a deliberate attempt to falsify them. The reasons we do this are really quite simple: we want to establish the truth, whether for old (apparently secure) ideas or new, novel ones. If an idea stands up to at least one expert trying to disprove it, it's probably worth exploring further en masse. If it can't withstand even that, it almost certainly isn't. JBPR is a way of restricting access to potential blind alleys before we get lost in them. In that sense, it is a fundamental part of the scientific process, not some forced product of ivory-tower academia.
[Image caption: Mind you, I'd quite like to live in an ivory tower, as long as it didn't hurt any elephants.]
Different journals have different policies, but the role of JBPR is* not necessarily to establish whether a result is either novel or interesting - a result which agrees with an existing finding is still valuable, albeit usually less interesting if it fits established models. Nor does a journal entry absolutely have to contain elaborate interpretation: it's entirely accepted, normal practice to publish papers which are nothing but catalogues of measurements. Sometimes that's literally all there is to it. Really. Honestly. I mean it, dammit.
* Or at least should.
Contrary to unpopular belief, it's fine to simply report results even if they fly in the face of accepted theory. Provided, that is, that you clearly explain how the experiment was done, how the measurements were taken, and don't go overboard with trying to explain the results. And of course the methods you use have to be rigorous: normally, saying, "we picked the data we liked best" (or reporting results which aren't statistically significant) will ensure a rejection letter.
If you're not a fan of JBPR, I implore you to think for a moment. What, exactly, is so unreasonable about asking someone to convince another expert that they have a publishable result, if that doesn't even require any interpretation?
JBPR is not supposed to be a method of proof or disproof. Absolute proof is very rare anyway, but widespread acceptance, which is much more common, almost never happens with the first publication of a result. For that to happen takes time - usually lots of time - for others to verify the findings. Alas, this very simple guideline of waiting to see whether the wider community can confirm or deny the initial result is almost entirely lost on the media, who treat results as ludicrously black and white... but I digress.
[Image caption: They're also often very stupid.]
You might legitimately wonder why, if peer review doesn't actually disprove anything, scientists cry out for it like a flock of hungry seagulls who have just spotted a bunch of tourists eating bags of chips. The reasons are really very simple. Science is often an incredibly specialised process, and everyone makes mistakes - the reviewer is there both to criticize and to help. At least they are if they're doing their job properly.
[Image caption: Would that Bieber were eaten by seagulls instead of this nice lady.]
To be useful, JBPR has to be skeptical, not denialist. Where a paper does present interpretation, attacking its weak points is not supposed to mean ripping it to shreds: i.e. the authors should probably say, "we think this is more likely" rather than, "we now know what the answer is". The reviewer should huff and puff and maybe try a chisel, but they aren't supposed to douse the thing in petrol and throw it to the piranhas - you can find faults with pretty much anything if you really want to. The reviewer's job is only to decide if the article is worth drawing to the attention of the wider community or not. It's not exactly verification or communication, just, "they haven't done anything obviously wrong; here, you take a look at it."
Of course, only a brain-dead gibbon would pretend that this process is perfect. A perfectly objective system run by inherently subjective creatures is fundamentally impossible. One guard against the inevitable bias of the reviewers is their anonymity (which they can discard if they so wish). Thus the reviewer's reputation is in no danger if they accept a publication that contravenes existing ideologies. Obviously that doesn't mean their own biases don't get in the way of being objective, but it greatly reduces the danger of a false consensus. Hence this is one area where transparency is undesirable.
EDIT: It's also worth noting that the journals generally don't pay the reviewers anything; it's just an expected part of any researcher's job. As well as ensuring the referees are free to speak their minds - a junior postdoc can freely criticize a distinguished professor - anonymity means there's no glory to be won as a reviewer. Refereeing is also an extremely tedious chore for most people, one that takes weeks of time they could be spending on their own projects, so the direct tangible rewards of the process are essentially nil. Really, what more can you ask of the system?
[Image caption: Not all reviewers are created equal. Some are pedantically anal idiotic twerps. Others are paragons of virtue and wisdom. Just like any other group of people, really.]
So no, JBPR isn't perfect, and it can't be. Is it better than not doing it at all? Yes. The system includes many safeguards against idiotic referees - if you fail to convince two different reviewers and the journal editor that you even just measured something correctly, then the unfortunate truth is that you're probably just wrong. And there's absolutely nothing, err, wrong with that: getting things wrong is fundamental to the scientific method. But it's just not worth publishing fundamentally incorrect data if you can avoid it.
A very strange comment was raised that replication matters more than review. I suppose this might seem sensible if you've never heard of systemic bias, but... no. A thousand times, no, literally! If you fail to convince an even larger number of reviewers of the validity of your result, the evidence against you has got stronger, not weaker. The only way that could flip is if there were mass bias or widespread incompetence among the experts, which is frankly just silly. Remember, there are far more idiots than experts, so it is entirely possible and plausible to get large numbers of people producing stupid results; replication alone proves nothing. And I repeat: all you have to do is report your result. You don't have to explain it. You just have to say, "we measured this" with some rigour (i.e. repetition, statistical significance, etc.). That's all. That is not an unreasonable baseline requirement for publication.
If you insist on finding faults with journal-based science, here's one that's both real and serious: writing style. That's another topic, but in brief, it's god-awful. Certain authors seem to take a perverse delight in making their results as obfuscated as possible. It's the old axiom, "if you can't convince, then confuse", writ large. It's bad for science and bad for public communication. Refusing to allow contractions (e.g. isn't, don't, can't, etc.) or insisting on using "per cent" instead of % is just bloody stupid. But that's a rant for another day - or possibly not, because the Atlantic article linked is pretty comprehensive.
So, that's it. Journal-based peer review is not a big scary monster hell-bent on enforcing dogma, nor is it any kind of authority with a monopoly on truth. It's just a recognized minimum standard of quality. Where it goes beyond that - and inevitably sometimes it does - it's straying into dangerous territory. You may well argue for particular flaws in particular journals or with particular reviewers. But there's nothing remotely wrong with the method itself. You simply cannot do science without skeptical inquiry - and absolutely no-one is competent or trustworthy enough to be allowed a free hand. Get someone else to have a stab at it, and if it doesn't bleed to death on the first attempt, let everyone else have a go. That's all there is to it.