But this post was different. Nemesis, some claimed, was "junk science" or worse, "drivel", and the post should be removed. I could not let that pass. Richard Muller's Nemesis : The Death Star was one of my favourite popular science books from my early teenage years. I had quite clear memories of it being exceptionally well-written, ruthlessly honest, and giving a detailed and interesting account of what it's really like to actually do science on a day-to-day basis. Most outreach books tell you only about the theories, whereas Nemesis also told you about the other aspects : the workload involved in getting a result, how other scientists and the media reacted, how people changed their minds or became dogmatic - in short, what the scientific method is really like, not what it's supposed to be.
So I ordered my copy for some ridiculously small amount of currency, and three weeks later it finally arrived. Three days later, I'd finished reading it, having found it quite hard to do anything else.
Nemesis is a real gem. The fact that it's about a star that disturbs comets and is likely wrong is absolutely irrelevant - it's a tour de force in intellectual honesty and a brilliantly clear description of what front-line research is really like. Still, for the sake of it, I suppose I should explain the basic premise.
The book is actually about two ideas. The first is the now well-established idea that the impact of a comet or an asteroid killed the dinosaurs 65 million years (Myr) ago. The key evidence for this is a layer of iridium across the globe at the end of the Cretaceous period. Other claims that this could have been due to a supernova or from volcanoes were very carefully examined and found to be wrong. Not merely unlikely, but quite wrong. The second, far more controversial idea is that mass extinctions recur roughly every 26 Myr, which Muller's group explained by proposing a faint companion star to the Sun - Nemesis - whose orbit periodically disturbs the Oort cloud and sends comets raining into the inner Solar System.
|Image credit : me.|
|From this summary article. The arrows are separated by 26 Myrs.|
I've decided that this review should concentrate on the philosophy of science aspects of the book. Hence while the book itself forms a clear, straightforward narrative, I'm going to organise this review thematically. This deceptively simple theory provides an excellent lens through which to examine the complex nature of the scientific method. As I've described before and no doubt will again, the forefront of research has little resemblance to the fact-checking approach taught in schools.
Is It Really A Theory ?
"But I thought 'theory' had a special meaning for scientists", you might say. Yes and no. I suspect I've probably put about confusing mixed messages on this myself, so it's worth trying to clear this up. Throughout the book, Muller uses "theory" just like anyone else does : a synonym for "model" or "possible explanation that fits the currently-known facts". He never uses it with the special sense of also being extremely well-tested, which is a common retort to anyone who says, "just a theory" today. Indeed, he even uses, "just a theory" himself. So do real scientists, all the time. Happens every day.
While it would certainly be extremely useful to have a distinct word for very well-tested model, as opposed to hypothesis, the reality is we just don't. The word isn't used like that. To hell with whatever the "official" definition is - it's usage that matters. And as I've described very recently, even very sophisticated models which explain the facts with very impressive precision can still be utterly wrong. So is it fair to say, "only a theory" after all ?
Without some rigorous, objective definition of "very well-tested", I think it has to be done on a case-by-case basis. Certainly you cannot say that evolution is only a theory, because speciation has been observed to be happening. And it's certainly not fair to say that relativity is "only" a theory either, because its predictions have been verified time and time again with insane precision. It could still be wrong, but it makes no sense to call it "only" a theory - that cheapens the immense workload involved in both creating the idea and testing its predictions. But you definitely can say "only a model" for lesser ideas that have not withstood decades of careful testing.
Extraordinary Evidence Requires Extraordinary Claims
Or as Muller puts it on page 4, "ludicrous results require ludicrous theories". This is the flip side of the famous quote, "Extraordinary claims require extraordinary evidence". Not everyone likes this, though I tend to agree with it. If your claim (or theory) is in stark contrast to very well-established results, the burden of proof is firmly on you. To disprove a widely-accepted idea (except during those rare cases* when the establishment has reached a stupid conclusion), you ought to have pretty good evidence against most (not necessarily all) of the major points. Collectively, the strength of this evidence would have to be extraordinary - though, importantly, each individual piece of evidence need not be especially strong.
* There was a very nice recent article I wanted to link here, on describing what actually happened to scientists who claimed that animals could think, but I can't find it. Do let me know if you think you've found it.
But somehow the reverse of this had never really occurred to me - at least, not in such an eloquent way. In fairness I have mentioned on occasion that sometimes evidence forces you to a conclusion you may not like. That is, after all, how science works. The difference is that this is a nice succinct, quotable reminder that sometimes you cannot avoid seemingly crazy ideas.
Don't Be Hasty
Or more importantly, don't be overly-critical. In Muller's words, skepticism needs to be "finely honed". This is one of my favourite aspects of the book, a running theme for which many examples are given. It's a simple enough idea which I've pointed out elsewhere many times : if you really attack an idea too strongly (especially when it's in its infancy), you can shoot anything down - even really good ideas. It's the difference between true skepticism and denial.
Knowing where to draw the line isn't easy. Muller gives many examples of when he and others seemed to be straying into denial (and even outright abuse) rather than skepticism. Indeed the book opens with his former PhD supervisor arguing that an idea is just stupid based on authority. Obviously real scientists aren't supposed to do that, but they do anyway. Although Muller wins the argument, near the end of the book we find his supervisor apparently hasn't learned anything, dismissing a related idea as "just nonsense" without examining it. Yet these are exceptions - far from being a critique, much of the book actually feels like a bromance with his supervisor. Scientists aren't saints.
And Muller gives plenty of examples of when he spots his own over-skepticism too. Toward the end, he gets word of a similar rival theory in which he finds a flaw. He called the authors :
"A few days later he called back and said my criticism was indeed valid, and their old theory was in fact wrong. But now they had a revised variation on the theory that didn't have the same weakness... I asked myself why I hadn't attempted to salvage their old theory, rather than just knock it down ? I realised once again that I had been getting lazy. I had a theory of my own, and I was trying to disprove other theories. I wasn't trying to find alternatives that worked."The Platonic ideal of science is that it's about finding out what's happening. In practise, it's at least as often about disproving your rivals - which is part of the reason why peer review is important. It's usually a lot easier to find out what doesn't work than what does, and all too often we fall into the trap of trying to shoot people down rather than uncovering the truth. Which is not to say that some ideas don't deserve to be shot down. It's complicated. A couple of passages deserve to be quoted at length :
"Skepticism, the ability not to be fooled, was clearly important, but it is also cheap. It is easy to disbelieve everything, and some scientists seemed to take this approach. Sometimes Luis was skeptical, but more often he seemed to embrace crazy ideas, at least at first. He rarely dismissed anything out of hand, no matter how absurd, until he examined it closely. But then one tiny flaw, solidly established, was enough to kill it. His openness to wild ideas was balanced by his firmness in dismissing those that were flawed. He had a finely honed skepticism... A scientist differs from other people in that he knows how easily he is fooled, and goes through procedures to compensate."And later :
"Scientists are trained to be skeptical, to doubt, to test everything... but they never mention that too much skepticism can be just as bad as too little. When presented with a new, startling, and strange result, it is easy to find flaws and come up with reasons to dismiss the finding. Even if the skeptic can't find an outright mistake, he can say, "I'm not convinced". In fact, most scientists (myself included) have found that if you dismiss out of hand all claims of great new discoveries, you will be right 95% of the time. But every once in a while, there will be that rare occasion when you are wrong. Likewise you cannot afford to lose your skepticism or you will waste your time in hopelessly blind alleys.
How do you develop the right sense of skepticism - when to dismiss and when to take seriously ? How do you argue with someone who has a different level of skepticism ? How do you respond to the statement, "I'm not convinced" ? The best way, the only possible way, is to go on with the work. Be grateful that the competition has not even entered the race, and has left all the fun to you. Someone had once said, "Research is the process of going up alleys to see if they are blind.""

One technique I've recently been finding useful is to numerically estimate the odds you think that a new idea could be correct. It really doesn't matter how you arrive at the number, that's not the point. The point is to remind yourself that you might be wrong. You can give extremely high odds if you want, but there is almost always some wiggle room which demands you consider the unknown unknowns. Trying to put a number on it forces you to consider possible alternatives, which you might otherwise not do.
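To make the odds-estimation habit concrete, here's a toy sketch of my own (nothing from the book, and all the figures are invented for illustration) : convert your gut odds against an idea into a probability, then watch how a few pieces of moderately favourable evidence erode even a confidently dismissive starting position.

```python
# Toy illustration of putting numbers on your skepticism.
# None of this comes from the book; the figures are made up for the example.

def odds_to_probability(odds_against: float) -> float:
    """Convert odds of N-to-1 against an idea into a probability it's true."""
    return 1.0 / (1.0 + odds_against)

def update_odds(odds_against: float, likelihood_ratio: float) -> float:
    """Bayesian update: divide the odds against by how strongly a new piece
    of evidence favours the idea (likelihood ratio > 1 means the evidence
    is more probable if the idea is true)."""
    return odds_against / likelihood_ratio

# Suppose I give a wild new theory odds of 100-to-1 against...
odds = 100.0
print(odds_to_probability(odds))   # ~0.0099 - about a 1% chance

# ...then three independent observations each favour it 5-to-1.
for _ in range(3):
    odds = update_odds(odds, 5.0)
print(odds_to_probability(odds))   # ~0.56 - suddenly worth taking seriously
```

The exact numbers are beside the point, as Muller says; the exercise is valuable because writing down "100-to-1 against" forces you to admit that you don't mean "impossible".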
You're Usually Wrong
And these ideas really are outlandish and ridiculous. There was the idea that the dinosaurs were all killed by a HUMONGOUS tsunami that somehow swept entire continents clean. There was the one about hydrogen from the Sun combining with water in the atmosphere to form excessive clouds that darkened the sky. How about the one that comets impacting sunspots get vaporised and blown back to Earth where they cause a magnetic field flip ? Or the one about the Sun going nova, which any undergraduate astronomer could tell them is fundamentally impossible ?
This ties in closely with the idea that you shouldn't be overly-skeptical. What you should do, in my view, is be aware of the alternatives but not necessarily investigate them yourself. That's really a personal choice and you're not obliged to investigate someone else's crazy idea. But if you raise an objection, you're obliged to hear their counter-argument.
For example, right at the start of the book there was the statement that Nemesis' apogee would be about 2.8 light years from the Sun. They considered this to be close enough that it wouldn't get pulled away by other stars. That had me extremely worried that the book was, after all, junk science. 2.8 light years is more than half the distance to the nearest star, so the Sun won't affect it as much as the other stars*. So if I'd been in the room, I'd have shot the idea down instantly.
* They also had more sophisticated models where the orbit wasn't so large but the star still managed to enter the Oort cloud every 26 Myr, but found that these didn't work.
This is still my strongest objection, but they did (eventually) come up with a clever counter-argument. Passing stars, they say, will only influence Nemesis when they are within about a light year or so (not sure where they get this number from) and at the speed they're typically moving this will rarely last for more than 30,000 years. The time between perturbing stars is more like a million years, so passing stars don't act for long enough to do much. And the direction they perturb Nemesis will be random, whereas the gravitational attraction towards the Sun is always in the same direction. They also came up with numerical models to investigate this, though no details were given.
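The timescale part of their counter-argument is at least easy to sanity-check with back-of-the-envelope numbers. This is my own rough arithmetic using typical textbook values, not the actual calculation from the paper : a field star moving at a few tens of km/s crosses a sphere of one light year radius in a few tens of thousands of years, which is indeed brief compared to both the ~million-year gap between encounters and Nemesis' 26 Myr orbit.

```python
# Back-of-the-envelope check of the "passing stars act too briefly" argument.
# The speed and distance figures are typical textbook values, not the
# paper's actual inputs.

LIGHT_YEAR_M = 9.461e15        # metres in one light year
SECONDS_PER_YEAR = 3.156e7

def crossing_time_years(path_ly: float, speed_km_s: float) -> float:
    """Time for a star to traverse a path of the given length."""
    return (path_ly * LIGHT_YEAR_M) / (speed_km_s * 1e3) / SECONDS_PER_YEAR

# A star passing through a sphere of 1 light year radius travels at most
# a 2 light year chord; take a typical relative speed of 30 km/s.
encounter = crossing_time_years(2.0, 30.0)
print(f"Encounter lasts ~{encounter:.0f} years")  # ~20,000 years

# Compare with the ~1 Myr gap between encounters and the 26 Myr orbit:
print(encounter / 1e6)    # perturbations are 'on' only ~2% of the time
print(encounter / 26e6)   # and last a tiny fraction of one Nemesis orbit
```

That makes their quoted 30,000 years look plausible as an order of magnitude, though it says nothing about whether the cumulative effect of many such kicks is really negligible.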
Having experienced first hand how difficult orbital dynamics can be, I ended up less skeptical than at the start. But without more details, I'm still not fully convinced. My perhaps naive concern is that once you end up deeper in the gravity well of another star than the Sun, you're in big trouble. I don't know.
Observations Can Be Wrong
While we're dealing with the things I don't like about the book (few as those are), the thing I found strangest was a consistent attitude that if the observational numbers were in conflict with the theory, then the observations must be wrong. Now, I've just publicly rubbished* the EmDrive, which claims to be producing minute amounts of thrust even though theory says it shouldn't - so you might well accuse me of hypocrisy. But the theory against the EmDrive - the conservation of momentum - has literally been tested for centuries, whereas Nemesis had been investigated by a handful of people for a few months. So I found it a mite strange that when they found a number that didn't agree, they didn't immediately regard it as falsifying their theory.
* Google Plus link is liable to disappear. In that thread I argued with people who did not understand that peer review trumps repetition. Repeatedly claiming to get the same result does not strengthen your claim in the slightest unless someone actually checks what you've done, not to mention systematic effects. I summarised this here.
And yet, reluctantly, I'm forced to agree that their approach was correct. They cite several examples where, on checking, the observational numbers were indeed found to be in error. Sometimes those changes were in favour of their theory, but sometimes they were against it. And when they were against it, they did the only reasonable thing possible : they changed their minds. Importantly, they attacked their own findings with the same zeal they applied to others.
By far the best example concerns an early idea that the dinosaurs were killed by a supernova. The experiment they ran to test this looked for minute amounts of plutonium by irradiating a sample of Cretaceous rock. It was not a straightforward procedure. It took two weeks just to prepare, then required a literally all-night session (since the radioactive material produced quickly decays) to make the measurement. Initially, the results were astonishing. It seemed like a clear, decisive victory, and for a few glorious moments it seemed as though a select few people knew what really killed the dinosaurs before anyone else did. And yet... the amount of plutonium detected was too low. So they re-did the entire thing, and discovered they'd picked up contamination from elsewhere in the lab. Their initial result was simply wrong.
Observations have the last word, of course. But observational measurements involve just as much care as the theoretical predictions, and if the two are in conflict, it's maybe not quite so easy to decide which is correct.
Don't Be Fooled
As mentioned earlier, a scientist should be aware of how easily they can fool themselves - even, as we've just seen, with observational results. That's why we demand statistical significance. Sometimes raw numbers aren't enough, which is demonstrated in this case by the extinction and cratering periodicity. Luis Alvarez, Muller's former supervisor, wasn't convinced by their statistical analysis, finding it too weak to be worth considering. So Muller devised a way to let Luis convince himself. He generated artificial plots of the cratering, some of which were random and some with periodicity. He gave these to Alvarez unlabelled, along with the true data and told him to select the three he thought were the most periodic. Alvarez's selection included the real plot and two of the plots with artificial periodicity - without knowing it, he'd rejected the ones which were purely random.
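The spirit of that blind test is easy to reproduce. The sketch below is entirely my own, not Muller's actual method - I'm using a simple Rayleigh-style statistic as the periodicity score : generate event times that are either uniformly random or periodic with some jitter, fold them at the trial period, and see how strongly the phases cluster.

```python
# A toy version of the periodicity blind test: score how strongly a set of
# event times clusters at one phase when folded at a trial period.
# This is my own sketch, not the statistic Muller's group actually used.
import cmath
import math
import random

def periodicity_score(times, period):
    """Rayleigh-style statistic: ~1.0 for perfectly periodic events,
    ~1/sqrt(n) on average for purely random ones."""
    phases = [2 * math.pi * (t % period) / period for t in times]
    return abs(sum(cmath.exp(1j * p) for p in phases)) / len(phases)

rng = random.Random(42)
PERIOD, SPAN, N = 26.0, 260.0, 10   # Myr; ten events over 260 Myr

# Ten extinctions at 26 Myr intervals, each jittered by ~1 Myr...
periodic = [i * PERIOD + rng.gauss(0.0, 1.0) for i in range(N)]
# ...versus ten extinctions scattered completely at random.
random_times = [rng.uniform(0.0, SPAN) for _ in range(N)]

print(periodicity_score(periodic, PERIOD))      # close to 1
print(periodicity_score(random_times, PERIOD))  # much smaller
```

With only ten events the random case can occasionally score quite highly by chance, which is exactly why Alvarez's blind comparison against many fake datasets was a better test than eyeballing the real plot alone.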
This is all very convincing stuff when you read it. On reflection, I'm nearly there, but I can't quite make the leap to conviction. It shouldn't be necessary to emphasise statistically weak effects like this. Muller himself admits on his website that not everyone agrees it's significant, and without this, the theory is dead. And yet there seems to also be a matching rate in large craters and, possibly, in magnetic field flips, so if I have to choose I'd say it's probably real.
You might be wondering how an impact could change the magnetic poles. Well, they came up with a remarkably ingenious mechanism to explain this. The magnetic field is generated at least in part through the molten outer core of the Earth. If the spin of the solid crust and mantle were to change, this would cause a twist in the magnetic field as the core lagged behind. The impact itself couldn't alter the spin enough to do this directly. But the after-effects just might. The dust thrown up from the impact, along with the soot from forest fires, could cause a significant temperature drop and increased snowfall near the poles. That shifts a not insignificant fraction of the Earth's water, altering its spin.
But is this just, err, spinning an elaborate idea to explain something of marginal significance ? Possibly. The discoverer of the magnetic field flip periodicity withdrew his claim, saying it wasn't significant enough (though he maintained that extinctions are periodic). Which ties in quite well with the old adage about beautiful theories being slain by ugly facts. Not though, as Muller repeatedly points out, that all wrong theories are devoid of merit.
It's Not Always Fun To Be Wrong
The whole point of science is to find out things you didn't know before. Sometimes the most wonderful and rewarding part of the process is to have your entire world view overturned by a new discovery - it's a thrill, an intellectual adrenaline rush. But not always. Nemesis deals with this in a blunt and incredibly honest way.
Muller notes that when the supernova theory (which his group supported) was proven wrong, this was still progress. It also wasn't their idea originally. It's easy to be pleased when you've disproved someone else, even if you agreed with them, which is another reason for peer review. If, however, your goal is to find out what's really happening and not merely disprove ideas, even this can be disappointing rather than elating. Muller makes it clear that he went through many long periods of frustration as he struggled to work out what was really going on. Wrong idea followed wrong idea with no light at the end of the tunnel.
Sometimes these days there seems to be a lot of emphasis on the idea that you can't ever prove anything in science. I profoundly disagree. The Earth was proved to be round by observations. Evolution was proved to happen by observations. The Universe has been proved - near as damn it - to be expanding by observations. The existence of Neptune was predicted by theory then proved with observations. As long as you accept the existence of an objective, measurable reality, then of course you can prove a theory. Which is why the ideas of the Universe being a simulation or an illusion are just unscientific gibberish (Note : I haven't read those links). Proof may be rare, but it does happen.
Muller also notes a difference between theorists and experimenters : if an experimenter publishes a result which is found to be wrong, it will damage their reputation, whereas if a theorist comes up with a wrong idea then it does them little harm so long as it's clever. The issue is, of course, one of competence. Experimenters are supposed to understand how to use the equipment correctly to get the right numbers - if they get something wrong, they've exposed their own incompetence*. Theorists are much more free to speculate.
* Which is not to say that they can't ever make mistakes. Muller recounts how Alvarez was delighted with him when he dropped and broke a $15,000 piece of equipment, because these things just happen even to the best of us.
Even for theorists, though, Muller notes how he felt free to discuss outrageously stupid ideas with some colleagues but not others. No-one likes exposing their own stupidity in front of strangers, but being able to discuss ridiculous ideas with trusted colleagues is vital. The thing is, when you've established a strong and prestigious reputation like Muller, you can get away with exposing the stupid ideas you came up with along the way. The rest of us can't afford to do that. Transparency isn't always such a virtue.
At first, Muller says he enjoyed the attention, despite already being a prominent scientist. But then he's honest enough to confess that he likes prestige and was jealous of Alvarez when he demonstrated that a meteor likely killed the dinosaurs. Maybe science shouldn't be motivated by such mundane trivia as money or fame, or have political concerns about whether publishing a paper will damage one's reputation - but in the real world all these things happen, like it or not.
Soon though, he began to realise the damage the attention was doing. He was, he says, secretly pleased that the New York Times ran an article insulting the Nemesis theory, even comparing it with astrology, as this meant he couldn't be accused of doing science by press release. This quickly soured. Local news agencies always go to local scientists for comment, but often they haven't read the original paper so they're only commenting on the press releases. Like Chinese whispers, errors quickly multiply, and the theory's reputation suffered.
Worse, and much more surprising, was that scientists were not just giving quick responses to requests for media interviews - in which case the errors would be understandable - but also writing rebuttal papers without having read the original. Or as Muller says :
"When colleagues asked me whether we had any response to the 'latest criticism', I often responded by saying, 'Yes, and you can read it in our original paper.'"Even the scientifc journals were not above dubious behaviour. The original draft of their paper contained a wonderful footnote :
"If and when the companion is found, we suggest it be named NEMESIS, after the Greek goddess who relentlessly persecutes the excessively rich, proud, and powerful. Alternative names are : KALI, the "black", after the Hindu goddess of death and destruction, who nonetheless is infinitely kind and generous to those she loves; INDRA, after the vedic god of storms and war, who uses a thunderbolt (comet ?) to slay a serpent (dinosaur ?), thereby releasing life-giving waters from the mountains, and finally GEORGE, after the saint who slew the dragon. We worry if the companion is not found, this paper will be our nemesis."I love it. It injects both self-doubt and self-deprecation. The journal, however, promoted the footnote to a paragraph and deleted all the names except Nemesis without the author's permission. I think that's pretty awful. Clearly, the people have been working hard at making papers unreadable for some time. Depressingly, problems in the media extend from the lowest tabloid to the most prestigious journal.
Nobody Reads The Literature Any More
This is a phrase oft-repeated throughout the book. And it's perfectly understandable, because most papers are unreadable. Now, obviously scientific papers don't need to feature lolcats. They have to contain the nitty-gritty details other researchers need to understand exactly what was done, and for heavily mathematical works there's only so much that can be done to make them readable. But there's absolutely no reason at all the comment about George had to be taken out, any more than the name of Boaty McBoatface needs changing. The occasional mild chuckle will do absolutely no-one any harm and an awful lot of good, because people will be far more willing to read papers thoroughly.
Whether the implication of "any more" - that people read more of the literature in the past - is true, I don't know. I suspect not. At least, I haven't noticed any obvious trend for older papers to be any more readable. On the other hand, publication rates are higher, so the sheer volume of material now makes reading the literature a daunting prospect.
This is a large topic. All I'll say is that the state of academic writing could be easily improved with a very few changes : the authors should feel free to speculate provided they are clear they are speculating and don't contradict the facts, colloquial expressions should be permitted in moderation, clarity should be emphasised over brevity, obfuscation should be seen as a black mark, a narrative flow should be encouraged where appropriate, and above all meaning and implication should be as explicit as possible - to the extent of assuming the reader hasn't been studying that particular topic for fifteen years.
Epilogue : Death of the Death Star ?
To summarise : it's a fantastic book. You should buy it. But the obvious question must be asked - is the theory correct ? Well... no, probably not. Any fool could have told them that their predictions of detecting the star in three months were wildly optimistic. You don't need hindsight to see that, just practical experience of doing astronomy. And yet, more than twenty years and many large-area surveys later, the silence from the sky is deafening.
Absence of evidence is not evidence of absence. Actually that's not entirely true : you most certainly can have evidence of absence; you can, with difficulty, prove a negative. You can't necessarily prove that Bigfoot doesn't exist somewhere, but you can prove he doesn't exist in this particular patch of forest. In this case it seems there have been enough surveys that an object with the properties of Nemesis would have been detected by now if it existed - or at least, that's the claim. Having read the book, anyone with any sense ought to get the message that science is hard, and jumping to conclusions without having read the literature (even if it is really tedious) is foolhardy. It's not always easy to spot where the mistake is.
Is it junk science ? Like hell. Is it drivel ? Screw you ! Regular readers will be aware of just how much time I spend fighting the pernicious myth of the dogmatic scientist - but when people behave like this, they are indeed being dogmatic. You're not helping me out here guys. And yet... Nemesis is a true story of science, warts and all. It's full of people at their best, when the evidence changes their mind, and at their worst, when they dig in their heels and refuse to listen to reason.
It's also a story of drive, determination, doubt, human fallibility, and sheer complexity. I am left in no doubt that the team did the absolute best that any reasonable human being could expect of them. But ultimately it's a reminder of the importance of the long game : trust no new results, because there almost certainly hasn't been time to check them properly, but don't dismiss them out of hand either. Treat media claims of "mystery solved" as though they'd reported the discovery of the Loch Ness monster. Real science is a process of continuous self-doubt and external criticism. Even if there is a key paper that solves a mystery, it usually takes years before you can be sure of that. Don't be hasty indeed.
I've really only scratched the surface with this pseudo-review. I haven't told you half as much about Muller's forthright attitude as I should, how he admits to thinking his supervisor was being overly-skeptical, or that he did the whole project as an excuse to work with his estranged son. Or how Muller himself felt jealousy and envy - even to the point of being relieved when a Nemesis candidate turned out to be false because his team hadn't found it. Nor have I said much about the continuous process of investigating new ideas, with the many knife-edge moments when it looked like the whole thing would come crashing down. I think it's wonderful. If I have to rate it, I can't give it anything other than 10/10.