Follow the reluctant adventures in the life of a Welsh astrophysicist sent around the world for some reason, wherein I photograph potatoes and destroy galaxies in the name of science. And don't forget about my website, www.rhysy.net



Saturday, 18 February 2017

An Open Letter To The British Political System

Dear British politics,

Have you ever watched Star Trek : Deep Space Nine ?  I doubt it, since you're an ill-defined abstract entity and not a person. But that's okay, because I have, quite a lot. The central plot revolves around the conflict between the Federation and the evil, all-conquering Dominion, a race of shape-shifters from a distant part of the galaxy. Of particular interest are their local allies, the Cardassians. Don't confuse them with the Kardashians (or vice-versa as I did), because that will cause no end of trouble. Look, I've made you a little guide to clear things up.



The Cardassians are a species our Federation heroes are well acquainted with. Once a fearsome power in the region, their influence has steadily diminished. A recent war brought them to their knees, leaving the conquerors on the verge of becoming themselves the conquered. Then, with classic storybook timing, a rogue, exiled leader returns in a spectacular coup, not only seizing control of Cardassia but forming an alliance with the Dominion. Instantly the tables are turned, and a resurgent Cardassia goes on the warpath. Ultimately of course, because this is Star Trek, this wholly amoral course leaves Cardassia a broken ruin.

The rogue leader is a man named Gul Dukat (pictured above), who is someone the Labour Party should be very familiar with. Sly, duplicitous, eloquent and erudite, a consummate politician and (eventually) wholly corrupted by his own greed and lust for power, but also a man of genuine and deep emotion and great intellect, he signs away Cardassian freedoms to restore his people's former strength. Riding the tiger of his far more powerful ally is a task uniquely suited to a man of Dukat's dark charisma, which his successors are not able to emulate. The whole precarious edifice eventually, inevitably, collapses. Chaos ensues. Our Federation heroes emerge victorious.

A bit crude, but then so is this analysis.

Of course in reality Tony Blair is not Gul Dukat (nor is Theresa May); America is not the Dominion; the alliance didn't leave Britain a devastated wreck. Do not make the common mistake of thinking that analogies are fatally flawed because of their differences - that's why we call them analogies. They are useful for drawing attention to similarities, but we should not get carried away - we should neither infer that, because Gul Dukat wanted to destroy the Bajorans, Tony Blair was on a mission to annihilate his enemies, nor fail to acknowledge similarities and their obvious inferences where they exist. Dukat lied to himself about his goals just as Blair did, but that does not mean Blair is secretly a genocidal maniac. He may really believe in establishing peace as a just cause, but he's delusional if he really believes he's the one to do it.

"Delusion", by the way, is an interesting term. Clinically it is not an absolute. It simply means believing something which is utterly contrary to one's other pre-existing beliefs.

The wheel has turned and turned again since the Blair era. Blair jumped in bed with a hugely unpopular American president (how naive we were !) out of sheer self-serving ego. Theresa May is doing the same but with someone infinitely worse for a more complicated reason. Gul Dukat was wonderfully self-deceiving, genuinely convincing himself he was doing good as a cloak for his own innate evil. Only under immense pressure did the façade finally crack and reveal his true self; Blair's might not even be a façade at all. He isn't hiding some deeper inner darkness, he genuinely and sincerely believes he was doing the right thing.


Bush's America was not much like the Dominion. Trump's, at worst, threatens to become more similar. Certainly Trump and his colleagues' utter disregard for the virtues of the truth could be cast as analogous to the inherently untrustworthy nature of the shapeshifters. Trump is barely a hair's breadth from post-revelation Dukat or the Dominion leaders, a Bond-esque supervillain hell bent on villainy for villainy's sake.

Yet you, oh British politics, have utterly failed to learn the lessons of this. Like Dukat's successors, our current leader is far less competent - but determined to try the same trick with a more dangerous and deadly ally. And whereas Blair didn't strengthen the Anglo-American alliance out of desperation like Gul Dukat did, that's exactly what May is doing. Cardassia became an impoverished state through its aggressive military policies; Britain is making itself a pariah through Brexit, which in turn is driving us into the arms of Trump. If you haven't watched Deep Space Nine, now might be a good time to start. You can get it on Netflix and claim it on parliamentary expenses, I expect.


Instead of fighting this self-destructive course, almost the whole of you lot seem determined to win the race to the bottom. "We cannot go against the referendum just because we do not like it", you say. And you are absolutely right ! There are a whole bunch of much better reasons you can go against the vote. Shall I count the ways ? It was non-binding. The campaign was based on lies admitted by the Leavers almost instantly. Most other polls show a preference to remain. The margin of victory was so narrow that Leavers pre-emptively suggested a second referendum if it had not gone their way by the same margin. The decision will have profound repercussions for decades. There was a very strong expert consensus in almost all fields that this is a bad idea. It wasn't known that we'd be making deals with a figure as unpopular and monstrous as Trump as a result of this. There are more, but that should suffice to give any rational individual pause for thought.

Now, only a fool would completely ignore the widespread disillusionment with the E.U. - the result of the vote must be in some sense "respected". But what you're doing, you the Mother of Parliaments, is far beyond merely respecting people's legitimate concerns about the nature of the E.U. - you are opting for the worst form of Brexit possible. There wasn't any option on the ballot paper for that, there was only the question of E.U. membership. It was up to you to make the best of this situation. And there certainly wasn't any option saying we should sign a deal with the devil.

What you seem to have got very confused about is the nature of protest. Yes, we have the right to protest over any damn thing we like, be it the poor quality of Vin Diesel's acting or the oppression of minorities - but let's not pretend for one minute that the two are morally comparable. They are not. One is a subjective opinion about entertainment we can easily avoid that does no-one any harm, the other is a factually destructive force. Just because we're offended by bigotry and are protesting about it, it does not follow, as you seem to think, that we are merely angry protesters who can be dismissed as sore losers. This is not whining for the sake of whining or even simply because of strong moral views - it's protesting a course of action which, as sure as anything can ever be, will cause all of us harm. Protesting against xenophobia is not the same as protesting because we're unhappy about closing the local library.


Xenophobia played an enormous role in the Leave campaign, but now you seem absolutely determined to do bugger all about it. Indeed, by embracing any deal with Trump you are also embracing bigotry. The man is a monster; you cannot simply say, "but he's American" as though that was some sort of innately virtuous thing, because it isn't. More on that later.

I reserve especially harsh criticism for you in the Labour party, which I have so long supported. You are fully aware that Brexit will cause economic hardship and moral bankruptcy, and you are nearly all equally aware that your leader is a moronic fool. Yet most of you decided to vote for Brexit anyway. And I do not shrink from ad hominem attacks in the case of both Trump and Corbyn, since my criticism of them is not just aimed at their policies, but directly and specifically at the people themselves. For while it is a good guideline that only small minds discuss people, there must surely be an exception in the case where those people are our leaders and we find their whole character to be wanting.


Trump is a villainous monster. I despise Trump not only because of his policies (which are despicable) but for his whole character : his "special snowflake" temper tantrums, his bigotry, his lying and constant bullshitting, his raging ego and megalomania. Those are qualities I do not want the leader of America to have regardless of his stated policies. In Corbyn's case, the situation is much more extreme. With the notable exception of Brexit it is not his policies I oppose at all, but solely his character. The brown-coated little despot refuses to acknowledge that anyone but himself can have principles, or that those principles could be different from his own. He will not give up power even when he doesn't have any. The man is not a monster like Trump : a more appropriate adjective would be "pathetic". You have to remove him, somehow, otherwise Blair's prediction of annihilation at the next election looks more and more likely. You do not have to rally around him simply because he's your leader.

There is a common theme among all of these problems, one which has been gnawing away at me for some time without my being able to express it adequately. An appropriate description finally struck me while reading Plato one morning on the metro. His method of examination by question and answer often employs trivial, even tautologous statements as a beginning. Yet in the process of discussion those seemingly trivial statements are often revealed to be anything but, sometimes shown to contain profound truths while sometimes demonstrated to be self-contradictory.

What I am driving at is this : are freedom and democracy inherently good things ? Plato might suggest the more basic starting questions : don't good actions always lead to good outcomes ? Can anything be called good which leads to a bad outcome ? And perhaps it might, if that bad outcome eventually led to an overall improvement, if it taught a lesson in how to avoid such things in the future; perhaps the immediate outcome is not the whole story.

We could debate such matters endlessly - and these simple questions most certainly do not have simple answers - but my point is that neither freedom nor democracy are necessarily virtuous in themselves. Not if they lead to things which are wholly negative, that cause nothing but suffering. And while it's good to let people make their own mistakes and learn from them, letting them fall off a cliff could hardly be called a sensible way of educating them about gravity. Furthermore, although significant levels of "Bregret" have been expressed in the wake of the vote, there is also a hard core of Brexiteers and Trump supporters who will never learn. They would keep shouting about our glorious future even as everything they sought comes crashing down around them. This is not a cloud with a silver lining, a harsh but necessary lesson, it is simply a disaster. One we should avoid.


Political leaders supporting Brexit are a consequence of thinking that democracy is innately virtuous, and that the more democratic the process, the better the result must be. British MPs supporting Trump are a consequence of thinking that America is innately virtuous. And Labour MPs still supporting Corbyn are a consequence of thinking that their leader is somehow innately virtuous too; if not in character then merely deserving of support because he's the leader, which has much the same outcome.

But these things are not innately virtuous. We should not value freedom and democracy just because they are freedom and democracy, but because we believe those things will enable us to lead good lives that we want to lead. They should be seen as means to an end, not an end in themselves. Freedom that leads you to unjustly hate and demonise those who don't belong to your social group is no virtue at all, nor is such discrimination somehow more palatable because it was enacted by a democratic vote.

There are those who quite rightly raise the issue that trying to suppress freedom of expression is essentially the very thing that we profess to despise about fascists, that if we do this we will be no better than the fascists themselves. I have to disagree. Suppressing bigotry, hatred and intolerance is not morally equivalent to allowing bigotry and intolerance and the infinitely greater suppression of free speech that fascism entails. How hollow it sounds to defend freedom of expression while allowing that freedom to be used to neuter itself !

Or as this article puts it :
To be blunt: Nazism is democracy’s anti-matter. There is nothing about the ideology or its practice that is anything but corrosive to democratic institutions. 
Fascism is a cancer that turns democracy against itself unto death. There is no reasoning with it. It was specifically engineered to attack the weaknesses of democracy and use them to bring down the entire system, arrogating a right to free speech for itself just long enough to take power and wrench it away from everyone else. Simply allowing Nazis onto a stage, as the BBC did when it let British National Party leader Nick Griffin sit and debate with political luminaries on its Question Time program, is to give them an invaluable moral victory.  
In using this tactic, Nazis abuse the democratic forum to illegitimately lend credence to something that is otherwise indefensible, the equality of the stage giving the unforgivable appearance of “two sides” to a position that is anathema to public decency. This is not because Nazis love democracy or free speech, but because they know how to use this strategy to unravel them.
The very election of Trump, a man who inspires white supremacists (intentionally or not, it hardly matters which), demonstrates the flaw in the democratic process. The consistent lies and hatred stirred up by the media demonstrate the flaw in allowing absolute, unrestricted freedom of speech. No, you can't shut people up just because you disagree with them. You can shut them up because they're causing people to suffer needlessly. Actual physical suffering, of real people. Racism and xenophobia are not things I merely disagree with or dislike, they are things which have been factually proven, time after time after time, to be damaging and destructive. There's no moral superiority to be had by allowing these things to flourish unchecked.

"Flourish unchecked" is a phrase very carefully chosen, for while freedom and democracy are not intrinsically morally perfect, they are certainly preferable to the alternatives. A world of censorship, tyranny and control is the goal of the fascists, not mine. Indeed as a scientist I will tell you that dissenting voices are, ordinarily, not something we have to endure or even merely tolerate, but actively encourage. We should seek out alternative viewpoints so that we may discuss them, investigate them and see if they fare better or worse than our existing ideas.

But these are not normal times. We aren't having heated arguments with people we passionately disagree with - we are in the political equivalent of replacing members of the Royal Society with members of the Flat Earth society. This is not virtuous, this is stupid. At the very least we should not take it for granted that normal responses will extricate us from this radical situation, any more than if we were to propose that "everything will work itself out in the end" if lunatics took over the asylum. It's not that I think the flat Earth is a wrong idea because I don't like it, it's that I don't like it because I know for certain that it's wrong. So it is with the political state of affairs also.

So we must be careful, but not so cautious as to avoid any action for fear of offending the obviously bigoted. The bigoted cry, "political correctness gone mad !" in an attempt to censor anything they do not like, but are all too happy to cry, "freedom of speech !" as a tool of oppression. For God's sake let's stop being so damn timid about the whole thing, especially our MPs who seem insistent on treating bigotry and hatred as though it was something that deserved protecting for the sake of freedom. It isn't. For the sake of freedom, it must be destroyed.

The line must be drawn incredibly carefully and with constant vigilance and re-examination, but it must be drawn. We cannot pretend that simply re-stating our arguments again and again, be they expressed kindly or vitriolically, will somehow eventually start to work and win these people over, for that would only work if they cared about the facts. They do not.


Which is not to say that we will have a better society only by reform of the media or freedom of speech laws. No, that is another task. All I'm asking for here is damage limitation. First we must rescue ourselves from the current crisis before averting the next one.

So my message to you, the political system of Great Britain, is quite simple : grow a bloody backbone. Fight this madness. Don't sell us down the river to a monster. Don't be fooled into thinking that democracy and freedom are their own rewards. Don't let democracy become tyranny by mob rule, for the narrowest of wins in a single vote to determine the course of politics for decades. Don't let the media continue to pour out a stream of xenophobic lies in the name of free speech. Seek to end injustice, not support it. That is the ideal to which you must strive : a better world, and though freedom and democracy are critical to that goal, they are not the goal in and of themselves. They are tools, and they must be handled correctly - or fascists will not hesitate to weaponize and pervert them just as they always have.


Tuesday, 14 February 2017

You Can't Not Prove It Wasn't Me Who Didn't Do It


Science is about working out how the world actually functions. It doesn't have any truck with hocus-pocus mumbo-jumbo or fairy stories about some magical deity running the show - it's about solid, hard facts, evidence and proof. Nice, comfortable, iron-clad certainties all the way ! On the other hand of course science doesn't actually know anything at all because you can never prove a theory, you can only prove what doesn't work.

Both of these are extreme positions and like most extreme positions they are extremely silly. The first is typically adopted by those with an anti-religious agenda; the latter by those who are unduly skeptical, overly-enamoured of pure theory and those who don't seem to realise that science is not just about disproving things but that the whole point is to try and figure out how the world actually does work.

In fact, contrary to what many people believe, yes you can prove a theory - provided you're careful to define just what you mean by "prove". And "theory" for that matter.


Alternative Facts Are Just Lies With Better Marketing



Conventionally, evidence is not the same as proof. Proof, in the strictest sense, would mean absolute, inviolable certainty. Once something has been proven, it can never be un-proven. Proof, like pregnancy, is binary - you either have proof or you don't. Evidence, on the other hand, is merely information consistent with one model while disfavouring others. You could have mountains of evidence supporting one idea but not actual proof. Evidence is a continuous variable - you could have just a little bit or a whole overwhelming mountain of the stuff or something in between. Sometimes it's hard to choose between ideas if they have equal amounts of supporting evidence, but this does not mean that all ideas deserve equal weight.

In the strictest sense, absolute proof of anything is impossible - even in mathematics. One can always fall back on the idea that all of reality is illusory and we're all being deceived by mind-altering drugs, or some godlike creature running a simulation - we can't trust our senses or our memories, not completely. And this is true. Everyone should stop to think about these ideas from time to time... but in everyday life, it's bloody useless. You can't go around doubting whether your socks are really real or if gravity exists.

So for the purposes of both everyday linguistic simplicity and scientific inquiry, we make some simplifying assumptions. We say that we can trust our senses and memories just enough - we might have to keep checking, but they aren't being continuously deceived. They might not give us a full view of the universe, but what they do tell us is in some sense real. The world exists independently of our senses and we can, if we're careful, measure it with something approaching objectivity. It is real, not an artificial, whimsical construction that could be shut off at any moment, and it doesn't depend one whit on our subjective feelings.

Questioning this most basic of all scientific assumptions isn't really much use outside the philosophy classroom but it's good to be aware of it. The kind of people who insist that science is all about facts and certainty have forgotten this, and usually go several steps further.

But within this assumption it's obvious that there are things called "facts" which can in some sense be meaningfully "proven". There is something called a "wall" which, if I bang my head against it really hard, will hurt, dammit. But what is a wall ? Ahh, there things get complicated. Modern theory has it that the wall is made of atoms consisting of protons, neutrons and electrons. Electrostatic forces between the atoms of my head and the wall prevent the two from overlapping. Perhaps, one day, our theories about the nature of matter will change - but I can state with certainty that something called a wall will still exist. Its existence has been proven. It is a fact. Just as something called gravity definitely does exist, but our ideas about it have changed markedly over the centuries.


So long as you accept (at least the assumption of) reality as objective and measurable, you will inevitably have to deal with facts. Facts are easy. I measured this temperature when I stuck the thermometer up my cat's backside; I saw that many people at Trump's inauguration*. But facts are snapshots - they tell us exactly what reality was like in extremely specific moments and circumstances. They don't tell us, by themselves, how the world actually works or what's going to happen next. Facts describe how things are, not the underlying causes of change.

* Finding the connection between the temperature of a cat's anus and the number of attendees at Trump's inauguration is left as an exercise to the reader.

For that we need a model, a hypothesis, a theory. Some description of the way the world works that agrees with the facts and lets me predict what will happen in the future (or other circumstances) provided I have sufficient information. Hypothesis and theory are terms which are often used interchangeably, but it's useful to differentiate : hypothesis generally means a model which has only a little supporting evidence (or none at all but is at least consistent with the established laws of nature), whereas a theory has lots. There's no rigorous definition of what constitutes little or lots, but the relative difference is important. And for that matter scientists don't always use these terms very strictly, but I will do so here in order to make things as clear as possible.

So given that there are hard facts, and models make predictions about facts, can we ever prove a model is correct ? As a hypothesis gathers evidence and becomes a fully-fledged theory, can it ever accumulate so much evidence that it transmutes into true proof and the model itself is accepted as fact ?

Yes, it can. But the limitations of that theory must be very, very clearly specified.


You Can Prove A Theory, But Don't Be Hasty

This little dude makes for a surprisingly good metaphor.
Suppose that you leave a big plate of delicious chocolate chip cookies out on a table. Every day for a week, you find there's one missing and a trail of crumbs leading across the floor into a little hole in the wall. Your conclusion* ? A mouse has been stealing your cookies ! The little furry bastard ! That's a very solid hypothesis : it fits everything you know about the behavioural characteristics of mice and the structural properties of cookies. You haven't actually seen the mouse though, so it's not a theory. Maybe it's a rat. Or a snake. Or a particularly industrious snail or a very small elephant. These hypotheses aren't as good as the mouse hypothesis but in a strict sense they are still valid.

* Wait, conclusion ? Yes. A conclusion is just what you think is going on, regardless of the strength of the evidence. It's usually, but not always, associated with a chain of logical reasoning.


"These are small, but the ones out there are far away".
So you stay up one night, keep very still and wear some night vision goggles. And lo and behold, you spot the mouse !

Your hypothesis has now become a theory. The evidence supporting it isn't merely consistent with your pre-existing knowledge of how the world works - it's independently verified and extremely strong evidence. You could get even better evidence with a video recording, so that you can check it repeatedly and show it to your skeptical colleagues.

You might wonder if your theory is now already a proven fact. Well, no. You've only observed the mouse this one time. Even if you keep observing the mouse time after time and keep records with a thermal-imaging camera and get multiple witnesses (wow, you're really going for this !) you won't ever be able to prove it was the mouse that did it before your observations began. So you're proving there's a mouse that is now stealing cookies; you have "only" a theory that the mouse has been stealing cookies for the past week. The oft-derided term "only a theory" is both right and wrong here : a theory is not as good as proof, but it's still better than a mere hypothesis.
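The way repeated observations pile up evidence without ever quite reaching proof can be sketched numerically. Here's a toy Bayesian update (my own illustration, with entirely invented numbers - nothing in the cookie story specifies them) : each night's sighting multiplies the odds in favour of the mouse, but the probability never actually hits 1.

```python
# A toy Bayesian update with invented numbers (my own illustration).
# priors : initial degree of belief in each hypothesis.
# likelihood : probability of finding the nightly evidence if that hypothesis is true.
priors = {"mouse": 0.7, "snake": 0.3}
likelihood = {"mouse": 0.9, "snake": 0.2}

belief = dict(priors)
for night in range(7):  # a week of missing cookies
    # Multiply each belief by how well that hypothesis predicts the evidence...
    unnormalised = {h: belief[h] * likelihood[h] for h in belief}
    # ...then renormalise so the beliefs still sum to 1.
    total = sum(unnormalised.values())
    belief = {h: p / total for h, p in unnormalised.items()}

# The mouse hypothesis ends up overwhelmingly favoured, but its
# probability remains strictly less than 1 : evidence, not proof.
print(belief)
```

Each extra observation multiplies the odds by the same factor (0.9/0.2 here), so confidence grows rapidly but only asymptotically - which is exactly the sense in which evidence is continuous while proof is binary.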

Of course, this particular theory is really an excellent theory. There's not really any sensible reason to doubt it - you have a great explanation for the current observations and no evidence that anything else was ever at work. The only way to prove it, however, would be if you suddenly discovered a CCTV camera had recorded footage of the entire week showing that the mouse was entirely and solely responsible. Otherwise it's still just about possible that the mouse only stole one cookie and the rest were taken by a snake wearing a tutu.

A hat is fine too.
This means your theory has retroactive predictive power - it predicts what was happening in the past. If you're very lucky, it's testable - you just need a source of observations covering the previous week. If that footage exists and shows the mouse at work, then your theory that a mouse ate all the cookies is proven; it has transmuted from a model into solid observational fact.

But you can extend this to predict what will happen in the future. In this case, your theory immediately becomes unproven. You can't travel into the future, observe the mouse and travel back again - and if you did, you probably shouldn't be allowed a time machine because you're much too boring and responsible. In a sense, all theories about the future* are inherently unproven by definition**.

* Or indeed in other rooms with mice and missing cookies right now. The important point for predictions is not that they predict that things will happen in the future but that they state what new observations will uncover, even if those observations were taken years ago but never examined.
** Well... within reason. If your theory depends on only a very limited number of variables you can get away with calling it proven - as we shall see, few would argue that we can't prove the existence of gravity or reliably extrapolate its continued existence in the future.

But your theory is only unproven for now, it is not inherently unprovable. You just have to wait and see what happens. However, because you can't guarantee exactly what will happen in the future, you need your predictions to be more specific. You already know the mouse was alive in the past to eat the cookies (and you only specified a mouse, not a specific mouse) but you can't be sure how long this mouse will live. So your prediction, for a provable theory, would be something like, "a mouse will continue to emerge from that hole and steal cookies provided everything remains as it is now - that is, there must be at least one mouse who lives in that hole who is willing and able to eat cookies".

This safeguards the crux of your theory against unforeseen eventualities. If you'd simply said, "the mouse will keep eating cookies", the mouse might not be hungry for a day or so, or die, or the hole might be blocked up or the whole house demolished. Your theory would be disproven, but then your theory was pretty stupid because it assumed the mouse was able to magically overcome impossible difficulties. It would even have to invoke necromancy if you don't specify that only living mice will eat the cookies.

This is of course the Death of Rats, but don't worry, he also does mice. And gerbils.
Of course in everyday life being so specific is not necessary, because we all have common knowledge and assumptions that we don't actually need to state so explicitly. Not so in scientific research where the whole point is that we don't yet know the full story. Here it helps to be as specific and explicit as possible - otherwise a single contradictory data point could be taken as falsifying your theory even though the core of it was basically correct.

To be useful, a theory should have both general and specific components. If it only works for a unique case - only a single mouse in the entire world likes cookies - it's not really going to have much impact. Rather it should say that some fraction of mice like cookies and are able and willing to eat them in the right situations. That way you can apply this to different situations and make a useful prediction : there's a mouse here but there's also a cat, so the mouse probably won't risk stealing food unless starving or the cat leaves. Hence the "mice eat cookies" theory isn't wrong, it's just that the complexities of it mean that not all mice eat all cookies in all situations.

In simple hypothetical examples, it's not so difficult to turn a theory into a fact. This raises two questions : does a theory automatically cease to be a theory when it's proven true, and can this really happen in practice ?


Not Everything Is Just A Model, But Some Things Are

I argue that the answers surely have to be "no" and "yes" respectively. The mouse theory wouldn't lose its predictive power once vindicated : one could use it again easily enough in similar situations. This is much like the theory of evolution - the very basic model is known with certainty to be correct, since speciation has been observed to actually occur. It is both a model of how things occur and it is also the truth, even if all the nitty-gritty details aren't fully understood. That the world is round is similarly both objectively true but can still be used as a predictive tool. Within the ever-present assumption of reality as objective and measurable, these things have to be regarded as true.

Although, on the other hand...
But while the theory of evolution and the spherical nature of the Earth are equally certain, they do not possess equal predictive power. The shape of the Earth is a fact which can be measured to arbitrary precision, hence it can be used as a tool to give predictions of arbitrary accuracy. Evolution, on the other hand, has an unknown number of variables, so its predictive power is far less.

Here the ambiguity of the word "theory" is felt with full force. Most theories are not known with certainty to be true - the evidence for them is a matter of degree. General relativity is an excellent example of a theory that's tantamount to truth, but couldn't actually be called "proven". The evidence that the model works is overwhelming, but it's also known to have serious flaws and few people regard it as a complete description of space-time. It's possible that its most basic tenet of gravity as the result of curved space is indeed true, even if its detailed description of the curvature is wrong or incomplete. We just don't know yet.

Which also illustrates another level of ambiguity. We know for certain that gravity exists, even if we don't know precisely what it is. We also know for certain that it's related to mass and causes things to accelerate towards each other - so we can be certain, absolutely certain, that when a cat knocks things off a table they'll fall down. So gravity is both fact and theory in a way that evolution and the spherical world are not; we are certain of some aspects of gravity, but (arguably) highly uncertain about some very fundamental aspects of it*.

* Or at least we were 100 years ago, when general relativity was not widely accepted.



Conclusions

Yes, you can prove a theory - provided you're careful. You must remember that you're operating within the domain of scientific knowledge and that you're assuming reality is objective and measurable. If you doubt this you can doubt everything and potentially learn nothing. This is not necessarily wrong, it's simply beyond the remit of science. So you can't invoke the "reality is illusory" explanation to cast doubt on anything if you want to remain scientific. Some people would argue that only knowledge obtained scientifically is knowledge of any kind; I am not one of those people. I simply say that there is stuff which is scientific knowledge, and stuff which isn't. Whether the latter is any good or not is another story.

You must also be careful about your definitions. I have chosen "theory" to mean a model which gives predictive power. You could fairly argue that when something is an objective, measurable fact (even when it can be used to make predictions), it is fundamentally different from a theory. That's perfectly reasonable, but not the definition I'm using here. If you want to explicitly differentiate between these observationally proven models and uncertain theories, go ahead, but adequate terms are hard to find in English.

Hard facts certainly do exist, within the framework of scientific knowledge. By extension, so too do irrefutable models. At the other end of the scale there are statements which are utterly false and models which are simply wrong. But most things are infinitely more ambiguous : we know evolution happens but not all details of the process are understood; we have a damn fine theory of gravity but know it's flawed; different theories can have different levels of predictive power even though they've been proven to be true. There are shades of grey with only occasional streaks of black and white.

Science is built on facts but it does not deal exclusively in facts; the usual ambiguities do not make well-established theories untrustworthy or useless. Most theories are not proven truths. Rather, view them as decision-making tools. The best theory typically offers the best possible decision given all the knowledge available at the time - it may not turn out to be correct, but it's still the best choice you can make.

Ultimately though, science is about the search for truth. Not just getting better and better models, but actually learning how the Universe really works : those rare streaks of black and white are its principal goal. At very simple levels this is easy and has already been done many times - what were once controversial models are now accepted, irrefutable tools. As we progress our models get more sophisticated and harder to verify, and really big breakthroughs may take longer and longer. We may have to content ourselves with smaller steps, but we can still prove and disprove important aspects of different theories. Still, I for one am not interested in perpetual uncertainty. For me, the whole point is that one day - most likely long after I'm gone - someone will eventually be able to say, "Ah-hah ! Now I finally understand what the hell is going on."

"That'll do pig. That'll do." would also be acceptable.

Sunday, 12 February 2017

Fifty Shades of Garching

A career in astronomy almost inevitably carries with it the curse and boon of travel. From the tropical jungles of Puerto Rico to the glaciers of Alaska, from the burning sands of Socorro to the Alpine range of Switzerland... and now to the grey wastes of Garching. This may well be the least exotic place I've ever been to for any reason ever. And that includes Milton Keynes, for goodness' sake.

Cue the least exciting travel blog ever.
Garching is a small, pointless town in southern Germany which for some utterly unknowable reason is a centre of high technology. Astrophysical institutes, cyber technologies, plasma research, General Electric... it's got them all. With a population of just 16,000 you might think this makes it some sort of scientific Mecca, a researcher's paradise without equal, where scientists frolic freely in gay meadows with sunshine and rainbows and lollipops in abundance.

Well, it's got this thing, at any rate.
What it actually is is a bunch of large, grey buildings under a large, grey sky in the middle of a large, grey field. The key point for the designers was that everything should apparently be large, grey, and above all soulless. Why this particular luckless field was chosen to be a centre of European scientific excellence I've no idea. I'd like to say that a cold, grey wind moans across a barren icy waste, but that's too exciting; the poor place can't even be described as a "blasted heath". It's just a big, boring field with some big, boring buildings stuck in it. Even the tea is a sort of grey colour, and I have to reluctantly concede that it's even worse than American tea - which I've long regarded as (in the words of Douglas Adams) a liquid almost, but not quite, entirely unlike tea.

I mean now this is just being silly.
The European Southern Observatory building isn't grey. It's brown, which is not much better. Inside it feels uncannily like an airport terminal, except that it's harder to navigate. The design is a weird combination of three circular buildings linked by covered walkways very much like the boarding ramp to an aircraft. Even after a week I still had to consciously think which way I wanted to go whenever I left the office. As for the wider campus, that reminded me of nothing so much as the grounds of a hospital. Not exactly inspirational.



The under-construction "Supernova" planetarium sticks with the brown theme but somehow contrives to make a weird-looking building look reeeeally boring.
My visit was for QA2 training for ALMA. The Atacama Large Millimetre Array is a billion-dollar radio telescope in Chile that's searching the skies at hitherto unexplored frequencies with exquisite sensitivity and resolution. That's what they tell me anyway; while ALMA might pay my salary I can't honestly claim to really know anything about it. Arecibo works at much lower frequencies, and while there's certainly some broad overlap between the two research areas the technical differences are enormous.

At least the ALMA facility itself is more photogenic.
Arecibo is a single-dish telescope. Point it at the sky and it will measure how bright the sky is at that point at whatever particular frequency it's tuned to. Move it around and record the brightness at different points and you can construct an image. The measurements are real - you directly determine the brightness just as with an optical telescope. Oh, you can process the data afterwards in some fancy ways and get different results, but at the most basic level you can (with proper care) actually know how bright the sky is at any point you look at. Easy peasy, relatively speaking.

Not so with an interferometer like ALMA. Interferometers are arrays of telescopes that are combined to produce images with extremely high resolution, equivalent to building a single telescope as big as the largest distance between the individual telescopes. Well, sort of. Not exactly.
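The "sort of" equivalence rests on a standard rule of thumb : the diffraction-limited angular resolution of an aperture is roughly the observing wavelength divided by the aperture size (or, for an interferometer, the longest baseline). A quick sketch of the numbers - all illustrative values, not official instrument specifications :

```python
# Rough rule of thumb : angular resolution ~ wavelength / aperture size,
# in radians. For an interferometer, "aperture size" becomes the maximum
# baseline. All numbers are illustrative, not official specs.
import math

def resolution_arcsec(wavelength_m, diameter_m):
    """Approximate diffraction-limited resolution in arcseconds."""
    theta_rad = wavelength_m / diameter_m
    return math.degrees(theta_rad) * 3600.0

# A ~300 m single dish observing the 21 cm hydrogen line (Arecibo-like) :
print(f"Single dish    : {resolution_arcsec(0.21, 300.0):.0f} arcsec")

# An interferometer at 1.3 mm with a ~16 km maximum baseline (ALMA-like) :
print(f"Interferometer : {resolution_arcsec(0.0013, 16000.0):.3f} arcsec")
```

Thousands of times sharper, from dishes that are individually tiny by comparison - which is exactly why anyone bothers with the fiendish mathematics that follows.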

The thing is you can't just wave a magic wand and shout, "ALACKAZAM !" and hope to improve your resolution. No, your incantations have to be much more mathematical. You've got to combine the signals from each antenna somehow, and therein lies the difficulty. It would be an understatement to say that the maths behind this is fiendishly difficult, but the bottom line is that interferometers do not measure the brightness of the sky. The images they produce - though marvellous - are reconstructions based on their raw data. They are not direct measurements; you cannot obtain "the right answer" with an interferometer because other ways of processing the data give different results which are equally valid.

As usual, don't go nuts with this. The images interferometers produce are wrong but only for a given value of wrong - they are certainly not fictitious. The point is that with the proper care an interferometer can give you wonderful results if you know what you're doing. I can and have taught students to observe with Arecibo in about an hour or less, whereas with an interferometer it might be optimistic to say "in about a year or less".

Students remotely operating Arecibo from Green Bank.
Which is where QA2 training comes in. ALMA has the incredible and laudable goal of producing a fully automatic pipeline for its observing, so that eventually the scientists will get their data delivered to them in a "science ready" state - that is, corrected so that the images are the best representations of the sky that they can possibly be. What they'll be looking at may not be the whole story and it won't be 100% accurate, but it will be good enough. And this will be done entirely by algorithms with no need for the experienced users to step in and check anything.

Such is the goal. Remarkable progress is being made toward that extremely complicated end, but for now, it's still much easier for a human to gain the necessary experience. So ALMA data undergoes several levels of "quality assessment". First there's "phase 2"*, which consists of coordinating the observing scripts between the scientists and the telescope operators. Then when the observing starts there's QA0** at the telescope itself, making sure all the equipment is working and the weather is nice and so on. No-one seems to have any idea what QA1 is at all - it's presumed to have been lost in the blizzard of acronyms (that list doesn't cover half of them) the telescope generates with a profligacy equalled only by its raw data. After that comes QA2 - checking that the data has reached the requested sensitivity and resolution, followed very occasionally by QA3 when the scientist finds a problem with the data that the QA2 staff didn't.

* Yes, I realise that it doesn't make sense to start with phase 2. I have no idea what phase 1 is or if it even exists.
** Which presumably means there's also a phase 0, further adding to the things I have no idea about. Yay.

As you'll have gathered, they do love their corporate jargon, do ALMA. They routinely say things like, "The DRM at the JAO will open a JIRA ticket and you can download the AOT file and inspect it with the OT to determine if it's TDM or FDM data in TP mode and you can find out lots of other information on SCOPS with communications to the P2G but obviously it's easier if you've already looked at the project during Phase 2". Honestly, they're one step away from, "By streamlining the paradigm we can ensure increased productivity through synergy maximisation but let's talk offline !"

Well, alright, it's not that bad. They're still scientists after all. Nice people that it would be hard to dislike. But while other scientists love to moan about minor bureaucratic procedures, here they don't seem to realise just how incredibly corporate things have become. Maybe this is the future of astronomy, but I hope not.
Learning QA2 procedures if you're already a dab hand at interferometry is probably not that bad. You can certainly learn it inside a week, though I'd hazard you might still think there's some scope for improvement. But what if, like muggins here, you've had barely any experience with interferometry at all ?


Imagine, if you will, an enormous online technical manual from which you need to extract a few key points. Some of these are easily visible but most are hidden in obscure hyperlinks. You can control+F within individual pages but you can't search the site as a whole. Some points are labelled in an obvious way but others require reading and understanding to extract the relevant detail. You'll need at least a dozen different pages from which to extract all the information, but maybe not all at the same time and you can't minimise windows or have multiple desktops (because frak you, that's why). Some information can't be extracted directly from the manual but only by running a series of long, complicated tasks which then require you to check yet more extremely long web pages that are incredibly poorly-labelled and use a program which just plain doesn't work for no reason. The result of all this is access to data which doesn't interest you and you're not allowed to use in any case, and the absolute best you can hope for is that someone will eventually write you a thank-you email.

It's not much fun at all really. In fact it's a lot like completing a difficult mission in a computer game and having it crash at a critical moment - you've got to do the whole bloody thing again. Except instead of monsters and dragons, the most interesting thing you'll see is a fuzzy blob that will eventually be tremendously important to someone else. Not you though.

It got better throughout the week, in that the "long, complicated tasks" became short and relatively easy. Unfortunately the rest didn't improve much. I simply cannot conceive of why there's a version of Linux where you can't minimise windows or even see which ones are active in the taskbar (let alone why it's running at ESO). That alone made trivial tasks very difficult. It doesn't help when those tasks - like retrieving values from files - are made needlessly more difficult by those files being given meaningless names like "textfile.txt" (yes, literally that). Or when there's no clear list of what parameters you're eventually going to need from those files (you often need several so you may as well get them all at once), or which directories they're in (you need at least a dozen different windows open) or a linear set of instructions to follow.

The really, viscerally frustrating thing about the process was not how tedious it was, but how unnecessarily tedious it felt. So much of the really difficult process - the hard mathematics - has been taken care of by the computer, but these final, tiny baby steps feel like they've been deliberately designed to make things as gut-wrenchingly obnoxious as possible. All a human really needs to do is some relatively simple data processing and check the resulting plots for problems. If it's difficult to automatically extract parameters from a file and record them (for whatever reason) then OK, fine, but, come on - there's no need to obfuscate the process to the extent that many times I felt my brain simply shut down and refuse to co-operate any further.

No, I don't want to look for the file again. I already closed it because the instructions didn't say I'd need it again and it took me ten minutes to find it ! AAAAAARRRGHHH !
At that point I realised that no amount of willpower was going to get me anywhere so I gave up and drew on the whiteboard instead.


It would be so, so simple to create a "QA2" directory holding copies, in a single location, of all the data files from which the vital parameters must be extracted. I guess maybe if you've done this a hundred times you don't see the need for it, but lordy that's a steep learning curve. The group of us from Prague decided to collaborate and eventually (when we understand the process fully enough) produce our own, linear set of instructions, which ought to help a great deal. Hopefully at some point this learning curve is going to become a plateau...

Ask An Astronomer Anything At All About Astronomy (XXXII)

Whew, we've survived enough of 2017 to make it to another batch of questions ! Hurrah ! This was delayed by a week due to a work trip. With any luck we'll survive another week for the next batch...


1) What's your take on Verlinde gravity, Rhys ?
I don't like it.

2) Does the Bullet Cluster rule out very weakly interacting dark matter ?
No.

3) Can you take as good a picture with a smartphone as you can with a 500m telescope ?
No of course not, you twerp.

4) How come we can detect planets around other stars but have such lousy low-resolution images of ones in the Solar System ?
Sorry, what ?

5) Is there sound in space ?
This one is actually more complicated than Randall Munroe gives credit for, simply stating it in the "weird (and worrying) questions from the inbox" section of his otherwise excellent What If ? book without actually answering it. Here I correct this oversight, because the answer is a definite maybe.

Thursday, 2 February 2017

On Bias


Some words are well-known for meaning different things to scientists and the general public. "Theory" is supposedly one of those, "skeptical" is another. The reality is that even within science, these words are used in a multitude of different ways that are often context dependent. If I say to my colleagues, "it's only a theory", they will not shout at me for deriding the value of a hard-won scientific discovery - that is a purely fictitious idea manufactured by the internet. It's true that theory does have that special meaning, but it isn't used this way with any special rigour or even any rigour at all. Indeed, "theorist" is routinely used as a term for anyone who spends more time on ideas than observations, not someone who continuously constructs amazingly well-tested models.

"Bias" is a bit different. The common meaning is something like an unintended, unreasonable preference : "you're only being mean to my pet tortoise because he bit you once !". And indeed, if said tortoise hardly ever bites anyone, it wouldn't be reasonable to avoid them forever. But in science, a bias can not only be a good thing, it can be essential.


Science is biased ! Oh noes !

Suppose you wanted to discover a tortoise's favourite food. Well, that's easy, you just take your friend's tortoise, plonk down a load of stuff he likes to eat and... no. That answer isn't even wrong, because the question was silly. You need to be more specific. Try : of all the available food in the house, what does Tim the tortoise like to eat best ? That question you can answer. Determining Tim's favourite food would, technically, involve Tim sampling every food on the planet - but limiting it to what's currently available is a solvable problem.


Which is no excuse to be an arsehole whenever someone says their favourite food is "blah", because you bloody well know what favourite means in this context. Or at least you should. I had a friend who was annoyed whenever people said they didn't like rap music, as though they'd claimed to have listened to all rap music. Which was a very unfair assumption. Obviously a) the statement usually just means, "I don't like all the rap music I've ever heard but it's self-evident that I can't really judge music I haven't heard, duuuuh !" and b) if there's something intrinsic about the style of monotone talking over music that annoys you, chances are you're not gonna like any rap music. Your inference that you don't like any and all rap music may not be 100% iron-clad, but it's a perfectly reasonable extrapolation for everyday life. You're not "biased" against it, you genuinely don't like it.

But suppose you did want to discover the favourite food of the tortoise. Not just Tim but the general preference of all tortoises. OK, let's make this easier and restrict it to, say, the Magnificent Eurasian Tortoise, found all across Europe and Asia and known for its golden carapace and acute financial acumen. Now, if you wanted to find the favourite food of the tortoise in the wild, how should you choose a sample of tortoises to examine ? Should you look only at those in Europe which are easiest to catch ? Should you instead look at the more active tortoises of the Asian steppe, which can run away at tremendous speed and therefore eat more ? Should you only look at very young or old tortoises or a mixture of the two ?

And should you limit your selection to tortoises with rocket packs ?
The more specific your question, the more meaningful your answer. There might not be a favourite food of the species as a whole because its geographic distribution is so large - different tortoises eat different things depending on what's available. But tortoises might, conceivably, have subtle differences such that Asian tortoises genuinely prefer different foods to the European ones. You'd have to give each group samples of the other's food. And you'd have to try it with hatchlings, raise them over many decades and then swap their foods, to be sure they hadn't simply got used to certain foods - only then could you tell whether the difference was really innate and genetic. Complicated, isn't it ?

So what you do is deliberately bias your question and your sample to get a meaningful answer. You abandon your mad obsession with finding out what tortoises really like to eat and instead limit yourself to determining what they do eat in specific geographical regions at different ages. So you monitor a large sample of tortoises across Europe and Asia and record as much information about them as you can. Afterwards, you split your sample based on things like location, age, gender, and weight, and compare what they actually do eat with what they potentially could eat in each region. Only then can you begin to say useful (?) things like, "young tortoises in Europe mostly eat lettuce whereas those in Asia tend to prefer cabbage, but old tortoises all love carrots regardless of location or any other factor".

Had you gone charging in without any of this, you might have just picked a representative sample of tortoises - that is, one that consists of tortoises of all different sizes, ages and genders in roughly the same proportion as in the total population - and said something daft like, "the Magnificent Eurasian Tortoise's favourite food is cabbage." That might be the case overall if the whole population consists mainly of Asian tortoises of a certain age, say - in this case your "representative" sample is actually biased. You, in your thoughtless stupidity, have failed to account for the subtleties of statistical analysis.
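A toy simulation makes the trap concrete. All the numbers below are invented for illustration : a population dominated by one subgroup makes the "overall favourite" misleading unless you deliberately split - bias - the sample by region first.

```python
# Toy illustration (entirely invented numbers) : a population dominated by
# one subgroup makes the "overall favourite" misleading unless you split
# the sample by region first.
from collections import Counter

# Hypothetical survey data : (region, favourite food) for each tortoise.
# 80% of the population is Asian, and Asian tortoises prefer cabbage.
population = (
    [("Asia", "cabbage")] * 70 + [("Asia", "carrot")] * 10 +
    [("Europe", "lettuce")] * 15 + [("Europe", "cabbage")] * 5
)

# Naive "representative" answer : one favourite for the whole species.
overall = Counter(food for _, food in population).most_common(1)[0][0]
print("Overall favourite :", overall)  # dominated by the Asian majority

# Deliberately split ("bias") the sample by region instead.
for region in ("Asia", "Europe"):
    foods = Counter(food for r, food in population if r == region)
    print(region, "favourite :", foods.most_common(1)[0][0])
```

The whole-population answer is "cabbage" purely because Asian tortoises outnumber European ones; the European preference for lettuce vanishes entirely unless you stratify.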



The trick to getting a meaningful answer is not to eliminate bias, but to ask a specific question and bias the sample appropriately. It's very much easier to address, "what's the favourite food of young European tortoises ?" because you know exactly what sample you need; a representative sample of the entire tortoise population would give you completely the wrong answer. Bias can be essential.

Scientific bias can also be purely accidental. If you didn't go and measure the tortoises yourself but were just given the raw data, you might choose to analyse a sample that inadvertently contained a bias you wouldn't have anticipated. For example, if you were interested in what made tortoises obese you might select the fattest 10%, say. But if they all happened to come from the same geographical area, there might be a unique local factor at work - so you'd never work out the true cause of tortoise obesity in most cases.

For these reasons, scientific bias is sometimes referred to as a selection bias or selection effect. It isn't necessarily good or bad and doesn't imply the researcher screwed up or wants to prove their pet theory. Those sorts of bias do happen in science, obviously, and that's what most people mean by bias in everyday conversation.


Everyday bias


In everyday usage, we use "bias" to imply unfairness, whereas limiting your sample in a scientific study isn't necessarily unfair at all. It's one thing to find redheads more attractive - that's a legitimate preference - whereas it's quite another to ensure your social study only looks at redheads. Or to give a redhead a job instead of a more qualified candidate. Unless that job is somehow bizarrely dependent on the candidate being naturally ginger, obviously.

Bias and discrimination aren't always unfair. Giving the job to the best people may be biased towards the most educated or the physically strongest, but that's hardly unfair if you're looking for someone skilled in advanced mathematics or a manual labourer. What would be unfair is not giving everyone the same opportunities to pursue those careers in the first place - to say, "no, you're ginger, you can't learn maths." Discriminating on merit is fine as long as everyone's had a chance to compete fairly.

Then there are cases where even an unfair bias is perfectly understandable. Of course people will be surprised if a 100 year old competes in a marathon and no-one would expect them to win. Ostensibly you could just let them enter and wait and see what would happen, but the link between fitness and age is so well-established that expecting them to win would be a little bit mad. You might even suggest, not unreasonably, that perhaps they should be subjected to medical checks you wouldn't ask of younger competitors, for their own safety. Would that really be unfair ? I don't know, but it's certainly a grey area (you probably wouldn't stop them actually competing, though, unless you found a good medical reason beyond "they're very old").


So you might also be forgiven for thinking that perhaps making regulations which restrict the movements of those more likely to commit crimes is understandable. And to some degree, it is. If you insist on shouting incredibly loudly about the crimes of one particular group while labelling them in a different way to the crimes of another - Muslims are "terrorists" whereas gun-nuts are "lone wolves" - then of course people are going to believe there's something different about that group. Especially if you essentially never bring up their religion except when denigrating them as terrorists, omitting that suddenly "irrelevant detail" when they make the news for other reasons. Legitimate, entirely politically correct criticism of individuals so easily transmutes into a witch hunt.

Here the bias of the media is abundantly evident. There's no good statistical reason to fear Muslim "terrorism" over any other kind of homicide, and indeed in the USA (see figure below) this is positively deranged. Now, at this point one may say, "but Muslim culture blah blah blah" or, "the Koran says blah blah blah", as people often do. OK. That's absolutely fine, but it's a completely different topic to whether or not one group is measurably safer than another. If you want to talk about preventing terrorism, there can't be any debate about that. In America you're far more likely to be shot by a non-Muslim nutcase; in Europe it's true that most recent terror attacks have been caused by Islamic extremists... but the number of dangerous extremists is such an insanely tiny fraction of the community it makes no sense to be scared of them as compared to any other ethnic or religious group. So I can understand why you'd be concerned, but that doesn't mean you're not just flat-out wrong.



For a final example let's look at the recent "Muslim ban" from a different perspective : not whether the ban is right or wrong, but how supporters and detractors view each other. In my opinion, the continuous depiction of the other side as "biased", as though it inevitably means they're just wrong and untrustworthy, is one of the most dangerous tools in political debate. As we've seen, being biased doesn't always mean you're wrong - so long as you understand that bias. But is there an unfair perspective at work here ? Is one side resorting to double standards, engaging in mass hysteria because Trump instigated a ban but completely ignoring Obama's earlier, similar restrictions ?


Let's make two simplifying assumptions here just for the sake of argument. Let's suppose, in defiance of the actual facts, that there was a good reason to be suspicious of Muslims but the case wasn't yet proven. Let's also suppose that Obama and Trump's bans were identical and explicitly targeted at Muslims, which is also factually wrong. Don't worry, we'll return to the actual bans shortly.

If you were to say, "President Obama's Muslim ban was bad because Obama is a bad person, but Trump's ban is good because Trump is a great man and he can do no wrong", then you are irredeemably biased and unprincipled. You are supporting a policy not based on that policy itself but on who enacts it. If, however, you were to say, "I support Obama overall, but the Muslim ban was an inexcusable failure. I campaigned against it and will do the same against Trump's ban." then you are not biased. You are judging the policy based on the policy itself, and while you may still be an Obama supporter you aren't trying to excuse one particular action you don't like.

The flip side of this is that you could be unbiased on the other side. You could say, "I didn't vote for Obama but I supported his Muslim ban because it was the right thing to do, and I support Trump in part because of this policy". That's not biased or unprincipled either. I personally wouldn't support your principles in this case, and would in fact strongly object to them, but I will acknowledge that you have them.

It's also important to recognise that there can be degrees of bias. You could say, "I think Obama is just trying to do the right thing and though this ban is a failure of principle, this might be a time when unusual measures are required." In this case you've partially abandoned your principles for the sake of the man : you're prepared to support a policy that you don't really approve of, but you're honest enough to admit that.

It's the difference between saying, "I support Trump so I support a ban on Muslims", and, "I support a ban on Muslims so that's why Trump has my support". The former is biased, the latter principled - even if we might not like those principles.

Or put it this way : the most extreme supporters of a group or an individual are groupies. They care about who's saying it, not what they're saying. Such people of course certainly do exist. The mistake I see being made constantly at the moment is to assume that everyone who supports a policy does so because they favour who's saying it, rather than being viewed as favouring who's saying it because they like the policy.
Of course the bans aren't quite like these simplifying assumptions. For starters it's apparently not a Muslim ban; while technically true, this is a clear example of political bullshit since that's how it was unambiguously described in the election campaign. The bans also differ in their scale and execution : the Obama administration's was limited to a single country versus Trump's seven (the earlier measure merely made visa waivers for travellers from seven countries more difficult to obtain, which is not the same as a ban at all); and Trump's includes green card holders, people who were already extensively checked for eligibility to reside in the US but are now, at the stroke of a pen, excluded.

But the crucial difference is the preliminary rhetoric to the ban. Obama never made a ban on Muslims a major part of his campaign policy, and was in fact well-known for speaking out against discrimination. Nor did he institute the ban as an executive order, though he did fail to veto it. Only the most extreme Obamaphiles would attempt to defend the Obama ban while decrying Trump's; the rest of us should see it as a failure.

But what was seen as a failure of the old administration is being touted as a triumph of the new - Trump didn't merely allow the ban, he encouraged it, enacted it as an executive order, and promoted it with discriminatory rhetoric. So it is wholly unfair to accuse all but the most extreme liberals of bias or double standards here - of course people are going to react differently when discrimination is promoted as a success rather than (at most) an excusable failure. Compared to this, even the important difference of Trump's ban affecting green card holders looks relatively minor, as does its shoddy execution.

So no, it's not fair to accuse the left of bias against Trump in this case. First, the need for the ban is flatly contradicted by the facts - and there's nothing the slightest bit unfair about being biased towards the facts. That's the kind of perfectly sensible bias which, as we saw, happens all the time in scientific analysis. Second, the bans were different in scale, in execution, and most importantly of all in stated intent. The latter turns the issue from one of "things I don't like" into one of "things I'm morally opposed to" - and few would deny anyone the right to protest over things to which they have moral objections.


Conclusion


We're not all just grumpy for no reason.
Bias, then, can mean different things in different contexts. In science it can simply be about how your sample was chosen. As long as you understand and report this, there's no malevolence here. Indeed you may well want your sample to be biased in order to correctly examine whatever you're interested in. You might not call it "biased" here though, but someone else, trying to use your data for some other purpose, probably would. Bias can be a relative state.

Outside science, bias has much more negative connotations, as though the other researcher suspected you of sampling the data in a way designed to deliberately mislead. Or as though you only try to defend something or someone because of some pre-existing preference : a prejudice. Currently the political scene feels like little more than each party and its supporters accusing the other of "bias" (and its next of kin, "double standards") no matter what the situation.

This is dangerous. We're no longer looking at why people don't like the opposite position; we simply assume that because they don't like it, they must be wrong. Brexiteers seem to fall victim to this like no other. Not a single one has presented me with a credible argument for leaving : they just say, "you don't get to ignore the vote result just because you don't like it", as though I had no actual reasons for disliking it at all. Yes, I prefer one argument to another. I am not impartial. That does not automatically make me "biased", in the unfair sense, or wrong, because you can damn well be biased for or against the facts. We've degenerated into endless bias wars, never really attacking the actual arguments at all. So I like to remember a useful quote :


Ironically enough, I can't stand Richard Dawkins. I guess I must be biased.