Follow the reluctant adventures in the life of a Welsh astrophysicist sent around the world for some reason, wherein I photograph potatoes and destroy galaxies in the name of science. And don't forget about my website,

Saturday, 16 April 2016


Sometime late last summer I saw a job advert I was morally obligated to apply for. Astronomy Visualisation Specialist ? I am one already ! Experience with visualisation software and generic programming languages, e.g. Python ? The 11,000 lines of Python code for Blender that I wrote ought to tick that box and then some. New ways of visualising data ? Assisting astronomers cre... look, just visit my website. It's all there. All of it. Never have I seen a job description that felt so precisely tailored to me.

The application deadline was 1st October, though I submitted mine well before that. It's normal in astronomy for responses to take at least one month, sometimes two or even three. A few places - which are downright rude - never bother to respond. Still, by early December I was beginning to suspect that despite being objectively very, very qualified for the job, they must have given it to someone else. Well, it did say, "as soon as possible" on the job description.

They hadn't. I'm not sure if you'd call it an early Christmas present or not but I had a Skype interview on 18th December. Which was the day after I moved out of my flat in Prague (roommate left, couldn't possibly afford the place on my own) which had involved a week of hauling heavy suitcases back and forth to move my stuff to the institute. And it was the day after I got back to Cardiff, just to make things as frantic as possible.

Anyway it went well, but unfortunately it went well for everyone else as well. After a rather nervous Christmas, in early January I got an email saying that they'd go to a second round where they'd send us all a data set to visualise. Which they duly did a couple of weeks later.

It was actually quite a fun little project to work on, because the data set wasn't in a format I was familiar with. 3D data sets generally describe the density or temperature or whatever at different locations in space. The location is specified by 3 positions : x, y, z. Nothing very complicated about that.

And that's fine if, as is usually the case, your data set describes something that's roughly box-shaped. And by roughly I mean very roughly indeed, like this :

Simulation of a star-forming filamentary cloud, or something.
But this data set didn't use ordinary "Cartesian" coordinates, it used spherical polar coordinates. These aren't difficult either, but they may be unfamiliar. Instead of specifying 3 linear distances from the origin, they specify one distance and two angles :

Why in the world would you want to use such things ? Surely, it's more intuitive to think in terms of distances, not angles ! No, not always. There's a very simple everyday example that should help you understand : maps. With a street map, you could easily specify a position in Cartesian coordinates. You could say, for example, that Cardiff Castle is about 150m north and 25m west of the Revolution bar, if you thought that breaking into the castle on a Saturday night was somehow a good idea.

You could also specify how high the castle keep is, if it was vitally important to reach a precise level for some reason.
On a scale this small, the fact that the Earth is curved doesn't matter. You could hold out your arm and say, "go 50 metres in that direction" and no-one would have any difficulty. But if you said, "go 5,000 miles in that direction", anyone taking you literally would end up in space. Of course, they intuitively understand that you mean "along the surface of the Earth", not really, "in the path followed by a perfectly straight laser beam going in that direction". Unless they're a cat, of course.

This is why you don't give cats directions using laser pointers, because if it involves going into space then damn it that's what they'll do.
North, south, east and west are really just angles. Nothing very complicated about that : if you want to go to Australia you can say it's around 140 degrees east of Great Britain and 20 degrees south of the equator. Or you can give this in miles, it's the same thing.

You don't normally specify the distance from the centre r unless you're a mountaineer, pilot, miner, or deep sea diver. OK, you'd normally give distance from sea level rather than the centre of the Earth, but it's the same thing.
Or is it ? Well, not quite. 50 miles north or south is the same everywhere, unless you're so close to a pole you can't actually go that far. E.g. if you head in a northerly direction when you're 25 miles south of the north pole, after 25 miles you'll find yourself heading south. Much worse is the case of walking east or west if you're near the pole. You can't actually go east or west if you're standing on the pole itself, and if you're just a few steps away from the pole, then walking 25 miles east or west is going to involve walking around in a lot of circles until you get dizzy.

And at the north pole it will also involve discovering Santa's secret hideout or being eaten by a polar bear.
Using angles makes things a lot easier for cartographers. Lines of longitude (east or west position) have constant angular separation, even though the physical distance between them varies (i.e. 1 degree involves walking a much larger distance at the equator than near the poles). And the mathematics to convert between the two is easy and precise, so if we have two lat-long positions we can easily compute how far we have to travel to get from one to the other - even if we're at weird positions like the poles.
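That conversion really is only a few lines of Python. Here's a minimal sketch using the standard haversine formula, treating the Earth as a perfect sphere of radius 6371 km (real cartographers use fancier ellipsoid models, but the idea is the same) :

```python
import math

def great_circle_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Great-circle distance between two lat-long points (haversine formula)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlat = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    a = (math.sin(dlat / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2)
    return 2 * radius_km * math.asin(math.sqrt(a))

# 1 degree of longitude is a much longer walk at the equator than at 60° north :
print(great_circle_km(0, 0, 0, 1))    # ~111 km
print(great_circle_km(60, 0, 60, 1))  # ~56 km
```

And it behaves sensibly even right at the poles, where naive Cartesian offsets become meaningless.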

In numerical simulations, polar coordinates have some other advantages, which are a bit more complicated. Imagine if you will a cat on a record player*. If you wanted to specify any point on the cat, you could give its x,y position. Or you could state its r,φ (pronounced "phi") coordinates instead. There's not really any advantage to either... unless the record player is turned on and it starts spinning.

*Conjecture : there is no scientific concept which cannot be explained with the right cat gif.

If that happens it's very much easier to specify how fast each point is moving in the φ direction. If you wanted to specify its velocity using x,y coordinates, you'd have to give two velocities - which is much less intuitive than saying, "it's spinning at such-and-such a speed". And anyway the velocities in the x,y directions are constantly changing and depend on distance from the centre of the record player, whereas the angular speed in the φ direction remains constant everywhere.

"The cat is spinning at 20 rpm (or 120 degrees per second)", vs, "the cat's x velocity is 1 m/s and its y velocity is 0.5 m/s, no wait now it's 0.6 m/s and 0.2 m/s, no wait it's changing again, aaaaaargh !"
Or to illustrate this slightly more scientifically :

When the green point is at the top or bottom of the circle, it has no velocity in the y-direction at all. Similarly, when it's at the extreme left or right, it has no velocity in the x-direction. But it always has a constant angular velocity.
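That's easy to verify numerically. A tiny sketch of a point on a circle spinning at the cat's 120 degrees per second :

```python
import math

omega = math.radians(120)  # constant angular speed : 120 degrees per second
r = 1.0                    # distance from the centre

def velocity(theta):
    """Cartesian velocity of a point at angle theta, for constant angular speed omega."""
    return (-omega * r * math.sin(theta), omega * r * math.cos(theta))

print(velocity(math.pi / 2))  # top of the circle : no y-velocity
print(velocity(0.0))          # extreme right : no x-velocity
```

The angular speed omega never changes, but the x and y components keep swapping back and forth as the point goes round.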

So polar coordinates are much more useful for describing rotating discs. The problem is that of course for rendering images, pixels generally aren't in polar coordinates : we have to convert back to Cartesian. That's a problem when visualising simulations : just like trying to map the spherical Earth with a flat surface, you can get horrible distortions or lose detail if you're not careful.

Converting between polar and Cartesian coordinates is literally like trying to square the circle.
My first approach was to convert the data into the regular Cartesian system : knowing the r,θ,φ coordinates directly from the data, it was easy to convert to x,y,z. Which gave me this :

It's a simulation of a protoplanetary disc.
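For reference, that conversion is just the standard textbook formulae - here in the physics convention with θ measured from the z-axis, though the data set's own convention may of course differ :

```python
import numpy as np

def spherical_to_cartesian(r, theta, phi):
    """Spherical polar (r, theta, phi) to Cartesian (x, y, z).

    theta is the polar angle from the z-axis, phi the azimuthal angle.
    Works on scalars or on whole numpy arrays of coordinates at once.
    """
    x = r * np.sin(theta) * np.cos(phi)
    y = r * np.sin(theta) * np.sin(phi)
    z = r * np.cos(theta)
    return x, y, z
```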

... at which I make a brief interjection because at about that moment I received the following email, which I will keep anonymous because I'm not a total douchebag :
...and I do my first steps in scientific visualization. I am very interested in astronomy, though my main job till now was connected only with graphic design in university sector. I have some 3D modelling experience (several years ago me and my colleague made a fulldome video). Now I learn Blender and try to write my first scripts in Python. Slowly I become the idea, how everything works. Although the more I read, the more question I get...But it is normal I guess :)
Hopefully soon the quantity of my knowledges will transfer to their quality. About a month ago I got a chance to apply for a position as a specialist for astronomical visualization. My interview was quite successful and now we got a test task, which will probably define the proper candidate. I have already an idea, how to solve it. But it would be nice to find someone, who understand the materia, could evaluate my job and give me some practical advices. So I would like to ask you, if you had time and wish to answer some of my questions.
OK... one of the other candidates is asking me for help without even realising that I'm applying for the same job.

I decided the only safe course of action was to make absolutely no response whatsoever, so that's what I did.

Anyway the first result was not bad, but not great. The problem is that when you convert between coordinate systems there's no guarantee the Cartesian data set will be completely filled - especially at large radii. The polar coordinates tell you the centres of each data cell (pixel) and the density (or temperature or whatever) in that entire cell. But the centre of that cell only corresponds to the position of one particular pixel in Cartesian coordinates - it's not the same as checking every pixel in Cartesian coordinates and finding which polar cell they're in. The upshot is that you end up losing detail in the centre (where the polar cells are closer together than the Cartesian cells) and large blank areas at the edges (where the polar cells are further apart than the Cartesian cells).
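You can see the forward-mapping problem directly with a toy 2D slice : scatter the centre of every polar cell into a Cartesian grid, then count the pixels inside the disc that never receive a value. The grid sizes here are invented purely for illustration :

```python
import numpy as np

n_r, n_phi = 64, 128   # polar grid : radial and azimuthal cells
n_pix = 128            # Cartesian grid : pixels per side
r_max = 1.0

# Centres of each polar cell.
r_c = (np.arange(n_r) + 0.5) * r_max / n_r
phi_c = (np.arange(n_phi) + 0.5) * 2 * np.pi / n_phi
rr, pp = np.meshgrid(r_c, phi_c, indexing="ij")

# Forward mapping : drop each polar cell centre into one Cartesian pixel.
ix = ((rr * np.cos(pp) + r_max) / (2 * r_max) * n_pix).astype(int)
iy = ((rr * np.sin(pp) + r_max) / (2 * r_max) * n_pix).astype(int)
hit = np.zeros((n_pix, n_pix), dtype=bool)
hit[iy.ravel(), ix.ravel()] = True

# Pixels inside the disc that no polar cell centre ever landed on.
xg, yg = np.meshgrid(np.linspace(-r_max, r_max, n_pix),
                     np.linspace(-r_max, r_max, n_pix))
inside = np.hypot(xg, yg) < r_max
print((~hit & inside).sum(), "blank pixels inside the disc")
```

Crank up n_pix to recover detail in the centre and the number of blank pixels at the edges only gets worse.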

To illustrate that, let's take another look at the comparison between polar and Cartesian grids. First in the very centre :

Every large square of the Cartesian grid contains multiple points from the polar grid. So multiple polar cells get reduced to a few Cartesian cells - detail is lost.
And now in the outskirts, shading every Cartesian cell that's intersected by at least one polar cell corner point :

Oh noes ! Not every Cartesian cell is filled ! And this only gets worse at larger radii.
That's not a hopeless problem. One nice feature when converting is that you're free to choose how many Cartesian pixels you want very easily, so you can optimise for a balance of detail in the centre vs. empty regions on the outskirts. In principle, you could then fill in the blanks based on the nearest pixel, or accurately determine the value for each Cartesian cell by working out which polar cell it corresponds to. Doable, but not easy - and certainly not doable in the space of an afternoon, which was the stated scope of the exercise.
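The inverse mapping - checking every Cartesian pixel and asking which polar cell it falls in - guarantees every pixel gets a value, and is easy to vectorise with numpy. Again, the grid sizes are made up and the data is a random placeholder, since this is just a sketch of the idea for a 2D slice :

```python
import numpy as np

n_r, n_phi = 128, 256               # polar grid for a 2D slice
r_max = 1.0
polar = np.random.rand(n_r, n_phi)  # placeholder for density (or whatever)

n_pix = 256
x, y = np.meshgrid(np.linspace(-r_max, r_max, n_pix),
                   np.linspace(-r_max, r_max, n_pix))
r = np.hypot(x, y)
phi = np.arctan2(y, x) % (2 * np.pi)

# Which polar cell does each Cartesian pixel sit in ?
i_r = np.minimum((r / r_max * n_r).astype(int), n_r - 1)
i_phi = np.minimum((phi / (2 * np.pi) * n_phi).astype(int), n_phi - 1)

cartesian = polar[i_r, i_phi]  # every pixel filled - no blank outskirts
cartesian[r > r_max] = 0.0     # outside the disc there's simply no data
```

In full 3D, with proper cell-boundary bookkeeping on a real hundreds-of-millions-of-cells data set, it's rather more work than this toy version suggests.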

There's a more fundamental problem : to show all the detail in the central regions, you'll need a lot more cells in Cartesian coordinates than if you used polar. Large data sets can easily run into hundreds of millions of cells, which means hundreds of millions of pixels : ouch ! Wouldn't it be better if we could somehow have non-square pixels ? Then we could show the data in its original polar form, with no loss of detail and no need to have a single pixel more than we really needed.

It turns out that we can do just that in Blender. Consider a slice right through the centre of the protoplanetary disc. If we pretend that r and φ are really y and x, we get this :

Of course they're not really x and y at all, which means we're looking at something that's weirdly distorted. But we can correct for this. The method I came up with was to assign each pixel to a face in Blender (UV mapping). Then we can move each vertex (i.e. distort each face) to put it back where it would have been in polar coordinates.
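Stripped of the Blender specifics (the real thing goes through bpy and UV mapping), the vertex arithmetic is nothing more than this : build the rectangular grid of cell-corner vertices, then warp each one to its true polar position. Grid sizes invented for illustration :

```python
import numpy as np

n_r, n_phi = 4, 8  # cells in the slice, so (n_r + 1) x (n_phi + 1) vertices
r_edges = np.linspace(0.1, 1.0, n_r + 1)            # radial cell boundaries
phi_edges = np.linspace(0.0, 2 * np.pi, n_phi + 1)  # azimuthal cell boundaries

# In the flat version we pretend r is y and phi is x ; warping just means
# moving each vertex to where it really belongs in polar coordinates.
rr, pp = np.meshgrid(r_edges, phi_edges, indexing="ij")
x = rr * np.cos(pp)
y = rr * np.sin(pp)
# Each face keeps its UV coordinates pointing at the original rectangular
# image, so each pixel of data ends up drawn on a little annular wedge.
```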

Bingo - we can have our cake (original spherical polar coordinates) and eat it too (no need to convert to square pixels). Of course, the real data isn't just one slice - it's lots of slices, each of a constant angle θ. So what we have is a series of cones :

And if we show all the cones, we get this - which is a pretty convincing way to fake a volumetric render, with the gaps between the cones only becoming apparent at certain angles :


In the above, density controls both temperature and opacity (transparency). But it doesn't have to be density. It could be, for example, this mysterious Q parameter which is apparently heat transport that I know nothing about except that it looks nice :


In principle, we could fill those gaps by switching to spheres when the viewing angle is through the cones. Actually, I started with spheres because I'd already tried this* - I only had the idea to use cones during a long and boring meeting. They look nice enough on their own, though to really get things perfect we'd need to combine the two.

* Spheres are much easier to do because there's no need for UV mapping - Blender can calculate the distortion to a sphere itself just fine, but not cones.


So having figured out this somewhat complicated process, some considerable time passed before I heard anything back. It felt like forever.

Eventually at the beginning of March - a full five months after the application deadline - I got the news that... I was invited to an on-site interview ! Which led me to an odd mix of glee and frustration.

Many wisecracks did ensue, of course. Maybe, said colleagues, they just wanted more data visualised as a sort of way of getting cheap labour. Maybe they hadn't rejected anyone from the first two rounds. Maybe the number of candidates was actually increasing at each stage.

Nonetheless, I went along to said interview about three weeks later and went at it hell for leather. I brought along both glass cubes, 3D glasses, a copy of the Discover magazine in which I nuked a potato, and I even organized most of my Blender files and scripts from the last 14 years. It would be a five year position with the possibility of a permanent contract at the end, with a salary far more... European than those of the Czech Republic. Worth fighting for, even if the "as soon as possible" phrase had long since rung hollow.

Getting to Heidelberg involved a 7 hour bus trip. It was a very nice bus and I had the whole lower compartment to myself, which was nice. With wi-fi. Heck, it was better than most British trains by a considerable margin. There wasn't much to look at on the trip, but I've always thought that it's far easier to make my own entertainment than it is to make my own legroom.

The only real event that happened was that there was a perilously short connection between the bus to Mannheim and the train to Heidelberg, so I ended up jumping on a plausible-looking train (German trains do not indicate the destination and route very clearly) more out of hope than expectation. Which resulted in a fairly tense 20 minutes until the train stopped at the correct destination. After hauling my really quite surprisingly heavy bags to my hotel, I had little enough time in the evening to do anything except a short walk. I didn't get to see any of the pretty parts of Heidelberg.

Of course I did see the Haus der Astronomie the next morning. My interview went as well as it could have gone. With hindsight I wouldn't have changed a damn thing. There wasn't time to show everything, but I left fully satisfied that I'd done as much as I could possibly have done. I answered all their questions. I showed them the extremely heavy data cubes and 3D movies. I was a little peeved that - bizarrely - they really did want someone to start extremely soon, but I'd have been more than prepared to take the job anyway (even though I'd really, really, really like an extended period back in the UK). So off I went back to Prague while they interviewed the other candidate.

Alas the return trip was not without incident. About one hour into the seven hour trip, the bus ground to a halt due to an accident up ahead. It didn't move an inch for the next four hours. The bus driver let everyone off to walk around (and even some people on to use the toilet). I talked to some otherwise politically like-minded Germans who were, it must be said, none too fond of the Czechs, but even the nicest bus becomes wearying after 11 hours. I eventually crawled back into my room - still dragging my laptop and heavy glass cubes, of course - at about 2:30am.

Then I proceeded to play the waiting game. Again. For another three weeks. Until finally :
Dear Rhys,
thank you for your patience. It was a difficult decision, but in the end we offered the job to the other remaining candidate. This is in no way a reflection on the skills you demonstrated - we were very much impressed by your visualizations, found that you communicated well with the astronomers who would have been your colleagues here, and you would have been a great addition to our team. In the end, it came down to experience - the candidate to whom we offered, and who has now accepted, the job, is older than you and had put those additional years to good use in the visualization field.
We wish you all the best for your future career - you have an unusual and interesting mix of skills, and I hope you will find a good place to put them to the best possible use, either in astronomy or beyond!
It's a very nice rejection letter, but a rejection all the same. All of that effort had been for absolutely nothing except a free but incredibly exhausting 24 hours in Heidelberg. Was it worth it ? I'll let John Cleese answer that. Skip to 46:28 if it doesn't automatically.

On the one hand, the successful applicant is about 10 years older than me, has done four-dimensional relativistic raytracing calculations, and has written a freakin' book. Fair enough, I'd have hired him instead of me. On the other hand, it doesn't take six months to decide who's more experienced. You can do that from the start. Especially if you use the words, "as soon as possible" in the advertisement.

Oh well. As Captain Picard once eloquently put it, "shit happens". In another reality, alternate me is taking a short break in Cardiff before preparing to move to yet another country despite never wanting to leave home in the first place. Actual me is now in the more mundane process of searching for a flat before he gets kicked out of the institute's accommodation. Aaaargh.

Saturday, 9 April 2016

Perfectly Wrong : Or, Necessary But Not Sufficient

Right now the popular science media is all a-flutter with "news" of the "discovery" of "Planet 9". This post isn't about that... at least, not directly. What I want to do here is sound a cautionary note using a surprisingly relevant tale from the normally unrelated field of extragalactic astronomy. There are, however, three things I must say about Planet 9 before we begin :
  • It's. Not. A. Discovery. It's an inference. You can't discover something unless you see it. Oh, you can have extremely good evidence for it, to the point where the actual detection becomes almost a formality, but we're nowhere even remotely near to that stage of credibility.
  • The attitude of the scientists. Their stridently confident approach is the very last thing one should see coming from someone at the forefront of research. Why ? Because science is a process of getting things wrong, and this whole attitude of, "this thing we just thought of is right but all the other claims of more planets were wrong" is just aaaaaargh.
  • It would be Planet X anyway because Pluto is a planet, deal with it. Or possibly not because who knows if it's cleared its orbit or not ? No-one, that's who.
But on to our feature presentation. It's the fact that Planety McPlanetface hasn't been discovered that I want to emphasise.

Have a look at this particularly lovely pair... of galaxies, that is. You do your own joke.

NGC 4438 (the big one) and NGC 4435 (the little one) as seen in the Sloan Digital Sky Survey.
The correct first response is "Oooooh !". The correct second response is, "well how did that happen ?".

Being such a big, bright, and downright spectacular object, Markarian's Eyes (as they are sometimes known) have attracted an awful lot of attention from magpie-like astronomers over the years. And rightly so, because aside from observational astronomers naturally liking pretty pictures as much as anyone else (maybe even more so), this system is weird even by the standards of other cosmic train wrecks.

As early as 1988, computer models were being used to try and explain this bizarre-looking duo. Sensibly enough, Francoise Combes et al. carefully combined observations and simulations, rather than doing just one or the other as is often the case. Basically what they did was model a collision between the two galaxies. Which also seems thoroughly sensible because there are quite clearly two galaxies involved in the collision.

A little background might be beneficial here. In some ways, the very broad properties of the system are easy to explain. We know that some galaxies contain gas (which is necessary for star formation), whereas others (for a host of interesting reasons) do not. We also know that massive stars are blue and don't live very long, so we expect regions of gas which is actively forming stars to look blue. So, if one of the two galaxies had gas and the other didn't, we could certainly expect that a collision would cause lots of blue areas to develop in one of them while leaving the other one much more nonchalant about the whole thing.

Which is of course exactly what we see. The problem is one of detail : can we explain those precise structures that are actually observed ? From the Combes et al. simulations it seems the answer is yes, yes we can. Here's their simulation on the left compared to the observations again on the right.

Pretty darn good ! OK, it's not a perfect agreement : the smaller galaxy is a little bit too far to the north in the simulation, the southern tail is straight in the simulation but curves in reality, and the tip of the northern tail is a simple angular point in the simulation but looks distinctly "pinched" in the observations. Yet the broad agreement is very impressive.

One should also remember that this was 1988, when computers had all the raw power more normally associated with a rampaging snail or a high-powered sloth. The simulations were necessarily very primitive compared to what's possible today. They didn't (couldn't) model any of the gas physics or the star formation or anything like that. All they did was to have two gravitational fields for the two galaxies and a bunch of test particles (that is, particles which don't have their own gravity) representing the stars. So they could predict where most of the mass should be, but that's about it.

But don't underestimate the achievement of the result. To explore the enormous parameter space of two colliding galaxies (which could have a huge range of initial positions and relative velocities) and get something that resembles the observations this closely is, frankly, heroic. Which means according to Blackadder we all have to stand like this in celebration : 

Here we get a very pertinent example of Occam's Razor. It seems that we can explain the system very well indeed using nothing more than the simplest of physics. If we were to use the Razor as is popularly supposed (the simplest explanation is usually the correct one) we'd get the absurd result that we should ignore all the other physical processes because they clearly just can't be very important. That's why the Razor needs to be handled as carefully as any other sharp pointy object.

Fortunately real scientists are not movie scientists and don't assume the simplest explanation is the correct one, which isn't what Occam said anyway. They keep investigating. So in 2005, Vollmer et al. (the et al. includes the heroic Francoise Combes, by the way) did more sophisticated modelling of the system - though still with the basic approach of smashing two galaxies together.

Although they were still limited in some ways (no star formation and only simple approximations for the dark matter), they were now able to model the gas and stars separately and include the effects of the intracluster medium on the gas. Even intergalactic space is not totally empty, and as galaxies move through the particularly dense gas that pervades galaxy clusters, their own gas can get pushed out. Whether or not a model without that intracluster medium works is irrelevant : it's there regardless.

Terribly famous image showing the gas in various galaxies (size greatly exaggerated) in the Virgo cluster along with the gas in the cluster itself. 
The result of this more sophisticated modelling does not, on first glance, give a result which is a dramatic improvement over the earlier model. Better, yes, but not massively so. That changes somewhat if you read the details of the gas distribution (not shown here) - there are certain details, the authors found, that just can't be explained without including this intracluster gas.

Unfortunately most of the figures given in the paper are rather small. The only decent one only shows the stars in the larger galaxy, for some reason. However, some rather better movies of the simulations are available here.
Still, the fact that this is an incremental improvement bodes well. The basic structures can be reproduced very well with this model of two colliding galaxies, and even more precise details can be re-created when you include the hot gas outside the galaxies. Surely, this means the model must be basically right - two colliding galaxies and some extra gas are all that's needed. Right ?


Recently I described how the scientific method is an incredibly messy process, especially at the forefront of research (though things are rather different if you limit yourself to the least controversial areas). One comment received was, "looks as if "Observation" is the start of the scientific method which yields results for peer review."

Yes. That's exactly what I very deliberately wanted to say. Because sometimes, as I shall now show in spectacular fashion, observations tell you something you wouldn't have had a hope in hell of guessing if you limited yourself to the classical theory -> prediction -> observation approach. For when this area of the Virgo cluster was observed at a wavelength sensitive to hot gas of a particular temperature, Kenney et al. 2008 found this :

Oh, crap.

Here we see the new observations overlaid in red and green (depending on the precise velocity of the gas, but that's not important here) on a standard visible-light colour image. Instead of just the two galaxies colliding, it's now virtually certain that they've also interacted with a third, much larger galaxy. All those carefully fine-tuned simulations which reproduced the precise structures very well... nope. The lesson should be powerfully self-evident : even if your model does a fantastic job of reproducing even the very fine details very well, it can still be wrong - or at least woefully incomplete.

Which is, of course, not to say that the Combes and Vollmer models are definitely totally wrong. It's still possible that NGC 4435 and 4438 are interacting, and maybe that is still the main mechanism for the formation of the weird-looking structures. But it's also abundantly clear that that is, at the very least, not the whole story. If the third galaxy was just a tiddler then this likely would be a case of slightly modifying the original suggestion. But, since the third galaxy is enormous, it's also even possible that the major structures formed by a totally different mechanism and perhaps the close proximity of the original two galaxies is just a coincidence. This is commonly known as the "more than one way to skin a cat" principle.

I trust this makes the point about Planety McPlanetface clear. Even if they do have a really good model that explains the observations - and I'm not convinced that's the case - that doesn't mean jack without an observation. You have to be very, very careful to distinguish between, "this model works" and, "this model is correct".

I wouldn't often presume to correct Feynman on the scientific method (though there are other aspects of Feynman that I would have few scruples about attacking) but this is not quite the whole story. It's a minor but important detail, but you've got to be damn sure it really does disagree with experiment (or observation) before you say it's definitively wrong. Given enough research, this is possible. But, as a general rule of thumb, it is foolhardy indeed to reject or accept anything at the forefront of research even if they have seemingly good observational evidence... and positively deranged to accept an idea on the basis of a model alone. 

Now, not for one moment would I dare to suggest that I have any authority to pronounce judgement on Planety McPlanetface* - just that I'm prepared to bet against its existence. I have zero expertise in Solar System dynamics. Someone could potentially find it tomorrow and prove me wrong - but until that happens, I don't find the model-that-fits-the-facts to be anything more than an intriguing possibility, and the barrage of speculation is just hopelessly overblown. 

* Though I insist upon naming rights.

The take-home message from this post is simple. If it really does disagree with observation, it's wrong. But the reverse situation does not follow. If a model really does agree with observation, it isn't necessarily right - no matter how specific its predictions are.

Sunday, 3 April 2016

Fifty Shades Of Science

The scientific method taught in schools is something like this :

This is much too simple unless you're 12 and trying to make a lemon-powered clock or something. The internet throws up lots of variations on this theme, some of which are better than others. Having a penchant for silly internet memes, my favourite is this one :

The key difference is that this is a loop, not a line. Which is very much better, because it's extremely rare for actual research to ever produce a truly decisive yes/no answer. But just how messy does this get ? This meme is a very nice simplification, but I think we can do better.

Well, one step that it's missing is peer review (or more generally, discussions). You have to communicate the experimental procedure, the results, and your interpretation to, at the absolute least, one other recognised expert in the same field*. No-one is smart enough to spot all of their mistakes, nor is anyone trusted enough to do so. Peer review is very far from perfect, but it's better than not doing it at all.

* Some would argue that without peer review you're not even doing true science and that you can't even have individual scientists. That's much too extreme, but it is an essential part of modern scientific practice and not a peculiarity of academia.

Here things immediately get messy because there are several things which can happen :
  1. The paper is accepted. This basically never happens because referees like to make themselves feel important.
  2. The paper requires modifications but is basically OK. This can be as minor as fixing the typos or as major as re-doing the analysis and changing the conclusions.
  3. The paper is rejected. This does happen, sometimes for good reasons (i.e. the authors have made fundamental mistakes) but also sometimes for very bad ones (i.e. the reviewer has made fundamental mistakes, or is just an asshole). In this case the authors have two options :
  • Capitulate. Ideally this only happens if the authors have made a terrible mistake and realise it.
  • Fight back. Ideally this only happens if the reviewer has made a terrible mistake and the authors realise it. This can involve appealing to the journal editor, finding a different referee, or even submitting to a different journal. In the worst case, the authors may resort to institutionalised bribery to get their work published in a disreputable journal. 
OK, let's try and re-work the meme to account for this :

It's still a loop, obviously, because peer review isn't perfect. Think of it only as a filter that tries to keep out the worst garbage, but occasionally lets through the odd dead rat and holds back the occasional diamond. Having the wider community examine the results means that if something is wrong, it does get spotted eventually. The whole loop is a form of peer review in itself. In any case, it's just vanishingly rare ever to get a decisive, complete answer. You do the best you can, publish the paper, then build on that.

I've also altered some of the arrows. You can go straight to "reject hypothesis" from "results" without ever publishing them. This is not necessarily good practice, but it does happen - partly because publishing every single bad idea just isn't practical. And once you "reject hypothesis" you might just give up entirely, so you start over. Or you can end up in an infinite loop with a reviewer who will keep telling you to make changes.

Then there's the whole issue of interpretation, which the original meme neglects. One of the hardest lessons I've had to learn from the process of doing research is that, most of the time, the results neither confirm nor deny my original hypothesis - assuming I had one at all, but we'll get to that later. They imply something completely different instead : usually a case of "not even wrong" for the original idea. They may or may not actually tell me something (else) interesting about the Universe if I think hard enough.

You can't usually just go straight from "results" to theory or rejection as in the original meme. Often, the results are so different that they set you on a new line of inquiry altogether, and it's very important to distinguish this from the more straightforward process of testing your original hypothesis. And sometimes you realise that the whole line of inquiry was genuinely pointless. So it's probably something a bit more like this :

It's starting to look rather more complicated, but we're not done yet.

"Theory" is a difficult term. The internet has it that scientists use it in a very special way to mean an extremely well-tested model. And we do... sometimes. But we also use it in the everyday speech in exactly the same way that everyone else does - any sort of model, no matter if it's been tested at all. In fact we often explicitly say, "very well-tested" just to make sure everyone's on the same page. There's no rigorous definition of exactly what "very well-tested" actually means in practise anyway. But I shall keep the label "theory" to distinguish it from "hypothesis", though the impression one gets from the loop that withstanding a single test is sufficient to call it a "theory" is not really the case at all.

Or a theory.
What theory does usually mean is a specific mathematical model. It's often driven in part by observations, but sometimes it can be refined using pure mathematics. So we can skip directly from theory to prediction (via some mathematical refinement), on occasion. Relativity is a good example as it made many predictions that could be derived from its pure mathematical basis which Einstein hadn't originally envisaged, e.g. black holes.

Sometimes we can also skip the need for additional experimentation - there's a very fuzzy line indeed between "prediction" and "explanation". Ideally a theory should predict something that hasn't been observed before and a new experiment or observation is run to test it. But if someone realises that a theory also explains an old result, that counts in its favour too.

Although I'm keeping the label "theory", "experiment" has to be changed. Experiment suggests carefully controlled conditions which can be manipulated to the scientist's whims. For observational astronomy this is utterly impossible - we can't influence a distant galaxy in any way, although sometimes we might like to. At least we can plan which object we want to look at and how we want to study it. Palaeontologists don't even have that luxury, some of the time.

You'll have noticed that the First World Problems lady indicates the process can have a definite end, meaning you have to start afresh with something new. You don't always begin with something in the loop either. Sometimes you can literally just dream up something to test or observe. Many results also come out of large surveys where the result was utterly unknowable before the survey began - it couldn't even have been specified as a goal, because there were too many unknown unknowns. So you don't necessarily do an observation to test a hypothesis at all. Which brings us to version three :

Oh heck, this is getting complicated. Which is good, because that's what real science is like. But there's one absolutely massive thing we've left out, that was in the very first kiddy version but not the later ones : background reading ! Oh noes ! You're utterly doomed if you charge in where angels fear to tread without doing background research first. If you have a really good idea, the chances are that someone else has already tried it long ago.

Background reading is a hugely important part of the whole process. It can generate new ideas or instantly falsify your hypothesis. So we've got to link this to several different places. Fortunately, this is about as refined as we should try and make the meme, so let's also tidy things up a bit and stamp my website on it in the vainglorious hope of generating more hits.

Full size image here.
Of course, we could go further, but the essential points have probably been made. It only remains to say that each individual part of the process can, in reality, be extremely involved. If you're lucky, observations of what you need already exist. If not, you'll have to submit an observing proposal - which takes months and isn't guaranteed - or even build your own instrument, which takes years. Background reading is a serious chore because authors (and referees) all too often insist on making papers unreadable. I could go on, but as I said, the point has been made.

EDIT : For those sadistic enough to care, here are some further modifications that would improve things still further :
- As pointed out in the comments, "Reject hypothesis" should lead to another "Interpretation" that leads either to "Theory still valid" (Captain Picard Full of Win meme) or a new "Reject theory" (probably also Picard Full of Win, or some such). "Theory still valid" would then lead to "Theory" (because you can just consult the original theory again and see if there's something you missed), "Observation", "Question", and "Hypothesis".
- "Background reading" needs to have a direct two-way link to "Idea". "Let's do math" should have a two-way link to "Hypothesis" and a direct one-way link to "Prediction". And there should be a whole lot more "Background reading" scattered liberally about the place, at the very least one going directly to "Theory".
I'll wait a while for further comments and then consider making some edits.

Conclusion : Fifty Shades Of Science

You might be wondering how anyone could follow a chart like that in practice. Of course, they don't - it's just too complicated. My intent here is not to produce a definitive "how to do science" guide, because that's impossible. Nor do I wish to undermine the original meme I based this on, because the simplified version has its uses. Rather I want to emphasise that science is not always such a rigid, fixed process. You adapt it to fit the purpose.

Many scientists do a little of everything at some point in their careers, but often with a strong bias in one direction or another. Some observers are happy to do nothing but examine and catalogue, rarely constructing hypotheses or mathematical models. Many theorists see observations as dirty, vulgar things, and observers as nothing more than glorified photographers, whereas observers see theorists as geeks who spend all their time playing computer games.

I've made my choice.
Which means that we don't always have to go through every step in the chart. Sometimes we can just do : observe -> interpret -> publish, and that "interpret" stage can even be kept to a bare minimum. That's a perfectly valid way of doing science, every bit as much as the idea of hypothesis testing is. Coming up with an idea to explain things is great, but it's not the be-all and end-all of the scientific method.

Some people become uber-specialised, who know a single instrument or code inside out and backwards, but take them out of their comfort zone and they collapse. Others know a little of everything. A few people do go through the entire process, but hardly anyone at all goes through the entire loop every time. This is another reason why science is a fundamentally collaborative endeavour - even if you don't like talking to people, you benefit from their findings and they from yours. This collaboration knits all the different techniques together, so that overall we end up with (or rather, hope to end up with) a system that's much more powerful than the sum of its parts.

I've alluded to this already but it's worth emphasising : not all investigations start as some part of this larger loop. Sometimes whole new lines of inquiry come from sheer blind luck. And I simply cannot resist quoting the report of one referee on a telescope observing proposal who apparently just did not understand this at all :
 Yet another HI galaxy survey which wants to go deeper and open up new parameter space in the Virgo cluster. A complete census of HI in galaxies in Virgo and perhaps HI streams and other clouds is promised. The scientific goals are reasonable enough, I suppose. But the real question is the investment in telescope time. You can always find new things if you spend enough time observing.
Apparently, making new discoveries is a bad thing ! Yes, we can always make new and unexpected discoveries with enough observing time - which is a very good reason to approve of large surveys, not dismiss them. But I digress.

The main point I want to make is that science is a very different process to the one taught in high schools. As I've written before, it has a lot more similarities to the humanities and particularly the arts than is often appreciated. Yes, it deals in hard facts. But the interpretation of those facts, when it comes to front-line research, is every bit as subjective as the beauty of a Shakespearean sonnet.

The difference is that science makes testable predictions, and it has that all-important "reject hypothesis" scenario. This is no small difference - but the similarities matter too. It's not a black-and-white case of "scientists baffled" versus "mystery solved", whatever the popular media might say. Which matters a great deal, because if you see scientists continually getting things wrong without understanding that this is integral to the process, of course you'll see them as untrustworthy idiots. Getting children to do experiments is one thing. Getting them to understand that there might not actually be a right answer at all - just the best answer that's possible given the available data - is quite another.

Thursday, 31 March 2016

Not How The Heavens Go

From ClickHole, so chances are she never said that.
Atheism is a subject about which I have Views. If you've even glanced at the sheer length of the post in the link, you might be wondering what the hell more I could possibly have to say on the subject. Well, there is one aspect which I didn't really cover : the supposed conflict between science and religion.

If you take everything literally, then there's a massive conflict between science and religion. The Earth isn't flat, it wasn't constructed in six days, it certainly isn't 6,000 years old, species not even mentioned in religious texts have gone extinct, and there's no real evidence for the soul or an afterlife. The two systems of thought seem like fire and ice : the one cannot permit the other. One takes its knowledge unquestioningly from ancient texts, the other from empirical evidence. Surely the two are completely mutually exclusive !

The problem is that this is a very narrow and simplistic view of both science and religion. True, there are some things that science has established with near-as-dammit certainty and some religious followers dismiss these findings. There is a conflict in some cases. The difficulty here is that you can prove whatever you want to prove with extreme examples, which is why Godwin's Law is a pretty sensible one. But not all religious people are thoughtless minions, and not all scientists are paragons of objectivity. Far from it.

Believing In A Deity Does Not Automatically Make You Unable To Think Rationally

Literally, like, a shitton of scientists and rational thinkers throughout history have also happened to believe in deities. There was Socrates, who loudly and proudly declared that he heard a voice in his head which told him what not to do, and violently declared himself a theist. He also said that the wisest man was the one most aware of his own ignorance, that self-examination was "really the very best thing that a man can do", that wealth does not bring goodness, and that the good of his fellow citizens was more important than his own life. All in all, he was just about the extreme opposite of Ralph Wiggum.

I'm pretty sure no-one's made that comparison before.

Yes, mystical voices can and do tell people to do ridiculous, dangerous things. But consider the possibility that they can also tell people to do entirely sensible things. Doesn't really matter where the voices are coming from.
Then there are actual scientists. I've covered examples of these in the Muslim and Christian worlds before. The pagan Greeks are littered with examples : Anaxagoras (who realised that the Moon shines by reflected light, but also thought the Earth was flat), Leucippus (who came up with the idea of atoms and vacuum), Aristotle (who got just about everything wrong and set back science for about the next thousand years, but at least attempted to analyse things rationally, bless his little cotton socks), Archimedes (a mathematician famous for taking baths and who sank a Roman battle fleet as a hobby), Eratosthenes (who first measured the circumference of the Earth), to name but a few.

Not that the ancient world was a lost utopia where science and religion were besties until those nasty Christians (read that link) came along. There were occasional conflicts : Socrates was accused of atheism, which smells like a trumped-up charge, while Anaxagoras was actually sentenced to death for his impiety (specifically the idea that the Sun is a burning rock). He escaped by going to another Greek city, but since there's no particular reason to assume that Lampsacus was a hotbed of atheistic freedoms, it's entirely possible the charges were political. Clearly though there was some conflict, because otherwise the charges could not have been used at all. But it's hardly as though everyone was at each other's throats the whole time. Maybe, overall, it was more like this :

Still, there's also no reason to think that many of those early scientists were secretly atheists. Some were, it's true - but many were not. Anaximander, who has been called the "Father of Cosmology", was certainly influenced by his faith, while Pythagoras' belief in the soul didn't stop him from coming up with a famous equation about triangles. Although it's hard to be certain, it's probable that at least some of these early thinkers were directly motivated by their faith, rather than seeing any kind of conflict between the two.

Arguably, the Celtic world provides even stronger evidence of faith motivating science - albeit more circumstantially. Celtic culture is awash with ritual and superstition, with at least some archaeologists ascribing every petty action to religious beliefs. Yet they were also undoubtedly capable of rational thought and precise astronomical measurements - the alignments of Stonehenge and other neolithic monuments, and more controversially the Coligny Calendar, make it clear that these people were neither stupid nor devoid of spirituality. Similar examples can be found in ancient Egypt, Babylonia, and Mesoamerica (the latter having some of the most bloodthirsty cultures and religions of all time).

Of course, we don't really know if the ancient cultures were really practising anything like modern science. It's entirely possible that they merely wanted precision measurements of astronomical events for ritualistic purposes, and never sought rational explanations for any phenomena. Accurate measurements are an important first step, but we shouldn't get too carried away.

It's to the medieval world, where far more documentation has survived, that we must go to find really clear examples of individual scientists who would be baffled by the modern idea that religion means surrendering all curiosity. To reiterate, Cardinal Nicholas of Cusa postulated that the Universe should be infinite - not for any rational reason, but purely because he thought it would make God seem even greater. Brahe, Kepler, and later Newton were all Christians. And for all his conflict with the Church, Galileo doesn't seem to have had any problems with religion at all.

Ours Is Not To Reason Why

It's the medieval theologians who exemplify why there needn't be a conflict between science and faith. One might think that if one says, "God did it", then one needs no further answers. One might think that this crushes one's curiosity, and that religion is inherently opposed to scientific inquiry. Maybe, one might think, I've only been citing extreme, unusual examples thus far. One should stop talking about oneself in the third person, for starters, and anyway one would be quite wrong. And importantly, my point is only that science and religion aren't always in conflict - whether this is true in general is quite another matter.

The medieval world had an elegant answer that allowed them to have their cake and eat it. God, they said, was indeed the primary cause of all things. But he didn't meddle in the affairs of men directly : he'd invoke some secondary action to do whatever it was he wanted. These secondary effects (plague, lightning, volcanoes, good weather, etc.) could all obey strict physical laws. Make no mistake : you couldn't cheat God. Those secondary effects always did exactly what God wanted - so if you survived a lightning strike, God only wanted you to be taught a lesson.

This meant there was tremendous freedom for the medieval mind to examine how the world worked. Taken to extremes, God could be seen as the reason for all things, but not necessarily the direct cause. It's the difference between asking the questions why and how. That completely avoids the whole "God of the Gaps" problem - the idea that God is always the cause of things we don't understand, which are then invariably revealed by scientific inquiry to require no direct supernatural intervention at all.

It's a bit like saying, "I've got some money in my wallet because I'm going to buy myself a pet porcupine" versus, "I've got some money in my wallet because I've just come from the bank". One tells you the mechanism by which the money found itself in your possession, the other tells you its purpose. Both are true. If someone asked you, "how did you get that money ?" and you said, "because I want a pet porcupine", they'd rightfully shuffle away nervously. But if you said, "because I work hard and have a savings account", that would be acceptable. How and why are sometimes completely different questions.

I promise to feed him every day !
Such an approach diminishes neither science nor religion. Indeed by keeping the two so clearly separated it strengthens both of them. Religious texts have no business in the science classroom, and science texts have no business in theology lessons. By abandoning all physical evidence of God, religion arguably requires an even bigger leap of faith.

Not that this approach is without limits, of course - there's only so much a porcupine is good for. It solves nothing about which is the "right" faith, if such a thing is possible. Nor does it answer anything about why bad things happen. You might still very well ask why God has apparently designed a Universe that is manifestly unsuitable for us. And that would be a tough question indeed for theologians. Maybe God isn't even a designer, I don't know - I'm not a theist.

Of course, you can get a continuous spectrum of ideas in this approach : from a fat lazy God who exists but does bugger all, to God having direct control over every atom in existence. It's really only toward the extreme "total control" end that science and religion start to unfriend each other on Facebook.

Not Believing In A Deity Does Not Automatically Make You Rational Or A Scientist And It Certainly Doesn't Automatically Make You A Nice Person

Being religious clearly does not equate to being stupid or irrational. But we should also look at the opposite case. Many of those arguing most passionately that religion holds science back are not scientists themselves, yet they seem convinced that they are more rational because they believe deities don't exist.

That is probably the most dangerous fallacy of all. There are all kinds of secular ideologies that can lead to barbarism, not least of which is communism. The Khmer Rouge were explicitly anti-religious and anti-intellectual, and committed some of the worst atrocities of the 20th century. But then, communism in general hasn't really enjoyed a great reputation for creating happy societies. While Marx's views on religion may have been rather more sophisticated, making people into atheists is nowhere near enough to make people better.

Absolutely none of which says that atheists are better or worse as scientists, of course. The point is that if you say, "religious people are bad, they do all these irrational things and stand in the way of science", you are so far wrong it's not even funny. Atheists are just as capable of being stupid as anyone else. It makes exactly as much sense to say, "Stalin was a nasty man, therefore all atheists are evil" as it does to say, "the Crusades weren't very nice, so all religious people are jerks." Citing examples of atheist scientists does not mean that atheism is either better or inherently more rational.

Now, just to be fish-slap-in-the-face clear, if you're thinking that I'm somehow implying that atheism makes you worse, you need a good spanking. Because I'm not - religion doesn't have a monopoly on morality. I am saying that the human condition - our ability to get along with people we disagree with, to build particle accelerators, to massacre people by the million or sacrifice ourselves for others - is infinitely more complex than whether or not someone believes in a deity or thinks the Bible is a good read.

Someone's been a bad atheist.
Just to continue to rub the point home, it's always worth remembering that even some of the very best scientists are capable of stupid mistakes. Hoyle continued to believe in the Steady State long after it was utterly discredited. Einstein, conversely, fudged his equations to make a Steady State possible. He was also no fan of quantum mechanics, despite being one of its founders. Only two years before Einstein's first paper on relativity, Michelson was proclaiming that all the underlying principles of physics had been solved !

And some self-proclaimed atheists really aren't anything of the sort. Instead of supernatural deities, they believe in aliens, the Illuminati, the Freemasons, and a host of other powerful entities beyond their control and utterly lacking in sensible evidence. Of course, their faith is utterly different from religious faith, because they have real proof, sheeple...

There's something deep in the human psyche that demands to control other people, or insists that it's under the control of something else. We seem to desperately want to believe that someone is actively "in charge", even if we don't want to think that's a supernatural entity. A willingness to accept one's purposelessness in the face of Creation appears to be rather rare - even those who insist there are no higher powers so often insist that everyone else must accept this "fact" whether they want to or not (let alone whether it would actually make them better people). People are, in short, complicated. I don't know why this is such a hard concept for some people to grasp. You just can't reduce people to their spiritual beliefs.


Religion and science clearly don't have to be in conflict, but sometimes they undeniably are. Those who think that ancient tomes or voices in their head or magical leprechauns can tell them how the world works even when hard evidence says otherwise are likely destined to be cheerleading protests against teaching evolution, campaigns against vaccines, claims that the Earth is flat, and that sort of thing. Which is a pretty awful sort of cheerleading, really.

Except... this article claims that there isn't really a "war on science" at all. It quite correctly notes that those opposed to even very robust scientific findings often try and use other scientific results to dispute them. Judging by the responses when I posted this on Google+, I have to say there's probably something in that. While there was a response by a Creationist nutter, there were also responses by two intensely rational people who are skeptical of mainstream scientific ideas. One is sympathetic to UFOs. Another is skeptical of climate change. Both are extremely intelligent human beings, so I hope they managed to bury the hatchet.

That said, if you dispute the findings of science on scientific grounds, you're being scientific. But if you insist that a conclusion must be wrong because you don't like it - not because science actually says so - then even if you manipulate other science to show that it's wrong, you're not being scientific. If you're not actually declaring war, you're at least perverting the course of science.

Another recent article - which motivated this post as I found it somewhat unsatisfying - has the interesting note that surveys may be "creating Creationists". That is, if your survey questions are over-simplified, you'll miss important nuances about what people really believe, and conclude that things are considerably more black and white than they are :
One 2006 poll conducted by the BBC, for example, asked respondents to say if they believed in atheistic evolution, creationism or intelligent design theory. No option was offered for those believing in God as well as accepting evolution. In this way, such surveys effectively “create creationists” in the way they frame their questions... The problem with this poll is that it tends to imply all people have clear and internally coherent views on the subject.
Similarly, people sometimes say that you can't pick and choose which bits of a religion you want to believe. That is complete nonsense. People do this all the time whether you think they can or not. To take an extreme example, my grandmother called herself a Christian but didn't believe in the afterlife.

And yet while that particular example may be rather silly (it's literally true though, I'm not exaggerating), there's a virtue in picking and choosing. It demonstrates that religious followers are not all blindly unquestioning sheep. Far more complex examples can be found in theology, which would be completely unnecessary if everyone took their religious texts literally. There may be comfort in blind obedience to a set text, but there's no safety - for yourself or anyone else. It would indeed be an absolutely ghoulish world if people followed their religious books to the letter, so why on Earth are you trying to make people do this ? Fortunately, they don't - which, incidentally, also means that it's pointless to judge people by what their books say.

Although occasionally judging them by their hair is permitted.
Galileo was pretty close to the whole notion of God being the reason for the world, not the mechanism, when he said (slightly paraphrasing), "the Bible teaches us how to go to heaven, not how the heavens go". More than a thousand years earlier, even St Augustine realised that the Bible shouldn't be taken literally in all things. Modern thinkers too have espoused the view that contradictions in the Bible indicate it's not supposed to be a history book. And yet devout atheists continue to act as though all Christians - and by extension all theists - have this utterly ridiculous, uber-simplistic view of the world.

I suppose it may be nice to think that other people are worse as a way of making yourself feel better, but deep down I think we all know who the real enemies are : Nickelback fans. Obviously.

I mean seriously WTF is wrong with these "people".
Science and religion have a far more complicated history than is generally taught, and a far more complex relationship today than is often assumed. Yes, at times there have been some gruelling clashes. And today there are certain extremists who take their holy writ as being literally true, even though this was known to be stupid thousands of years ago. The mistake, however, is to label all religious people as some sort of big homogeneous group. This is as wrong as the European Southern Observatory's repeated claim to be building the world's largest telescope. Err, yeah, optical telescope, but omitting the word "optical" is not a simplification. It's just plain wrong.

Atheists, I totally get your anger at religious fanatics. I'll back you all the way on that. But all religious people ? Nope. Nope nope nope nope nope. Conflict can occur not just because of religious fanatics, but because of scientific fanatics too : my way of looking at the Universe is the only valid one, only scientific knowledge is true. Anything unmeasurable isn't real. Most of the time, there's just no need for this starkly black-and-white view of the world.

When it comes to those who say, "God does everything", the atheists are correct. It's when they step outside the remit of science and try and say that all notions of divinity are definitely wrong and damaging that I have a problem.
Irrationality requires far more than a mere belief in a deity. It requires a subscription to a much larger and more broad-ranging field of thought. You have to surrender your own reasoning and outsource your knowledge to someone or something else, be that a religious textbook or a scientific one. Religion can do this. So can other ideologies. You can be a devout acolyte of science who takes the half-mad theories of some scientists as gospel. But it is not the slightest bit necessary that this happens. As Jesuit astronomer Guy Consolmagno puts it (01:08:40) :
I can't allow my experience to try to determine your life, nor do I, nor have you heard me do that. What I do hear is a description of religion that you guys have rejected that I would reject as well. And if that's what you think religion is, then by all means get rid of it - that's a horrible idea of religion.... A lot of people think religion is what they thought they heard when they were 11.
I suspect that Gauss, Newton, Maxwell, al-Haytham and Al-Biruni would have shared similar sentiments. Religion and science don't always get along - sometimes through the fault of the religious, and sometimes through the fault of the scientists. Yet it doesn't have to be this way. It is only the peculiarly extreme forms of religion and science ("mine is the one true way") that force a conflict where none need exist.