Follow the reluctant adventures in the life of a Welsh astrophysicist sent around the world for some reason, wherein I photograph potatoes and destroy galaxies in the name of science. And don't forget about my website, www.rhysy.net



Tuesday, 28 January 2020

Delicious Dark Fudge

"Dark matter," say the good people of the internet, "is just a fudge to make the theory of gravity agree with observations." They usually then go into lengthy monologues about how all of conventional science is wrong, look, I can prove it, here's a bar graph I drew in Microsoft Excel...

Unfortunately they do have a point : there are indeed disagreements between theory and observation. I've been at pains many times to point out that not every disagreement should have us roaming the streets crying, "Bring out yer dead !" and mourning the loss of venerable ideas. If we did that, we'd doubt everything and learn nothing.

Consider first the astonishing success of the dark matter model. Dark matter was first discovered (after a few false starts) when astronomers found that galaxies were rotating much more quickly than expected : extra mass is needed to hold them together. By far the most obvious interpretation of the data was that this extra mass was collisionless, not interacting with normal matter except through gravity. Of course, you can't postulate that the Universe is filled with a hitherto unknown substance and not have it impact a bunch of other things besides the way galaxies spin, and that led to ways of testing the theory which had nothing to do with rotation curves at all.

To do this, astronomers made a leap of terrifying audacity. Based on observations on the scale of individual galaxies, they extrapolated the results by a factor of billions and simulated the evolution of large chunks of the entire Universe.

Guess what ? It worked. Here are the simulations and observations of galaxies on the very largest of scales.

Real galaxies in blue, simulations in red.
A great big sparkly network of filaments and voids in both cases. This achievement is nothing less than stupendous. Few other ideas indeed are able to work so incredibly well across such vast differences of scale. But undeniable and inevitable grandiosities like the large-scale structure of the Universe are rare cases indeed. More usually the implications of a theory are infinitely subtler, more complex - and far more uncertain. Anyone bold enough to challenge dark matter would do well to take care if their finding really threatens the basic idea itself, or only one of those much more nuanced implications.

But when exactly ? When do we say, "this is only a minor disagreement, see, if I just add an extra two in this equation, it all works out nicely...", and when do we say, "holy crap Batman, we're going to have to start over on this one or possibly all just take up fish farming instead" ?

Well, for one thing we have to focus very clearly on which part of the theory is in trouble, and that's not always straightforward. This article expresses things better than I ever could. Theories don't exist in glorious isolation - they come with a baggage train of what the author calls ancillary hypotheses. When you formulate an idea, you're not always aware of its full consequences, especially if, as is often the case, your idea was inspired by just one or two data points. You may have no idea what it means for everything else until you start the long hard graft of crunching numbers. Sometimes it can take years, even decades, before you fully understand your own idea.

What about dark matter ? Undeniable though its success may be, even serious astronomers sometimes feel as though the theory has been modified and patched so many times that it would really be better to finally euthanize the wretched thing*. But is it really like that ? Are astronomers forever moving the goalposts to deal with inconvenient truths, or are they, in fact, still just trying to figure out where the bloody hell the goalposts were to begin with ?

* I'm on a quest to start a twitter outage without being on twitter. Wish me luck.


With that in mind, let's take another look at seven of the main challenges to the dark matter idea. Maybe some of those "fudges" are actually nothing of the sort.


0) Separation of mass

It may help to start off with a semi-fake example, so this one doesn't count. Hence I start from zero.

What we'd like to test is something absolutely intrinsic to the whole notion of dark matter itself; what we've got is a bunch of galaxies spinning weirdly. And if that's all we have to go on, we could imagine a whole variety of other explanations. Maybe we're measuring rotation incorrectly. Maybe the systems aren't stable. Maybe gravity doesn't work well on scales this large.

Fortunately we do have quite a bit more than just rotation speeds. We know our rotation estimates can't be wildly inaccurate, and we know galaxies are generally not exploding. But the idea of modified gravity, like Bruce Willis or a particularly irritating weed, stubbornly refuses to die.

"Hang on," you might say. "Surely if dark matter exists, we might be able to find it existing independently of normal matter - at least in principle. Modified gravity can't do that*. Hah !"

* Actually there is a possible grey area.

"Hmm," I respond. "That's not unreasonable. But what if dark matter was just, like, spat out by stars and then just goes 'phwoooosh !' into nothingness about two seconds later ?"

"That's just stupid", you respond. And you'd be right. I have absolutely no justification for my totally ad-hoc theory of spontaneous generation of baryon-generated rapidly decaying dark matter. But you made implicit assumptions about the nature of dark matter - perfectly reasonable, sensible assumptions, but assumptions nonetheless. Granted, these are the kind of assumptions without which we could come up with basically any stupid theory at all, and the whole farce rapidly tends towards "a wizard did it".

The point, though, is that even this seemingly most basic tenet of the theory comes with a bunch of implicit assumptions. So for testing, we have to resort to considering the full consequences and scope of the idea, even if we don't want to. And there are plenty of cases where it's much less obvious if the underlying assumptions are on firm foundations. As it turns out, there are indeed situations where we think we've observed separation of the dark and normal matter, and I'll return to those at the end.


1) The Missing Satellite Problem

Dark matter is supposed to a) dominate the mass of the Universe and b) only interact with itself and other matter through gravity. This makes it easy to program in a simulation and it doesn't need a lot of computing power. Hooray !

We've seen how blisteringly successful this is on large scales. But when they looked at their simulations on the much smaller scale of an individual galaxy, astronomers found something that made them extremely uncomfortable :

The Via Lactea simulation on the left (which only used dark matter) and the real Milky Way and its satellites on the right, to approximately the same scale.
Bugger.

The number of dark matter "halos" doesn't match the number of observed satellite galaxies very well at all. It's okay(ish) for the largest satellites, but absolutely crap for the smallest ones, predicting about ten times as many small satellites as we've actually found. Granted, the tiniest halos in the simulation are so small that they're not expected to have enough stars (if any) to be detectable, but the problem is very much a thing even at higher masses.
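To get a feel for the numbers, here's a toy sketch in Python of why the trouble lives at the small end : cold dark matter simulations predict a subhalo count that keeps climbing steeply towards lower masses, roughly as a power law. The slope and normalisation below are illustrative placeholders, not taken from any particular simulation or survey.

```python
# Toy subhalo mass function : N(>M) rises steeply toward low masses.
# Slope and normalisation are illustrative placeholders only.

def predicted_satellites(mass, m0=1e10, slope=0.9):
    """Cumulative number of subhaloes more massive than `mass` (solar masses)."""
    return (mass / m0) ** (-slope)

for mass in (1e10, 1e9, 1e8, 1e7):
    print(f"M > {mass:.0e} Msun : ~{predicted_satellites(mass):.0f} subhaloes predicted")
```

Hundreds of subhaloes at the low-mass end, versus the few dozen satellites actually known around the Milky Way - hence the "missing" part.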

How big of a deal is this ? Well, the authors of one early paper on this problem went so far as to say :
Either the hierarchical model [of how dark matter assembles into galaxies] is fundamentally wrong, or the substructure lumps are present in the galactic halo and contain too few baryons to be observed.
Pretty serious. Right from the start, astronomers realised there were at least three possibilities : 1) the whole dark matter thing was bollocks; 2) dark matter existed, but behaved in radically different ways to the theories; 3) something was wrong with the physics of which halos should become detectable galaxies - maybe most of them never accumulated enough gas to form stars, for example.

Since the disagreement between theory and observation in this case was absolutely hideously obnoxious, initially it wasn't unreasonable to say that maybe it did count as evidence against the whole sorry idea. As for modifying the physics of dark matter, that's been tried a few times with differing results : pull on one thread and the whole tapestry tends to unravel. Best to leave that one aside.

That leaves the physics of the ordinary matter. It wasn't immediately obvious what could be going on here - it seems natural that at the same mass, each halo should gather about the same amount of gas and therefore form about the same number of stars. What on Earth could make some halos detectable but keep most of them invisible ? Why should only a select few blaze through the heavens while all the rest are consigned forever to the silent dark ?

And that's where the ancillary hypotheses and implicit assumptions come in. In order to predict how much normal matter - gas and stars - gets into each dark matter halo, astronomers used "semi-analytic" models. They took the numerical simulations and applied equations to calculate how bright each halo should be. This is not at all easy. The physics of star formation is seriously freakin' hard : way harder than Steven Seagal, harder than Arnie, harder even than Wolverine. It's like, soooper hard. Got that ? Good.

The Expendables 4 will definitely be about a group of hard-as-nails astronomers very carefully working through obscure problems, facing such hazards as slow wi-fi and rejected grant proposals. It's a tough world.
In order for gas to become stars, it has to collapse to the point where nuclear fusion starts. That means it has to cool, and its cooling rate depends on both its density and chemical composition. When stars begin to burn, they inject energy into and change the chemistry of their surrounding gas. And they're not formed in isolation either : most stars are stable, long-lived, low mass little things, but a few are giants that quickly explode, adding even more energy and material back into the interstellar medium in a damnably complex cycle. But wait, it gets worse ! The effect of all this depends on the total mass of the galaxy - in high-mass galaxies, stellar winds and supernovae might only move gas around a bit, whereas in very small ones, they may be able to remove it completely. And that's not accounting for interactions with other galaxies, extragalactic gas, the totally different chemistry of the early Universe, or magnetic fields...
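One of the simplest of those ingredients is the competition between cooling and collapse : gas can only condense if it radiates away its heat faster than gravity pulls it together. Here's a back-of-the-envelope version in Python - the cooling function value is a crude order-of-magnitude placeholder, not a real calibration, and the two example gas phases are just illustrative.

```python
import math

# Toy cooling-time vs free-fall-time comparison, the sort of criterion
# semi-analytic models use to decide whether halo gas can condense.
# All numbers are order-of-magnitude placeholders, not a calibrated model.

K_B = 1.38e-16   # Boltzmann constant, erg/K
M_P = 1.67e-24   # proton mass, g
G   = 6.67e-8    # gravitational constant, cgs

def t_cool(n, T, lam=1e-23):
    """Cooling time (s) : thermal energy / radiative loss rate.
    n in cm^-3, T in K, lam = cooling function in erg cm^3/s (placeholder)."""
    return 1.5 * K_B * T / (n * lam)

def t_ff(n):
    """Free-fall time (s) for gas of number density n (cm^-3)."""
    rho = n * M_P
    return math.sqrt(3 * math.pi / (32 * G * rho))

for n, T, label in [(1.0, 1e4, "dense, warm disc gas"),
                    (1e-4, 1e7, "diffuse, hot halo gas")]:
    print(f"{label} : t_cool/t_ff = {t_cool(n, T) / t_ff(n):.2g}")
```

Dense disc-like gas cools far faster than it collapses, so it can form stars; hot diffuse halo gas can take longer than the age of the Universe to cool. That's exactly the sort of switch that decides which halos light up and which stay dark.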

See ! I told you it was hard. Too complex, at the time, to simulate directly, which is why they used the semi-analytic approach.

Unsurprisingly, some early models were completely at odds with how many satellites ought to be detectable. Some said there should be loads of detectable "dark" galaxies containing only gas. Others said there should be hardly any of these at all, with gas almost inevitably leading to star formation.

You can see that the claim that the missing satellite problem is evidence against dark matter is, very credibly, total rubbish. It might be, but to test that, we need those ancillary hypotheses : we have to do the unglamorous work of slogging through the complex physics of star formation. Without that, saying that the missing satellites disprove dark matter is to make a massive set of implicit assumptions about how the gas behaves. In fairness, the knee-jerk "this contradicts dark matter" response was not unreasonable twenty years ago, but knowing what we know now, it's just no longer tenable.

The problem is twofold. Even though normal matter and dark matter interact only through gravity, the highly complex physics of the normal matter means : 1) some dark matter halos might be disrupted (more on that later); 2) gas accumulation and star formation may mean there's a selection effect and we only detect a small fraction of the halos still present.

The complexity is such that we still haven't fully solved this. Don't misunderstand me. It could still turn out to be the case that the missing satellite problem is evidence against dark matter, but that looks increasingly unlikely. We cannot ignore all that horrible physics by simply assuming it's not a major factor. We have no choice but to try and include it whether we want to or not. That we've done so after finding a problem in no way makes it a "fudge" - it's absolutely utterly unavoidable.

But... this cuts both ways. All that complexity means a lot of uncertainty. The latest computer simulations (which can now simulate the gas and stars directly) don't have a missing satellite problem, but there are so many free parameters that it's debatable how much predictive power they really have. We've found a solution to the missing satellite problem, but it's by no means clear if this is the actual answer or just a much, much more complex fudge to save the theory.


2) Planes of satellites

Most of the other discrepancies between theory and observation are just interesting variations of the missing satellite problem. This is good because it saves me a lot of time.

In the above rendering I showed the Milky Way and its attendant satellites from its worst viewing angle. In profile, the satellite cloud isn't nearly as fat as all that - it's actually remarkably skinny.

This galaxy plane is beach body ready ! Twitter, are you listening ?
That's a problem, because clearly the simulations don't show that. They show the satellites in nice spheroidal clouds, not thin planes. Surely this is a direct contradiction of the dark matter model ?

Well, yes. It's perfectly reasonable to say that this observation contradicts the standard model, because it does. Ahh, but which part ? Therein once again lies the problem. It's very difficult to see how planes of satellites have got anything much to do with flat rotation curves or the notion of dark matter itself. But could they relate to the physics of how gas gets into dark matter halos and forms stars ? Hell yes they could.

All the same complexities of the gas physics apply just as they did in the case of missing satellites, and more besides. We could be seeing a selection effect, in which the gas doesn't inhabit all the halos for whatever reason. Satellites could be brought in along the large-scale filaments (rather than from every direction equally), and interactions could stretch out the satellite clouds to produce flattened pancakes. Worst of all, we only know about one plane with any certainty - all of the other claims are highly questionable at best. Things might be weird if extremely narrow planes were common, but there's no evidence of that.

Overall, claiming that planes of satellites contradict the dark matter paradigm is a bit daft, and makes at least as many implicit assumptions as the missing satellite problem. Until we know more about them, assuming that they're a problem is a massive leap in the dark.


3) Too Big To Fail

Which satellites are missing - just the littlest ones, or the bigger ones too ? This varies depending on the state of the art of both observations and simulations.

The "too big to fail" problem essentially says that there's a problem for the biggest satellites. These galaxies, though still much smaller than the giants, are so big that there doesn't seem to be any way they could possibly avoid gathering enough gas to form plenty of stars. So we really ought to find all of them, but we don't - as though they were too big to fail, but fail anyway.

This has all the same problems, solutions, and problems with the solutions as the missing satellite problem does - all of which revolve around the baryonic physics, and have little or nothing to do with the dark matter. And there are even more factors at work. See, while ordinary matter is, overall, far less massive than the dark matter, this isn't necessarily the case in every local situation. In the disc of a massive galaxy, ordinary matter can dominate. Any satellite galaxy getting too close to this disc can be torn apart by its gravity. Even those that stay a bit further away can still have their gas ionised by the hot stars in the giant galaxy, preventing further star formation*, or even removed entirely by the "corona" of hot gas surrounding the giant galaxy. None of this is included in the original pure dark matter simulations.

* This is thought to have been particularly strong in the early Universe when the first stars were highly energetic. This so-called "squelching" of the gas is an act of galactic abortion (hello twitter ?), preventing it from ever forming stars by keeping it too hot to condense.

There's one additional factor which has been proposed. It could be that in some cases we're not estimating the total mass of the galaxies correctly. We measure this from the rotation and size of the gas, but if the gas doesn't extend as far (relative to the dark matter) as in other galaxies, we'll underestimate the total mass. So it could be we've already found the most massive satellites, but misidentified them as being smaller than they actually are.
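The underlying arithmetic is simple : the dynamical mass inferred from rotation is roughly M = v²R/G, so if the gas disc extends to a smaller radius than in comparable galaxies, the inferred mass shrinks in direct proportion. A quick sketch with toy numbers (the speeds and radii below are invented purely for illustration) :

```python
# Dynamical mass from rotation : M ~ v^2 * R / G. If the gas only extends to
# a smaller radius (relative to the dark halo) than we assume, the mass
# estimate drops in proportion - a big satellite can masquerade as a small one.

G = 4.301e-6  # gravitational constant in kpc * (km/s)^2 / Msun

def dynamical_mass(v, r):
    """Enclosed mass (Msun) from rotation speed v (km/s) at radius r (kpc)."""
    return v ** 2 * r / G

full = dynamical_mass(40.0, 5.0)       # gas traced out to 5 kpc (toy numbers)
truncated = dynamical_mass(40.0, 2.0)  # same speed, gas only seen to 2 kpc
print(f"mass estimate shrinks by a factor of {full / truncated:.1f}")
```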

So yet again this is by no means a definitive challenge to the dark matter theory. It could be, but there's still too much we don't know about the ordinary matter to say for sure. Anyway, what grounds do we have to assume this is a dark matter problem rather than one of the more difficult and mundane physics of ordinary matter ? None that I can see.


4) Downsizing

This one's a bit different. There was once a controversy over how galaxies assembled : did they form from huge "monolithic" clouds, or did they grow from the "hierarchical merging" of smaller objects combining to form ever-larger behemoths, like the T1000 from Terminator 2* ?

* And presumably other Terminator sequels, but they don't count.


Eventually, hierarchical merging won the day. Not because it's intrinsically better -  a single spinning collapsing cloud is a much more elegant and simple way to form a galaxy - but because that's what the dark matter model said should happen. That is, given the known physics, no-one could see any reason why the necessary giant monolithic clouds should ever exist (also just like the other Terminator sequels), whereas the merging of smaller halos to form bigger ones happened very naturally. So the smallest halos should form first and the biggest ones last.

The problem is that galaxy star formation histories paint a different picture. The biggest galaxies are dominated by a big burst of star formation early on - a brief life burns brightly - whereas the smaller ones are firmly of the opinion that slow and steady wins the race.

It was never very clear to me why this was ever a big deal. If you smash a bunch of galaxies together, it stands to reason that you'll get a massive orgy of star formation, whereas if you leave them alone, they're just going to quietly, err, mind their own business. Especially for small galaxies, where gas density may only occasionally and locally become high enough to form any stars at all. It smacks of an implicit bias toward thinking that galaxies = stars, which completely ignores the all-important gas. Galaxies grow in total mass through mergers, but it doesn't follow that their stellar mass only ever increases thanks to gobbling up other galaxies.


5) Flat rotation curves

This one might seem a bit odd. After all, flat rotation curves are the main reason people started believing in dark matter in the first place. What worried people was that the curves always seemed to be flat. Why shouldn't there be a wider variety of shapes ? Admittedly a few were found that were rising, but they seemed to be following the same basic shape as the other curves, just not extending as far.


This does seem intuitively like a problem. The implicit assumption here, though, is that dark matter is not that dissimilar to ordinary matter, which is enormously complex. Left to its own devices, we might well expect ordinary matter to do all kinds of funny things, like explode or play baseball or have a nice cup of tea. But the most popular theory of dark matter - by far - is that it's cold and collisionless. Under those conditions, dark matter halos should have pretty much universal density profiles. And since its mass is so dominant, adding in the smattering of normal matter really can't change the overall shape of the rotation curve very much.
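The arithmetic behind that is pleasingly simple : v = √(GM(&lt;r)/r), and a halo with density falling as r⁻² (a singular isothermal sphere) has enclosed mass growing linearly with radius, so the two factors of r cancel and the curve comes out flat. A quick sketch - the normalisation here is an arbitrary toy value, not fitted to any real galaxy :

```python
import math

# Why a near-universal halo profile gives near-universal rotation curves :
# an isothermal halo (rho ~ r^-2) has M(<r) proportional to r, so
# v = sqrt(G*M/r) is constant. A point mass gives a Keplerian decline instead.

G = 4.301e-6  # gravitational constant in kpc * (km/s)^2 / Msun

def v_circ(m_enclosed, r):
    """Circular speed (km/s) for mass m_enclosed (Msun) inside radius r (kpc)."""
    return math.sqrt(G * m_enclosed / r)

def m_isothermal(r, m_at_1kpc=1e10):
    """Enclosed mass of an isothermal halo : M(<r) ~ r (toy normalisation)."""
    return m_at_1kpc * r

for r in (1, 5, 10, 20):
    v_halo = v_circ(m_isothermal(r), r)
    v_point = v_circ(1e10, r)  # same central mass, nothing further out
    print(f"r = {r:2d} kpc : isothermal v = {v_halo:5.1f} km/s, point-mass v = {v_point:5.1f} km/s")
```

The halo curve stays flat at every radius; the point-mass curve falls off as 1/√r, which is what the outskirts of galaxies stubbornly refuse to do.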


6) The core-cusp problem

For such a romantic pursuit, astronomers are shite at thinking up terms. The core-cusp problem refers to the central dark matter density in galaxies. Simulations say it should be "cusped". The hell does that mean ? "Cusping" sounds like something wantonly depraved to me*, but sadly it just means "spiked". That is, the density keeps on rising until it gets very very high indeed in the centre.

* 100 internet points to whoever comes up with the most NSFW explanation.

By way of contrast, observations show that the real centres tend to be "cored". What, so someone came along and scooped out their innards like an apple ? No, it just means that the density tends to reach an upper value in the central regions where it doesn't vary very much.
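The two shapes are easy to put side by side. The standard "cuspy" shape from simulations is the NFW profile, while observed cores are often modelled with a pseudo-isothermal profile; in the sketch below the scale radii and normalisations are arbitrary, since only the inner behaviour matters.

```python
# Cusp vs core : the NFW density keeps climbing as r -> 0 (like 1/r),
# while a pseudo-isothermal profile saturates at a constant central value.
# Scale radii and normalisations are arbitrary; only the shapes matter here.

def rho_nfw(r, rs=1.0):
    """NFW-like profile (arbitrary units) : diverges as 1/r at small radii."""
    x = r / rs
    return 1.0 / (x * (1 + x) ** 2)

def rho_cored(r, rc=1.0):
    """Pseudo-isothermal profile (arbitrary units) : flattens in the centre."""
    return 1.0 / (1 + (r / rc) ** 2)

for r in (1.0, 0.1, 0.01):
    print(f"r = {r:4} : cusp-to-core density ratio = {rho_nfw(r) / rho_cored(r):.1f}")
```

Step inwards by a factor of a hundred and the cusped density has run away by roughly the same factor, while the cored one has barely budged. That's the entire disagreement in two functions.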

As we've seen so often, this is a problem if and only if you make naive assumptions about how the dark matter halos accumulate material. Remember the "too big to fail" problem. We saw that the dark matter isn't dominant everywhere, and the centre of a galaxy is one such place where the ordinary matter rules the roost. So one promising solution is, surprisingly, star formation. All that expulsion of material from hot stellar winds and supernovae is potentially enough to disrupt a dark matter cusp, simply by the sheer mass of material being moved around. Sure, dark matter only interacts via gravity, but as anyone falling off a cliff will tell you : sometimes gravity is quite important.

This issue isn't settled yet; although it does seem like a very promising explanation, it might not be able to account for every galaxy.


7) The Tully Fisher relation is too neat !

Last one. It feels appropriate to end on something especially controversial where the end result is still veiled in the mists of uncertainty.

The Tully-Fisher relation is the very tight relationship between how fast a galaxy rotates and the total mass of its stars and gas. Nothing too surprising there : big things are bigger. The more dark matter a galaxy has, the faster it will need to rotate to remain stable, the more ordinary matter it can attract.
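In its "baryonic" form the relation is roughly M ∝ v⁎ : a factor of ten in mass corresponds to less than a factor of two in rotation speed, which is part of why it looks so tight on a log-log plot. A quick sketch - the normalisation below is a commonly quoted ballpark figure, used here purely for illustration :

```python
# Baryonic Tully-Fisher sketch : M_b = A * v^4.
# A ~ 50 Msun/(km/s)^4 is a rough ballpark normalisation, not gospel.

A = 50.0

def btf_mass(v):
    """Baryonic mass (Msun) predicted for rotation speed v (km/s)."""
    return A * v ** 4

for v in (50, 100, 200):
    print(f"v = {v:3d} km/s -> M_b ~ {btf_mass(v):.1e} Msun")
```

Doubling the rotation speed multiplies the predicted baryonic mass by sixteen, so the relation spans dwarfs to giants over a modest range of speeds.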


What's odd is that it's possible to show that the relationship should be more scattered than we see. The fainter a galaxy is per unit area, the more it should deviate. But faint galaxies don't. Why not ?

We don't really know. Some alternative theories of gravity do predict a nice tight TFR, and that's a big success for them. It is indeed far from obvious how the matter in galaxies should conspire to produce such a neat relation - less obvious, perhaps, than in all of the other cases of apparent problems.

But progress is being made. The TFR is a manifestation of a much deeper relationship between dark and normal matter. The acceleration of ordinary matter turns out to correspond extremely well to the acceleration of the associated dark matter, which has been shown to be due to a combination of quite subtle selection effects. In principle, this should also be able to explain the TFR, though to my knowledge no-one has done that quite yet.

On the other hand, it looks increasingly as though we may have hugely underestimated the scatter in the TFR : in effect, we've been assuming that our observations were representative when in fact they were not. Both faint and bright galaxies have now been found that rotate much more slowly than predicted by the TFR, while some of the most massive spirals (and some small isolated gas clouds) apparently rotate too quickly. While in general more accurate measurements seem to decrease the scatter in the TFR for most galaxies, this is by no means necessarily true of all of them.

Faint (left, in orange) and bright (right, in black) galaxies that deviate from the Tully-Fisher relation.
Right now, we genuinely don't know what those deviant galaxies really mean. The ones which are rotating too slowly are consistent with having no dark matter at all (going left => less dark matter; going right => more dark matter). From one perspective this is very bad news for alternative theories of gravity. As long as the mass distribution is the same, gravity should behave the same everywhere - so similar galaxies should always rotate at the same speed (external disturbances notwithstanding). It shouldn't be possible to get differences like this. With dark matter, it's at least possible in principle to separate dark and ordinary matter, so theoretically such objects are entirely possible - galaxies with the same mass of stars and gas could have different amounts of dark matter.

Then again, should we expect dark matter deficient galaxies to actually exist ? Only a very few have been found in simulations, and it seems like they're just the result of an extremely obscure bug in the code. Maybe the simulations don't have the necessary resolution to examine them properly. We just don't know. That awful spectre of our lousy understanding of the physics of star formation becomes all too important for objects without any dark matter at all.

And it feels decidedly odd that for so many galaxies the TFR is so tight yet others have such a wide scatter. How can there simultaneously be a small scatter for some objects yet pretty good evidence that the dark matter content of galaxies is highly variable ? Something's not quite right here, but I'm blowed if I know what it is.



Conclusions

Some of these issues are all but solved. For others, the full implications are not yet known. But in no case is it justifiable to say that anyone has "fudged" their results to agree with the theory. It's barely even credible that any of them present evidence against dark matter at all, much less that there's some vast unwitting conspiracy by the evil Defenders Of Darkness*.

* Doesn't that sound better than "scientists who think dark matter is the correct explanation" ? I guess we have to call modified gravity theorists Slayers of Einstein or something... 

This notion of "ancillary hypotheses" and "underlying assumptions" is extremely helpful in keeping different aspects of the theory separate. It forces us to take a step back and consider what an unexpected result really means. Does it follow directly and inevitably from the central aspect of the theory ? If so, it could well be evidence that the theory is wrong. If not - if it's due to some other physical process or assumption - then it isn't.

In this case, we know that the physics of star formation is tremendously complicated. We know that gas can do strange things. And we know of mechanisms that can change our more naive expectations. What we've done here is not "fudging" but simple, standard exploration. All of the problems presented here depend heavily, and often entirely, on the complex physics of gas and star formation, not on the dark matter itself.


Now it's true that many of the results really did seem shocking (even, dare I say it, baffling) when they were first discovered, because some of our implicit assumptions were perfectly sensible at the time. But continuing to cling to those early notions in defiance of subsequent, more sophisticated analyses is not science at all : it's faith.

And yet... how dashing ! How noble the acts of a lone maverick genius, bravely standing up to the slings and arrows of his dogmatic opponents, with a simple, clear message that dark matter is disproven - now that's a narrative that sells. The long and grinding process of working the problem, testing every aspect in careful detail, criticising one's way towards self improvement, that's hardly the stuff movie heroes are made of. But it's a damn sight more accurate and a damn sight more scientific than saying we're all just fudging the results.

This is not to say that fudging can't happen, mind you. As our simulations reach unprecedented levels of complexity, there's a dangerous temptation to tweak them to match observations. Sometimes this is unavoidable, since there are physical processes at work we can only observe in deep space and never test under controlled conditions. But too much of this would indeed constitute fudging the results. That, though, is more a warning for the future than a concern of the present.


Given all the uncertainties still around, have we made any actual progress on the issue of dark matter itself ? Yes, absolutely. We have extraordinarily good evidence of the separation of dark and luminous mass in multiple galaxy clusters, something which is damnably hard to do with modified gravity. We've found evidence of the separation of mass on the scale of individual galaxies too. Meanwhile, the prospect of modified gravity looks increasingly desperate. Even the most stalwart enthusiasts openly admit it needs dark matter of its own in clusters, which would seem to defeat its whole purpose. And no-one, after more than three decades, has found a good version compatible with relativity.

Doubt is perfectly natural given extraordinary claims, but this verges on prejudice. It's like dark matter kidnapped their pets and burned their house down, or something. There no longer seems to be the slightest advantage to modified gravity over dark matter - it replaces one highly successful idea with one that doesn't even fulfill its own raison d'ĂȘtre. While I wouldn't write it off just yet, it increasingly feels to me that support of modified gravity is becoming ever more irrational. That could conceivably change, though I doubt it will.

So here's an idea. Stop thinking about dark matter as a problem to be explained away. Think of it instead as an exciting discovery to be explored. Perhaps, like so many ideas before it, it will eventually be abandoned, but then again, perhaps it will stand the test of time. Let the investigation carry you where it will, and enjoy it.

Tuesday, 21 January 2020

The Power Of Three


Three was a natural number for witches. When you had three, you had one to run around getting people to make up when there'd been a row. Without Magrat, Nanny Ogg and Granny Weatherwax got on one another's nerves. With her, all three had been able to get on the nerves of absolutely everyone else in the whole world, which had been a lot more fun. Because, while three was a good number for witches, it had to be the right sort of three. — Terry Pratchett, Maskerade
Pratchett was on to something here, I think. Not necessarily in the very literal sense, because human dynamics are more complicated than that. If they're properly organised, more people can bring more knowledge, perspectives and ideas : diversity really does matter. If they're not properly organised, you get groupthink, dogma, or just a simple massive row or a small war.

Recently I had a brainwave : if science is so darn good at finding out the truth, why not apply some of its hard-won lessons to other arenas ? Like politics, for instance, which seems very badly in need of a good dollop of objectivity and a lot less tribalism. This post continues my efforts to attack this proposal and root out any weaknesses. Scientists suffer all the same failings as other hairless monkeys, yet, while getting them to agree on anything is like herding cats, this doesn't seem to be much of a problem. In fact, it's even something of an asset, because everyone wants to make the next big breakthrough. In politics, each side just seems to want to break the other.

One lesson from science's truth-finding endeavour is the importance of the triple structure of peer review : author, editor, and reviewer. After finding a skeptical reviewer, the editor themselves doesn't provide extra review for the submitted manuscript, but checks if both author and reviewer are following the rules. In this way they can provide a genuinely independent, impartial check on the whole thing. The editor, in fact, is our Magrat. This works well, and I say this after having endured a year of peer review gone horribly wrong, so if anything I should be biased against the system.

There are plenty of other aspects to the structure and method of scientific inquiry, and plenty of other possible points of weakness to be examined in future, but today I want to look at how this kind of arrangement applies in society and where it's not working. The simplicity and power of this analytical triumvirate is too tantalizing to ignore.


1) The triangle of government

When I was very young, I used to think that the government was In Charge Of Things. They were sort of like elected tyrants in my mind, accountable to no-one and could do whatever they wanted. If a company was misbehaving, they could inflict whatever punishment they liked and that was that. If the dog down the street bit someone, men from The Government would show up and sort it all out.

Very, very slowly, I've come to realise that things are a teensy-weensy bit more complicated than that.

I've written a bit about the importance of understanding the networks that underpin society, and I've mentioned how I want some nice simple diagram to illustrate how the major components of society relate to each other. So it's time to try and start to construct one. At least, very, very crudely. This is all part of my super-secret ambition to work out how to make the world a better place, but don't tell anyone or they'll say it's impossible.

It seems to me that there are two major networks underpinning the governance of society. The first is the body politic itself, which looks a bit like this (you can already find umpteen versions of this elsewhere, of course) :
Okay, that should really be the executive, legislative, and judicial, but I'm a simple man and I like simple terms. In essence, and ideally :
  • The government (the executive) carries out the will of Parliament (the legislative). It has some degree of control over Parliament, but is ultimately answerable to it. It has first priority, or even sole ability, when it comes to proposing laws, and carries out things which can't or don't require a vote.
  • Parliament (the legislative) makes and repeals laws. Its agenda is largely set by the government, but it has the final say on what the government gets to do via voting. 
  • The courts (the judicial) decide how to interpret and apply the laws set down by Parliament. Courts cannot make law but they can set precedents. They decide if both the government and Parliament are acting lawfully. Neither Parliament nor the government have much direct influence over them.
Here I want to be deliberately crude - I only want to understand how the major blocs relate to each other, not how each operates internally*. I've completely ignored how all the nodes access external information - factual data, popular opinion etc (though there are a couple of very nice, slightly more complex diagrams illustrating how political principles are established here). Instead of looking at the details, let me bash out a few ideas here about how these great big nodes are related to each other in practice.

* This runs the risk of missing effects that emerge from the complex underlying structures, but we have to start somewhere. There's value in simplicity.

The connections between all three are fundamentally complex. All sides get to hear each other, so there's always a passive connection between each. But that flow of information's not going to be equal : it's going to be stronger between the government and Parliament, as both sides get access to private information from the other. The courts get some level of private information from each, but I see no reason to think that this flow would be unequal on either side.

Of much greater importance is that the connections are fundamentally different. This is crucial in avoiding a circular firing squad and helping prevent the need for infinite chains of oversight : with everyone performing different types of checks and balances, Plato's problem that it's "absurd for a guardian to need a guardian" is elegantly solved. You do want guardians to have guardians, but they all guard different things. In an academic journal, the editor doesn't act as a second reviewer of the article, but as a reviewer of the reviewer and of the author's responses - they check whether the rules were followed, not the content of the proposal*.

* Though the author doesn't really get to check if the editor is doing their job properly, except in that if they feel unfairly treated they can go somewhere else. Perhaps that's a lesson for research journals.

So it should be with the courts. If the government is the would-be author, then Parliament is the reviewer, and the courts are the editor. They're supposed to be independent, or else there's really no point in them existing at all. Their independence is partly maintained by their separation from Parliament and partly because their authority is fundamentally different. This three-way structure then becomes very powerful : it is robust, difficult to break, but also simple and comprehensible.


In practice...

But this is hardly a universal truth. Bafflingly, in America they have elected judges, which confuses the heck out of me. Worse, their supreme court is appointed by politicians. What the hell the point of that system is, I don't know. I guess if we account for this then we have to redraw the graph :
The court is here little more than a sort of parasite, feeding on whatever scraps the government deigns to give it like a particularly ugly and gruesome deep sea fish.


You might also say, "but hang on, surely the government is only a subset of Parliament, so it's not really independent from it". That's true in many countries : the power to propose a law and the power to vote on the proposals are both allocated to the same people. An easy solution - if this is even a problem - would be to forbid legislators from voting on their own proposed laws, though that's only likely to make a difference in the really knife-edge cases.

This leads me on to Britain, where there's a different problem : the court is indeed independent, but the prevalence of large majorities (lacking in recent years but now restored) for the governing party means that Parliament rarely acts as a properly skeptical reviewer - it can criticise, but it usually can't/won't enforce*. And since both sides rarely actually break the rules, the courts can do little to intervene. The government thus has near total control of Parliament - not quite total, because rebellions among its own MPs do happen, but nearly so. This means that the government is effectively a parasitical predator of Parliament.

*At least in the Commons - things are a bit better in the Lords, and this is a limitation of making these graphs too crude. It dismays me when people complain about these "unelected peers" without ever checking whether said peers are actually doing a decent job. We might as well complain about unelected doctors or unelected electricians !


Which is of course inspired by Futurama's mind-controlling brain slugs.


This situation, I'm coming to realise, is as stupid as having the government play a similar role towards the courts. Without having that vital triple-way system of checks and balances, the system is reliant on the goodwill of those running it : it can function, but only if everyone is genuinely committed to acting fairly. What does it matter if the government appoints the courts if it appoints genuinely fair and impartial judges ? What does it matter if the largest party has a huge share of the seats if most of its members are prepared to act against the executive ? Answer : it doesn't, which is an especially insidious obstacle. It's difficult to see the need for a reform when everything's going well, and the structure of the system doesn't guarantee it will fail, it just makes failure much easier.

And that of course is the real problem. If a government does not act fairly, it can appoint toadies to the courts and/or have no meaningful check on its actions from Parliament. In both ways, such a system tends towards my childhood idea of elected despotism. Government control of Parliament means it can make any law it wants; government control of the courts means it can break any law it wants.

Of course any reasonable democracy has provisions that mean it's not quite as bad as all that, e.g. judges may have staggered terms of office so they aren't all appointed by the same government, and altering laws can be a slow process (especially if they are deemed constitutional matters, making any changes subject to judicial review). Ultimately things can only be pushed so far before open revolt breaks out, or more probably before the public vote the ruling party out of office at the next election.

But therein lies the catch. The point is not that the dysfunctional systems ever reach these tyrannical end states - it's that they can be perverted arbitrarily close to them. In some ways this is worse than abject tyranny. You know where you are with a ruthless tyrant : eventually the people will rise up and may or may not be brutally crushed. It isn't nice, but at least it's simple. Whereas a perverted democracy is a lot more like a toaster that only works if you press the button down in just such a way; it's inconvenient and annoying, but since it basically functions (albeit with a lot of very serious problems*), most people are content to deal with it rather than go and buy a new one.

* Such as burning the toast and/or delivering random electric shocks and occasionally starting small fires.

Much as I love the morality of Doctor Who, when the Doctor says...
The systems aren't the problem. How people use and exploit the system, that's the problem. People like you.
... I think she is only provisionally right. Some bad systems used by good people will produce good results, but not all. And some good systems used by bad people will produce bad results, but again, not all. Systems cannot be so readily divorced from the people using them - both the system and its people are inevitably caught in mutual feedback loops that affect them both.


2) The triangle of information

The second key network is on a much larger scale and it looks a bit like this (I've alluded to this before) :
This time the nodes aren't individual institutions - the body politic comprises the entire political system of the first diagram. The whole thing works something like this :
  • Laws are enacted, enforced, and interpreted within the body politic. Its laws and rulings are applied to both the media and the general public.
  • The media are the primary means of conveying information between the body politic and the public. The media interview politicians and report what they're doing, but they also do the same for members of the general public.
  • The public are influenced by but also influence the media and politicians directly. They have limited direct interactions with politicians, except through voting, but have enormous influence over the media via sales.
As I've said previously, anyone hoping to get a truly better system of government had better consider this whole shebang (see link for a nice review of the complexities of this) and not just individual aspects of it. Of course, that doesn't mean we should stop proposing corrections to small-scale problems where we find them - that would be silly.

This system is quite a different beast to the first one. "The public" do not represent a single entity that's capable of making an arbitrary choice - its decisions emerge from the internal actions of its millions of individual parts. Of course, this is also true of the other nodes, but to a far lesser degree : an editor can decide which stories to run and which to pull; politicians can pick their battles. Whereas whether the public even like something or not is far more complicated. Sometimes people sit back and do nothing except feel faux-outrage. At other times they become, en masse, inexplicably violent.

As evidence of people being weird, I will again cite the case of someone preferring to accidentally murder their best friend than get their grandmother addicted to heroin.
While we can propose endless reforms to the political system or how the media operates, the same cannot really be said of the public. Such an enormous, diverse group is composed of a dizzying array of networks and hierarchies, flows of information, resources, and funneled emotions. Some of those networks are purely consequential, emerging from the choices made by individuals, but others are causal, constraining the actions made by their members. Many are both. Most groups tend to reinforce their own existing norms but rarely with 100% effectiveness.

This is why any kind of proposal saying, "we just need to be better people" is doomed to fail. You can't make the public, or indeed any kind of complex group, behave more nobly just by hoping, or even by winning enough of them over through persuasion. You have to manage the network in which they're embedded, otherwise you're just pissing up the waterfall of information contradicting whatever it is you're trying to achieve.


In practice...

And as before, the above diagram is the ideal case. A lot of people are rightly worried that this will be perverted like so :


It's perfectly obvious why this would be bad : with all of the information flow coming from a government-controlled source, the potential for abuse is huge. Note that there is now no direct link between the public and the courts or Parliament, and the connection between the courts and Parliament is severed. This is another dangerous perversion of the system, allowing the government to claim, "look, we've still got the other essential parts of parliamentary democracy", though in reality it controls them both separately and independently. If the court cannot rule on what Parliament does, which is entirely at the behest of the government, it's rendered impotent - but not so blatantly as to provoke any serious kind of public response.

Again, if all parties act fairly, then there's no problem, but as before, if all parties act fairly, then there's little disadvantage to the idealised system. Of course no-one trusts the government not to misuse the power that direct control of the media would bring, so the extra regulations and bureaucracy necessary for separation are more than compensated for by the benefits of keeping the system fair and impartial.

We need not go through the plethora of historical examples of why the media as a minion of government would be bad. I'm more concerned that we've developed something of the opposite problem. The media may already act as a brain-slug controlling parasite of government, or maybe it's even worse than that.
The media doesn't feel so much like a parasite of government as a terrifying monster that neither the public nor the politicians feel able to challenge - or worse, are unaware of the need to challenge. You must understand that though I'm not describing all media outlets here, I am speaking of both right and left leaning press, both of which are equally adept at spinning the same news to support whatever agenda they wish (link has a very nice apolitical example). Or worse. The default setting is not "defend" but "attack" - regardless of the story, there must be something awful about it and someone to blame.

Even on a good day, all too often impartiality goes haywire.

This may be impartial in the strictest sense of the word, but it's not fair and it's certainly not objective. Even the best interviewers seem to have fallen victim to attacking absolutely everyone before them as though everyone was guilty of something, treating idiots and criminals the same as experts. Is that really being impartial ? Is it in any way sensible to attack experts and lunatics with the same degree of rigour ? How can the public trust anyone in that setup ?

The press seems like a dangerously whimsical beast indeed. Someone can be their darling one day, but should they breathe so much as a word of criticism of the press, then watch as they're immediately cast aside and their once-endearing qualities held against them. Too often the media just gets everything wrong, failing to attack what needs attacking, and defending the absolutely indefensible. This is stupid.

And reality is considerably worse than that, because "the media" isn't a single unified entity. Parts of the press will at all times attack the government and any and all who support it as though it was about to bring on the apocalypse, even if it's only trying to alter the taxation rules for strawberry milkshakes. Simultaneously, other parts of the press will defend the government even as it plans a full-scale nuclear attack against Liechtenstein, claiming that everyone hates that stupid tiny country and they're not really people anyway. And both sides will act with stupendous inconsistency : praising and shaming those they like or dislike even if their actions and motivations are identical.

The media is a terrifying monster not just because it's got big nasty teeth, but also because it's got the norovirus. It's continually vomiting from both ends over everything, submerging what's really going on in an acidic fug of atomised bile. And worse, it does this in such a way that people want to keep on buying it.

I could go on, but I won't. What I do want to point out is how incredibly important the media link in the network is. The media is a hugely dominant communications channel between politicians and the public. Even if we had both a sensible public (we don't) and sensible politicians (again, no), then a piss-poor media would still bugger everything up. You can't act sensibly if you don't have sensible information.


This all sounds very bad.

Indeed it is. So is that apparently powerful triple structure really inherently stable ? I would say no : these networks are a consequence of the methods adopted more than they are a cause of the results. A government which wins a strong majority and is sufficiently stupid can easily alter the law to affect the power of the courts however it likes. Unregulated media driven by profit will always grow monstrous.

But this doesn't mean that the triple structure of organising politics, information and so on is unimportant. Far from it. Instead, we should think of this overall structure as the goal, not the method. We should aim to create something like the ideal triangles of checks and balances, but we shouldn't be naive enough to think that that in itself will be enough for a stable, sensible end result. Rather, we have to consider the internal functioning of each node, otherwise we'll always be at risk : a single weak link in a chain of three can quickly go badly wrong.

Ironically, we need look no further than Parliament itself for a good example. The triangle of government, opposition, and Speaker hardly produces constructive debates (though other political debates, with the same people but run in a very different way, are much better). The rules and methods used matter a great deal : the role of the Opposition is to provide an alternative, not act as a skeptical but sincere reviewer; the role of Speaker is to ensure everyone follows the rules, but these rules don't include any fact-checking whatsoever (nor is there even any necessity for politicians to respond to each other's arguments directly - it's perfectly fine to launch a counter-attack or spout meaningless drivel instead). Whereas in that other node of the body politic, the court structure of appellants, judge and jury does seem to basically work.

This hasn't really shed much light on my proposed scientific political system. The methods of checks and balances I've suggested might be enough to keep the thing from collapsing horribly, or they might not; the other unique pressures acting in academia could be crucial, or I may have missed something entirely. Which means this has been a very inconclusive post.

Let me end by summarising some of the major requirements for a better political system. First, it has a number of competing interests it has to balance purely for its own sake :
  • Competition and cooperation. Excessive competition leads to hostility, whereas excessive collaboration leads to dogmatic groupthink. But the right amount of each - competing groups who want to outdo but not hurt each other - can be a stunningly powerful combination.
  • Diversity and focus. A wide range of opinions and perspectives is essential to tackling a problem from multiple angles - specialists should not all come from the same social backgrounds. Only the smallest number of uncompromising fanatics should ever be included; there are some positions so extreme that no reasonable system can ever handle them.
  • Publicity and privacy. The need for private discussions is vital, as no-one can realistically be expected to think sensibly while being throttled by the ubiquitously hostile public/media. Yet at other points we need the system to be as transparent as possible, so that the reasons given can be subject to proper scrutiny.
  • Stability and flexibility. Probably the hardest to unify. The system needs to be able to respond to the changing needs of the day without having to completely remake itself into something unrecognisable and potentially unpredictable, being able to wield both the sharper and blunter instruments of society without falling victim to them.
And one more, an especially important one : dedication and detachment. The weirdest extremists of all are the charismatic, analytical psychopaths - the kind of people who can work out in great detail how to solve a problem but have no clue if their solution is a good one, who don't care* who gets hurt in the process, and have a gift for having their stubbornness mistaken for moral principle. Such people are rare indeed, to be sure, but they are disproportionately dangerous. Whereas your regular fanatic is committed to a specific cause or two, these people are dedicated only to their own advancement - or, worst of all, to tearing apart the system because they honestly believe it should be destroyed. These are the kind who just want to watch the world burn. More important than selecting the extremists willing to die for their cause, the ones of extreme dedication and determination, may be to select the people who aren't even mad keen on working weekends : normal people, hard-working but able and willing to listen to reason and to compromise. The usual tactic of calling such people out when they mess up is usually successful - it's only the uncaring psychopaths who need stronger measures.

* Holding politicians to account by exposing what they've done works well for people who have the decency to care, but it has no effect at all on those who only care about themselves.

There are other things the system must do more for the sake of the society it serves than for itself. Without going into what would make for an ideal state, these attributes include :
  • Allow genuine self-determination and meaningful choice. The system must temper bad decisions with expert oversight, but people must be able to make their own mistakes, even ones that will cause them some degree of harm. The trick is to prevent (or reduce) the impact of those mistakes on others.
  • Combine different methods of decision-making. Democracy, oligarchy, despotism, monarchy - all have their place. Sometimes you need a strongman, sometimes you need maximum diversity.
  • Consider both long-term and short-term impacts. Speaks for itself, really.
  • Act in the interests of the whole country. Not just for individual political parties or interest groups. It's going to have to manage competing and conflicting interests, often from groups who don't even realise they're at odds with each other. Somehow it must stand for all of them.
  • Be willing to learn from past mistakes. U-turns that occur in response to the evidence should be welcomed, while those due to populism serve as warning signs. Trials of different processes should be used, but not to the extent of becoming experiments on people.
  • Be comprehensible. And even better, be genuinely simple. One of the advantages of Universal Basic Income is that it's so simple it's impossible to cheat - the ideal political system should be like that as far as is possible, not merely accessible to the interested layman.
So that's my rant for today. Some of those aspects I've already tried to include in my suggested system, others I'll look at in the future. Some parts rely more on reforming society itself than the political structure. My main goal is to establish whether there is, in theory, a way of organising people that will actually work, given people's actual failings rather than idealising them. Never mind implementation for now; that's an entirely different question. Such a system would not be Utopian - it would only mean that people were basically content, that politics was something they'd engage with but find fundamentally boring, and that there'd be no point trying to seize power because no-one felt the need to trash a working system that already did what they wanted it to do. I still feel the current system is far too unstable, but I'm considerably more optimistic that a better one already exists.

Monday, 13 January 2020

Paper X : The Bizarre Murder Of The Windy Strippers

So here it is : the big one-zero, my tenth paper as first author. Let there be carefully moderated dancing in the streets and joy's restraints be substantially loosened.

Party like it's 1899.
I've previously described the hellish things that can befall a luckless author should the essential evil of peer review go awry. But this occasion was such a ridiculous case that I felt compelled to describe the whole stupid saga in its own post here. In short, it was a game of silly buggers in which it took more than a year to publish what should have been a very uncontroversial, uncomplicated result. What went wrong ? I dunno. Sheer bad luck, I think.

But I shall say no more of that here. On with the science ! Or, if you prefer, a much shorter summary with hardly any jokes is available here.


Introduction : How To Kill A Galaxy

Cast your mind back, dear reader, to the heady days of 2006. Picture the younger me, fresh-faced [beardless] and fancy-free, strutting gaily through the corridors of Cardiff University's Physics and Astronomy Department, unfettered by the latter-day cares of right-wing populism in a World Gone Mad. Carefree Rhys had two main topics in his PhD : finding gas clouds that looked like galaxies but didn't have any stars, and murdering galaxies by taking all their gas away - or, to put it more respectably, finding streams of gas removed from galaxies by different environmental effects.

Long story short, we found some stuff that fits the bill quite well for the first one, which I've written plenty about already. But we never saw much in the way of streams. Considering the target area was the Virgo Cluster, a region so dense that galaxies can legitimately be said to be harassing each other, this was a bit odd. It'd be like going into a house party and finding everyone is blind drunk but no-one has thrown up on the sofa. What gives ?

To understand that, it probably helps to have some background. A typical galaxy looks like this :
Note that the gas is a lot more extended than the stars. Being further from the centre makes it easier to remove since it's less tightly held by the galaxy's gravity. The whole thing is embedded in a much larger, more massive dark matter cloud called a halo, though we can largely ignore this here.

A galaxy cluster consists of hundreds or even thousands of these beasties all buzzing around like... well, I usually like to say a swarm of bees, but that's not really accurate. Time for some movies. Here's a beautiful one from the mighty Illustris "Turn On ALL the Physics !!!" simulation :


Very cool, despite a daft choice of music, but quite difficult to understand what we're looking at. A galaxy cluster is more than just a bunch of separate galaxies hanging out : the cluster itself has its own dark matter, gas, and even stars, which makes it horribly complicated. So if we want to get a feel for the sort of stripped gas features we expect to find here, we need to simplify things. Let's start with the orbits. Nice, simple, happy orbits. We think they'd look a bit like this :

Trajectories of 250 galaxies from a numerical simulation.
A galaxy falling through this spidery omnishambles experiences a number of different effects. First, it's accelerated to tremendous speeds, ~1,000 km/s or more, by the gravity of the whole cluster. Second, it gets bashed about by the other galaxies swarming around it. Since the directions are basically random and the speeds very high, each galaxy seldom spends much time in the company of any other galaxy*. That means that mergers are likely rare, since they're all moving too fast to catch each other, but they still feel each other's gravity (these repeated, fast, unwanted encounters are called harassment, a term originating in the pre-#MeToo era).

* The house party is probably a bad analogy. A better one would be a heaving nightclub in which almost everyone is very drunk, kinda horny, but somewhat antisocial and super judgemental.
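To get a feel for just how fleeting these encounters are, here's a back-of-envelope sketch. The distances and speeds are illustrative round numbers of my own choosing, not values from any particular simulation : it simply compares the time spent near a single neighbour with the time to cross the whole cluster.

```python
# Rough timescale estimate : how long does a fast cluster galaxy spend
# near any one neighbour, versus the time for a full cluster crossing ?
KPC = 3.086e19   # metres per kiloparsec
MYR = 3.156e13   # seconds per megayear

def crossing_time_myr(distance_kpc, speed_kms):
    """Time to cover distance_kpc at speed_kms, in Myr."""
    return distance_kpc * KPC / (speed_kms * 1e3) / MYR

encounter = crossing_time_myr(100, 1000)   # passing within ~100 kpc of a neighbour
cluster = crossing_time_myr(2000, 1000)    # crossing a ~2 Mpc cluster
print(f"encounter ~ {encounter:.0f} Myr, cluster crossing ~ {cluster:.0f} Myr")
```

At ~1,000 km/s a galaxy sweeps through a ~100 kpc neighbourhood in roughly a hundred million years - far less than a full cluster crossing. Hence lots of brief harassment, and very few mergers.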

And third, the gas in each galaxy feels a ram pressure from the cluster gas as it moves through it. Stars are too small and too dense to be affected by this, but the gas is low density and very spread out, so it has no choice but to get clobbered. If the ram pressure is greater than the gravity binding the gas to the galaxy, the gas will be pushed out and lost forever. More on ram pressure stripping here.

A galaxy consisting only of stars (left) smirks at the cluster gas as unimportant. A galaxy with its own gas (right), however, is in for a big surprise.
When it comes to stripping gas out of galaxies, ram pressure is widely believed to be a lot stronger than galaxy harassment or close encounters (at least in clusters). Stripped gas from ram pressure should be a lot tidier than what you get from galaxy collisions, but given the orbits, it can still look pretty messy. And once it's stripped, the gas feels the effects of harassment itself.
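This tug-of-war can be made quantitative with the classic Gunn & Gott (1972) criterion : gas is stripped wherever the ram pressure ρv² exceeds the galaxy's gravitational restoring force per unit area, roughly 2πGΣ(stars)Σ(gas). Here's a minimal sketch, with purely illustrative surface densities and cluster gas density rather than numbers from any real galaxy :

```python
# Back-of-envelope Gunn & Gott (1972) stripping criterion :
# gas is removed where ram pressure rho_icm * v^2 exceeds the
# restoring force per unit area ~ 2*pi*G*sigma_star*sigma_gas.
import math

G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30  # solar mass in kg
PC = 3.086e16     # parsec in metres
M_P = 1.673e-27   # proton mass in kg

def is_stripped(n_icm_cm3, v_kms, sigma_star_msun_pc2, sigma_gas_msun_pc2):
    """True if ram pressure beats the gravitational restoring force."""
    rho_icm = n_icm_cm3 * 1e6 * M_P                    # particles/cm^3 -> kg/m^3
    ram = rho_icm * (v_kms * 1e3) ** 2                 # ram pressure, Pa
    sigma_star = sigma_star_msun_pc2 * M_SUN / PC**2   # Msun/pc^2 -> kg/m^2
    sigma_gas = sigma_gas_msun_pc2 * M_SUN / PC**2
    restoring = 2 * math.pi * G * sigma_star * sigma_gas
    return ram > restoring

# Outer disc : low surface densities, easily stripped at cluster speeds.
print(is_stripped(1e-4, 1000, 10, 1))
# Inner disc : much higher surface densities, hangs on to its gas.
print(is_stripped(1e-4, 1000, 500, 50))
```

The tenuous outer disc loses its gas easily while the dense inner disc holds on, which is exactly why ram pressure truncates gas discs from the outside in.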

To get a feel for this, a very useful rendering comes from the webpage of astronomer Rukmani Vijayaraghavan. This simulation shows only the hot gas, both in the galaxies and the cluster. It shows very clearly how the gas is stripped and swirled and squished by the galaxies silently swarming about.

Ram Pressure Stripping of a Cluster of Galaxies from Rukmani Vijayaraghavan on Vimeo.

Why mention that this shows only the hot gas ? While the gas in the cluster can be described only as "hot and thin" (...insert joke about David Tennant here), galactic gas can be broadly divided into three main components :
  • The hot, diffuse gas shown in Vijayaraghavan's simulations. This is very easy to remove by ram pressure, but it doesn't get involved in star formation much because its heat keeps it from collapsing.
  • The warm atomic HI gas, i.e. the best, sexiest gas that we study with radio telescopes. Much denser and cooler than the hot gas, this can also be removed as long as the ram pressure is strong enough. Exactly how (and even if) it relates to star formation is unclear.
  • The cold molecular gas, which we now think is the main component directly involved in star formation. This can only be directly stripped if the pressure is extremely high.
All of these play different roles in star formation - if we want to understand the effect of gas stripping on the life of a galaxy, rather than studying the stripped gas for its own sake, then ideally we'd measure all three components. And all three respond to ram pressure and the cluster gas a bit differently, and need different methods to detect them. This means the above animation is useful, but only as a rough guide to what we expect.

So how do you kill a galaxy ? Think of its gas as its fuel for star formation. If you remove only the hot gas, you've essentially skimmed a bit off the top. Sure, it'll run out of fuel eventually, but it's still got plenty in the tank for now. You can't really get to the cold gas much, which is actually flowing from the tank and already in the pipes. But you can still remove the warm gas, i.e. emptying the tank and letting the galaxy sputter its last bits of fuel into the engine before it finally gives up the ghost.


Where are all the bloody entrails ?

But all that fuel can't just disappear. Or to take the galaxy death analogy much more literally, courtesy of the fabulous Robert Minchin :
Neutral hydrogen is the life blood of galaxies - it enables them to continue forming stars, and galaxies that have lost their hydrogen are frequently described as 'dead'. Our radio telescope can see this hydrogen, and we can use this to find galaxies. In this new survey, we will be looking at the Virgo cluster - our nearest galaxy cluster.  This is a 'galaxy city' - lots of galaxies are crowded together and interact with each other, often violently. We act like forensic scientists trying to piece together what happened from the small bits of evidence we can find: 'wounded' galaxies that are in the process of turning into 'dead' elliptical galaxies and dark clouds of hydrogen lying around outside of their original galaxies, like pools of blood at a murder scene. These clues allow us to peer deeper into the violent world of the Virgo cluster and trace the fate of its denizens.
I could add that since the galaxies are dying through loss of gas, they're essentially farting themselves to death. Lovely. Fortunately I won't add this, because the blood analogy works much better, as we'll see.

This isn't exactly a whodunnit though. We already think we know who did it (the intracluster gas), how they did it (ram pressure stripping), what they did (strip the galaxies), where and when (using our earlier model and observations, we can calculate these), and also why (because it's physically inevitable). What we don't know - what threatens to undermine the whole otherwise elegant hypothesis - is where the hell are all the bloodstains ?

Or to put it another way, about 60% of all the hundreds of galaxies in the cluster appear to have lost significant amounts of gas, but only 3% of them show streams. Now, it's possible that a lot of them had their gas removed ages and ages ago and it's since dispersed and become undetectable. But we also see a lot of gas-poor galaxies in close proximity to those which are still gas-rich - implying that quite a few ought to be losing gas right now. So naively, we expect to see more streams than we actually do. But how many exactly ?

That's what we tried to quantify in this paper, a problem that had me worried since the heady days of my PhD. In fact, my very first presentation at a major conference was all about a new method of searching for very faint streams that hadn't found anything. The audience humoured me. The prevailing wisdom in the room seemed to be that the cluster's hot gas would rapidly heat up any escaping gas, rendering it undetectable. "It's not really a puzzle", they said. The younger me was in no way going to debate this in front of a live audience*, so I said something about just wanting to test a new method, which was bollocks and I knew it.

* Or even a dead audience.

The thing is, there are some streams in the cluster. So how do they survive while most of the others are apparently destroyed more quickly than the reputation of British children's entertainers from the 1970s ?
Map of the known HI (atomic hydrogen, a.k.a. warm neutral gas) features in Virgo. Black arrows show streams to their actual scale. Grey arrows show smaller features and are not to scale. Black diamonds show unresolved clouds. Red, green and blue points show optically bright galaxies, while the big grey rectangle highlights our survey area. 
Known features don't show any particular pattern that could explain this. Some are tiny, some are huge. Some are massive, some pathetic. Some are near the violent, fun-filled cluster centre, where ram pressure and evaporation should be strongest, while others are on the outskirts where nothing much at all is going on. So the mechanism that destroys the streams seems weirdly inconsistent and almost magical, like a mad wizard who hates gassy strippers. Don't tell me that's not a puzzle.

In Vijayaraghavan's animation, you can see the tails flaring out as the gas escapes. But that was hot gas, and simulations of the warm (HI) gas show it should remain narrow and confined. Still, over time even this gas ought to disperse, and if its density becomes too low it should no longer be detectable. So we made a simple model to quantify this, accounting for how the viewing angle changes the appearance of a stream in our data. The bottom line is that if the known streams are representative of the general population, then with our survey we ought to detect about 11 streams (we actually detected 2) and another, larger survey should have detected 46 (it actually detected 5). Not great.
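The "geometrical dilution" idea can be put into a toy calculation: a stream of fixed HI mass covers a different projected area depending on its angle to the line of sight, so its mean column density changes too. The masses and sizes below are invented purely for illustration; they're not the values used in the paper.

```python
# Toy "geometrical dilution": a stream of fixed HI mass seen at an angle
# theta to the line of sight. Side-on (theta ~ 90 deg) it covers many
# beams at low column density; nearly end-on the same flux piles into a
# compact footprint. All numbers are illustrative.

import math

M_SUN = 1.989e33   # g
M_H = 1.674e-24    # g, mass of a hydrogen atom
PC = 3.086e18      # cm

def mean_column_density(m_hi_msun, length_kpc, width_kpc, theta_deg):
    """Mean N_HI (atoms/cm^2) over the stream's projected footprint."""
    proj_len = max(length_kpc * math.sin(math.radians(theta_deg)),
                   width_kpc)  # can't project shorter than its own width
    area = proj_len * width_kpc * (1000.0 * PC) ** 2  # kpc^2 -> cm^2
    return (m_hi_msun * M_SUN / M_H) / area

# A 10^7 M_sun stream, 100 kpc long and 10 kpc wide, at three angles:
for theta in (90, 30, 5):
    n_hi = mean_column_density(1e7, 100.0, 10.0, theta)
    print(f"theta = {theta:2d} deg : mean N_HI ~ {n_hi:.1e} cm^-2")
```

The same stream can thus sit above or below a survey's sensitivity limit depending purely on how it happens to be pointing, which is why viewing angle has to be folded into any estimate of how many streams a survey "should" see.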

Fortunately, this so-called "geometrical dilution" is just one factor that could explain the missing streams. We also need to consider how many galaxies are currently actively losing gas (i.e. expected to have streams right now) and what happens to the gas after it leaves (i.e. how quickly it's dispersed).

That's where our earlier model comes in. One thing I will concede that the torturous refereeing process did improve was our use of our model to predict how many galaxies should be actively losing gas. I would dearly love to say we could use this to make a prediction, but the reality is that we can't - it just has too many uncertainties, and the data we need is available for too few galaxies. Booo ?

Well, yes, but we can still use it to do a couple of neat things. We can say which galaxies are more likely to be losing gas right now (just not for the whole sample, unfortunately), and we can estimate how quickly the gas must be dispersing in each case. The model also still has a big advantage over measurements of how much gas has already been lost, in that it models current stripping activity* - it's just not good enough to make an honest-to-goodness prediction. At least not yet.

* Just as with real life, where it's far more important to know who's about to get naked than who already took their clothes off.


Here they are !

Having done all this, it really seemed like there should be more streams present. Plenty of objects seemed to be experiencing enough ram pressure that they should have streams, yet barely any did. Was it possible they'd simply escaped detection ?

Given that I'd spent bloody years staring at these data cubes, that didn't seem likely. I used to joke that I should go on Mastermind with the VC1 cube as my specialist subject. And I did know more about this particular data cube than anyone else alive, because no-one else had done much more than glance at it. But I was foolish to think I knew everything about it.

See, younger me expected that all streams would be spectacular, really obvious features like this one :


Which is the famous VIRGOHI21 as seen with the Westerbork interferometer. But the data I had was from Arecibo, which is more sensitive but has lower resolution. So the kind of tails we should expect to detect with this ought to be faint and appear to be very short. It was fair to say there were no features as spectacular as VIRGOHI21 lurking in the data, but this didn't mean there weren't any detectable streams present at all.

If my first mistake was not understanding what kind of features to look for, my second was over-confidence in my knowledge of how to look for them. Which didn't come from nowhere : having spent several years creating a better data visualisation tool, I thought I must surely have looked at the data very thoroughly indeed. In fact I had, but in the wrong way.

I have a penchant for volume renders, which show the whole data present in a cube and look cool. I dismissed isosurfaces and contour plots as not cool enough - fine for analysis, but not for finding sources - because they inherently limit the information shown. Whoops. In fact, isosurfaces - basically just contour plots in 3D - are an awesome tool for finding short extensions, as we'll see if you just bear with me a minute longer.


The thing with our survey is that it's so damn sensitive that it makes the galaxies look incredibly bright. Unless you subtract the "glare" from the galaxy, their tails can remain forever invisible. I'd very successfully subtracted the galaxy emission in previous, more distant cases. Those were easy, because very distant objects become point sources, which have a distinct, known profile shape that's given by the telescope geometry. So you can just input the known shape and subtract it, et voilà, the galaxy is removed, leaving the stream behind. Kind of like the Cheshire Cat, only waaay less creepy.

From an earlier paper.
And for very close (or very large, well-resolved) sources, you don't need to remove anything : you can see by eye where the galaxy disc ends and its extension begins, if it has any. No need to do anything to the data here at all, you can just go ahead and measure it.

But in this case we were in the unhappy middle ground. The Virgo cluster galaxies were not so distant that they could be considered as point sources : trying the usual subtraction procedure left hideous artifacts that were more offputting than Justin Bieber's Lyme disease. But neither were they close enough that we could directly see the extensions and clearly state that this part is the galaxy's disc and this part is the tail. So what to do about these marginally resolved cases ?

That's where contour plots and isosurfaces come in. Our radio data images the sky just like an optical telescope, but it takes thousands of images of the same region. Each of these frequency channels shows how bright the hydrogen is at slightly different velocities. Since galaxies rotate, different parts of galaxies are visible in different channels. So when you go through channel by channel, you see something like this :

The galaxy (optical image here) slowly drifts across the image, with one side coming towards us (lower velocities) and the other side moving away (higher velocities) due to its rotation. This example shows a case where there's no extension present, so you just get nice circular contours that drift a bit. But looking at every single image is as tiresome as watching the Star Wars Holiday Special, so normally we use different display techniques. One, shown on the left below, is to sum up all the flux along the line of sight - a volume render. The alternative, on the right, is to show all the contours at once with a different colour for each velocity (this is called a renzogram).


Ahh. This is not quite so circular any more. It's not a very pronounced effect in this case, but if the centre of the flux were to drift a bit more, we'd get an elliptical appearance even though the galaxy itself is pretty circular.

Trying to spot extensions in these complicated cases just by looking at them is... well, I'd been looking at this data for the best part of a decade and not seen them. With volume renders, the streams tend to be so faint that it's hard to set a display range for the data that shows them clearly without having them get lost in the glare of the galaxy. With individual channel maps, as well as being incredibly tedious, it's hard to know if any channel contains any features more elliptical (i.e. sticky-out extendy bits) than any of the others. But using renzograms, it becomes much much easier to see the extensions even when the centre of the contours drifts from channel to channel.
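For anyone curious, a renzogram is simple to construct: one contour per channel, all on the same plot, coloured by velocity. Here's a minimal sketch using a fake data cube (a drifting Gaussian blob mimicking rotation), not the real survey data:

```python
# A toy renzogram: one contour per frequency channel, overlaid on a
# single plot and coloured by velocity. The "cube" is a fake rotating
# Gaussian blob - illustrative only, not the actual survey pipeline.

import numpy as np
import matplotlib
matplotlib.use("Agg")  # render to file, no display needed
import matplotlib.pyplot as plt
from matplotlib import cm

def fake_cube(n_chan=20, size=64):
    """A blob whose centre drifts with channel, mimicking rotation."""
    y, x = np.mgrid[0:size, 0:size]
    cube = np.empty((n_chan, size, size))
    for k in range(n_chan):
        cx = size / 2 + 6.0 * (k / (n_chan - 1) - 0.5)  # drifting centre
        cube[k] = np.exp(-((x - cx) ** 2 + (y - size / 2) ** 2) / 50.0)
    return cube

cube = fake_cube()
colours = cm.rainbow(np.linspace(0, 1, len(cube)))
fig, ax = plt.subplots()
for chan, colour in zip(cube, colours):
    ax.contour(chan, levels=[0.5], colors=[colour])  # one contour per channel
ax.set_title("Toy renzogram: colour = velocity channel")
fig.savefig("renzogram.png")
```

In the real thing, any channel whose contour sticks out asymmetrically from the drifting stack immediately jumps out by comparison with its circular neighbours - which is exactly the trick that found the streams.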

The clearest stream we detected was from the galaxy VCC 2070. The orange contours show a tail, whereas the blue contours (at higher velocities) are nicely circular. It's much easier to see the sticky-out extendy bits when you've got the circular bits visible for comparison at the same time.
And we can tweak this for display purposes. We don't have to show contours for every channel, which can be confusing. Instead we can select the channels which show the extensions most clearly. Here's our main figure showing the streams we're most confident about :


How do we select the best range ? Well, we can also plot the contours in 3D, where these "renzograms" become true isosurfaces. Here's VCC 2070 again. You can clearly see the extension is found only on one side (in velocity) of the galaxy.


And here's the entire data cube, with the surface levels chosen for each galaxy to show everything as nicely as possible. Of course, this works much better when you have a fully interactive model to play with.



So, mystery solved, right ?

Well, umm... well... yeah, actually, it is ! Given how much I've warned y'all about science headlines that use this abominable phrase, I don't say this lightly. But in this case, it really seems to be true.

Finding the streams one expects to find is very nearly as surprising as finding a tiny hedgehog in one's tea.
Now you might think, looking at the above figures, that there's not much scope for doubt about the streams. You might think that with even more gusto if you know that most of the galaxies in our data have perfectly decent circular contours with no signs of streams at all - if they all showed such features, it might be due to some problem with the data processing.

But only a few show extensions. And the number of streams matches our expectation well, and the basic predictions of our model stand up. Galaxies our model says should be stripping have a much stronger preference to have tails, and galaxies which shouldn't be stripping don't have tails. The morphology, mass, length and velocity of the streams all exactly fit what we expect for ram pressure stripping. Everything just works.

And yet we had a terrible time convincing not one but two referees about this. For the life of me, I can't understand why. It's true that most streams are quite faint, but they're not that faint. So we spent an inordinate amount of time proving - yes, actual proper proof in this case - that the number of similar features we expect to see due to the noise in the data is essentially zero. We injected fake galaxies into empty regions of the cube containing nothing but noise, then searched through these to see how many contained what looked like streams. And to avoid fooling ourselves, we also injected fake streams into only a random subset of these tests, so that when searching we couldn't be sure if we were looking at something we'd injected or a genuine artifact.
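The blind-injection logic can be sketched in miniature. Here a trivial threshold detector stands in for the human searcher, and the "streams" are one-dimensional bumps in pure noise - all the numbers are made up, but the logic (inject into a random subset, classify blind, then score) is the same:

```python
# Miniature blind-injection test: inject fake streams into a random
# subset of noise-only cutouts, classify each without knowing which got
# a stream, then score the answers. All parameters are illustrative.

import numpy as np

rng = np.random.default_rng(42)

def make_cutout(with_stream, n=100, noise=1.0, stream_amp=3.0):
    """Noise-only profile, optionally with a faint one-sided 'stream'."""
    data = rng.normal(0.0, noise, n)
    if with_stream:
        data[60:80] += stream_amp  # the sticky-out extendy bit
    return data

def looks_like_stream(data, threshold=1.5):
    """Crude detector: is the candidate region's mean above threshold?"""
    return data[60:80].mean() > threshold

truth = [rng.random() < 0.5 for _ in range(200)]  # which cutouts get streams
guesses = [looks_like_stream(make_cutout(t)) for t in truth]

false_pos = sum(g and not t for g, t in zip(guesses, truth))
missed = sum(t and not g for g, t in zip(guesses, truth))
print(f"false positives: {false_pos}/200, missed streams: {missed}/200")
```

With these toy numbers the detector essentially never flags pure noise as a stream, which is the same conclusion we reached (with far more effort) for the real data: the noise doesn't produce stream-like features.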

Small selection of our fake galaxies, some with fake streams and some without. In other cases we also tried varying the direction, but here they all run from the centre of the galaxy to the right. Although in some cases a fake stream was injected but isn't clearly visible, hardly any show streams which are actually just artifacts due to the noise.
The bottom line is that we don't think any of our ten most confident detections are false. Our 16 other, less sure detections, may well have a significant fraction of false positives, but it's hard to quantify that. So we basically ignored all except the ten best streams in our analysis, to be on the safe side.

When we run the numbers, we find that we can indeed solve the mystery of the lack of bloodstains. Four factors combine to explain why so few streams were visible. First, they do exist, but it takes the right survey and the right visualisation techniques to find them. Second, you have to account for the number of galaxies currently losing gas. Then, third, the geometry of the streams affects their detectability. And finally, using the observational measurements plus our analytic model, we can calculate how fast the gas disperses. The dispersal rate seems to be fast enough that low-mass streams quickly fade from view, but the high mass ones can last much longer. So there's no magic wizard, just gas evaporating quite naturally.

(Or, if you prefer, if you prick someone with a needle, the slowly dribbling blood will quickly evaporate away. Stab 'em in an artery with a knife and the gushing torrent will take ages before it disappears.)
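That mass-dependent lifetime can be put into a toy calculation: let a stream spread sideways at some dispersal speed, so its mean column density falls as its footprint grows, and count the time until it drops below a (hypothetical) survey detection limit. Every number here is invented for illustration; this is not the model from the paper.

```python
# Toy stream lifetime: a stream of fixed HI mass widens at a constant
# dispersal speed, so its mean column density falls, and it "dies" once
# N_HI drops below an assumed detection limit. All numbers invented.

M_SUN = 1.989e33   # g
M_H = 1.674e-24    # g, hydrogen atom
PC = 3.086e18      # cm
KPC = 1000.0 * PC
MYR = 3.156e13     # seconds per megayear

def visible_lifetime(m_hi_msun, length_kpc, w0_kpc, v_disp_kms, n_lim=1e18):
    """Myr until the mean column density drops below the detection limit."""
    atoms = m_hi_msun * M_SUN / M_H
    # Width (kpc) at which the mean column density equals n_lim:
    w_max = atoms / (n_lim * length_kpc * KPC) / KPC
    if w_max <= w0_kpc:
        return 0.0  # already undetectable at its starting width
    return (w_max - w0_kpc) * KPC / (v_disp_kms * 1e5) / MYR

# A 50 kpc stream starting 1 kpc wide, dispersing at 100 km/s:
for mass in (1e6, 1e7, 1e8):
    t = visible_lifetime(mass, 50.0, 1.0, 100.0)
    print(f"M_HI = {mass:.0e} M_sun -> visible for ~{t:.0f} Myr")
```

The low-mass streams vanish within tens of megayears while the high-mass ones persist for gigayears - the needle-prick versus the severed artery.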

The other thing I have to mention is the orientation of the streams. Google Plus survivors may remember I ran a poll to check if I'd identified the direction of the streams correctly. What was odd about this was that they all seemed to be pointing either towards or away from the centre of the cluster, even those that were from galaxies in separate sub-clusters that are just too far away for the main cluster to have any real influence :

Streams in our survey area. The red arrows show the actual stream direction, while green arrows point towards the cluster centre. Grey and black outlines highlight galaxies which are in more distant sub-clusters where there shouldn't be any coherent alignment towards the main cluster.
Here I will again reluctantly concede to one of the referees that it was probably better that we took this out of the final paper, but I'm not thrilled about it. What we think is going on is just small number statistics. For the galaxies in the main body of the cluster, we expect there to be this sort of alignment so there's no problem. Once you take those out, the numbers in the sub-clusters are pretty small. It would only take a couple of membership misidentifications before you could find completely different patterns - with regards to cluster centre alignment, we're essentially seeing what we wanted to see, not necessarily what's really there. The same data can be interpreted very differently, depending on what you think is significant.

If we give ourselves a little slack, we can select regions where the streams appear to be aligned in very different ways. So the large-scale alignment is probably just a coincidence.
Of course, we've only looked at a small part of the cluster, so it's still possible there are other mysterious things going on elsewhere. In fact we know there are, because as I described previously, certain features just don't make sense. And we've solved the mystery statistically, not for every single individual object. So this claim that everything is done now is strictly limited. But the fundamental problem of why we don't see that many streams - that, I think, is well and truly done.

What's next ? Well, we've got loads of other old data cubes that we could re-analyse, and brand new Virgo data coming in. With more galaxies, we might be able to say something about the orbits of the galaxies in different parts of the cluster, and see where gas loss is most active. It's also a bit of a puzzle how some galaxies are losing gas despite being in the cluster outskirts. But for now, given the years it took to get this far, it's time to do something else completely. After all, there's only so much research into the death-throes of a windy stripper that any self-respecting astronomer can put up with.