Follow the reluctant adventures in the life of a Welsh astrophysicist sent around the world for some reason, wherein I photograph potatoes and destroy galaxies in the name of science. And don't forget about my website, www.rhysy.net



Saturday, 1 December 2018

You Can't Tell Me What Not To Think (I)



Marine Le Pen, right-wing French nutter, was recently "uninvited" to a technology conference. Nicola Sturgeon, leftist leader of Scotland, is boycotting an event featuring badly-groomed nutcase Steve Bannon. Predictably enough, these choices triggered concerns that they might provoke a variant of the notorious backfire effect, in which an attempt at persuasion causes someone to hold more strongly to their original opinion. The hope is that by banning a speaker, they have less of a platform from which to attract supporters and less legitimacy; the fear is that the ban will attract more attention than if they were allowed to attend. Similar claims and concerns beset every incident of "no platforming".

The backfire and Streisand effects are distinctly different : the backfire effect is when an argument causes or strengthens the opposite belief to the one it was intended to promote, while the Streisand effect is when restricting speech attracts unwanted attention. Both share the common feature of producing an effect opposite to the intention, and while not the same thing, they are not mutually exclusive either. Both relate to the moral principles and effectiveness of attempts to control the flow of information.

This is the first of three posts in which I'll concentrate on the effectiveness. In particular, under what circumstances do bans successfully prevent the flow of information and when do they have the opposite effect ? When they backfire, do they cause a wider spread of information, do they only strengthen existing belief, or both ? Or neither ? When does exposing a viewpoint only harm its supporters, and under what conditions does it help normalise the idea and help it spread further ?

In this first introductory post, I'll concentrate on persuading individuals and the effects that arguments can have on them in isolation. In particular I'll give some extreme examples to illustrate the main point : it's not the ideas themselves which matter, it's the conditions surrounding them which govern whether they will attract unwanted attention or simply fall on deaf ears. In the second I'll summarise the more general, typical causes of successful and unsuccessful persuasion. Finally in the third I'll examine the flow of information in groups and why this isn't as simple as the persuasive techniques used between individuals. Although there are plenty of links to my own musings, for the most part I've tried to make this one into more of a compendium of findings from actual psychological studies.

So, let's begin.



When the backfire effect doesn't happen

The backfire effect is so popular on the internet that sometimes one gets the impression it's an inviolable law of nature that can never be avoided. Let's knock that one on the head, starting with some simple hypothetical examples that may lead us to more general trends.

Do you always decide to believe the opposite of what anyone says under any circumstances ? Or is your sense of morbid curiosity always inflamed whenever someone tells you not to do something ? Of course not. If your dear old mum tells you it's raining outside, you don't insist that it must be sunny. And you definitely won't insist on continuing to believe that it's sunny if she was proven correct on checking. That level of irrationality is very rare indeed. Even if you already thought that it was sunny - say you'd looked outside yourself a few minutes ago - what this new information will generally cause is not, I'd say, active disbelief, but simple, momentary confusion. You probably won't even feel inclined to check at all : you'll accept it without question, on trust. In fact you won't even care.


Or suppose some lunatic claimed to have discovered a goose that laid bitcoins instead of golden eggs, and no news outlets bothered to report it. Why would they ? Lunatics say crazy things all the time by definition. There's no news there. What would happen ? Simple : no-one apart from those within shouting range of the lunatic would have any idea of what they're saying. The idea might still spread by word of mouth, but because it's such an uninteresting story it would spread very, very slowly - even on social media. It would probably not reach many people*, and hardly anyone at all would believe in the magical goose**. It'd be discussed but only because it was silly, and only other lunatics would actually believe it. If anything, it'd be the lunatic causing people to disbelieve them by shouting obvious nonsense, not the media who weren't reporting it. Not even if people discovered the media weren't reporting it.

* It's an under-appreciated fact that most things people say on the internet do not go viral. The overwhelmingly vast majority of discussions remain extremely limited and don't propagate in the slightest. Or maybe it's just me being reeeeally boring... ?
** Not to be confused with the Magical Moose, of course.

So in this example the backfire effect would occur, but it would obviously be the lunatic who was distrusted, not the media. Why ? Because anyone with an ounce of rational judgement can instantly say that the madman's claim is far more likely to be nonsense, and thus they would lower their trust in the lunatic and possibly even increase their trust in the media (if they had reported it, they'd have been criticised as being gutter press - "give us some real news !"). Only other lunatics would behave in the opposite way. There would be no Streisand effect at all, and no backfire effect on the media.

Or to put it another way, people have access to multiple sources of information. Shouting about a bitcoin-laying goose is in such absolute contradiction to all of those other sources - science, direct observation, etc. - that it couldn't be believed by anyone rational.

Now imagine that the media decided to actively report on the fact that they weren't reporting a story. "Nope, there's definitely nothing interesting about ol' farmer Bob, he's just a crazy old coot." Well, if there's nothing interesting, why in the world are they even reporting it ? Suddenly I really wanna know who farmer Bob is and why they're not reporting on him ! An active, promoted ban is like having evidence of absence (there must have been something to search for, something that can be absent), which is very different from a complete lack of reporting, which is more like having absence of evidence.


Yet while this will certainly draw attention to the issue it wouldn't otherwise have had, and it might sow distrust in the media, it still definitely won't make anyone receptive to the possibility of magical geese who wasn't receptive anyway. Oh, it'll cause more people to believe in magical geese, yes, but only by reaching a greater number of plonkers than in the case of pure non-reporting. It won't make anyone the slightest bit more susceptible to believing nonsense. There's a big difference between spreading information and causing belief, and an even bigger difference between making people hold rational conclusions and them actually being capable of rational judgement.

And the thing about actively-reported bans is that these effects are largely temporary anyway, because few of them stay actively reported for very long. Stories about people being unwelcome in certain venues tend to last about a day or two in the media, rarely longer, because they quickly become uninteresting non-stories. Only very rarely does this ever seem to cause a viral story with mass attention it wouldn't have otherwise received. Novelty would seem to be an important factor - usually when a speaker is barred, we already know the gist of what they would have said, whereas with information - books, music, academic papers - curiosity is inflamed. The Streisand effect seems to be more related to what they wanted to say than to who was trying to say it.


All this presumes both that most people are basically rational and that the media are trusted. The situation is completely different if there's a chronic culture of media untrustworthiness : if the media are not trusted, then nothing they say can be believed. And how can you have people behaving rationally and believing rational things if they have no coherent, reliable information to assess ? You can't. Isolated examples of nonsense simply cause the inherently irrational to more fully develop their own irrationality; in contrast an endemic culture of reporting gibberish inevitably has the entirely different effect of actually making people irrational. You cannot have a rational society without trusted information.

Which is why, given the chronically irrational state of British politics, there's a non-negligible chance I'll be voting for these guys at the next election.

So we know for certain that, at the very least, presenting information doesn't automatically trigger a backfire effect. Politicians don't try and win approval by always saying the exact opposite of what they really think. Advertisers don't tell you their product is utter shite. We don't lie to each other the entire time. We're often more rational, or at least more honest, than we give ourselves credit for : I'd say that our irrational, incoherent behaviour is only unleashed when we have to deal with situations our evolutionary history hasn't prepared us for. Basic social interactions are therefore fine, but understanding statistical methods and interpreting complex data - that's where we fall down.

We also know that there are entirely ethical persuasive techniques that give measurable, significant increases in appropriate responses that don't use or cause the backfire effect at all. And there are methods of argument that are specifically designed to avoid the backfire effect. More on these in part two.

At least one study has gone to the extreme and claimed that the backfire effect actually never happens. #Irony, because that surely encourages believers in the backfire effect to believe in it more strongly... it's most likely an overstatement, but the study was quite careful so we shouldn't dismiss it out of hand. It provides, I think, further evidence that the effect is not inevitable, and may depend much more strongly on how the counter-arguments are framed than we might have guessed. The broader social context - e.g. current global affairs - and raw factual content may play only minor roles. Stating the blunt facts is not enough in itself to cause a backfire, even if they have obvious, emotional consequences. Rather it's about how those statements are presented and in what context : who says them, how they make the other side feel, and what alternatives are available.



When the backfire effect does happen

The backfire effect may not be ubiquitous, but it's hardly a rarity. Think of the last time you heard a politician speak, and I bet there's a fair chance you ended up believing the opposite of what they intended - unless you already agreed with them. Or better yet, not an actual politician but a political activist : someone trying to promote belief in broad ideologies, not necessarily specific policies.

The backfire effect happens largely when you already dislike something : either that specific piece of information, or, perhaps more interestingly, you have other reasons to be predisposed to disliking a new idea. There's also a very important distinction between disliking the source and disliking what it says. For example, I loathe Gizmodo's sanctimonious reviews of apolitical products, e.g. movies, but they tend to make me hate Gizmodo itself more than they ever make me hate whatever it is they're reviewing.

It doesn't always react like this to all new concepts, of course.

Even more subtly, you might hate an argument, but that doesn't necessarily mean you don't accept it. The acceptance level and emotional approval of an idea are surely correlated, but they're not the same thing. I mean, Doctor Who is wonderful and all, but it's not a documentary. I might accept its moral teachings but reject its wibbly-wobbly, pseudo-weudoy "science". Or I might reject that it has any applicability to the real world at all, including its moral commentary, but still find it entertaining. Political satire provides another case of enjoyable lies that attempt to reveal deeper truths.

Perhaps the simplest case might be when you've already formed a firm conclusion about something and then you encounter a counter-argument. Depending on the amount of effort it's taken for you to reach your conclusion, and especially if you're emotionally invested in it, the opposing view may not go down well. But even here there are subtleties - again, it might just cause you to dislike the source, not believe any more strongly in your view. And even that might be temporary : when you just casually read something you think is stupid on the internet, you can simply say, "that's stupid", move on, and that mildly unpleasant incident might not even persist in your long-term memory at all. Things you dislike can thus have no impact whatsoever.


There are a couple of possibilities which might cause a genuine strengthening of your belief. The argument itself may do this, depending on how it's framed : saying, "all Doctor Who fans are stupid" isn't going to stop anyone from watching Doctor Who, and might even make them enjoy it more : it induces a group identity, so they will now make their Doctor Who fandom a - slightly - more important part of their identity. Or it may simply press the wrong buttons, using emotional rhetoric ("Daleks are shite !") for people who want hard data ("Daleks only exterminate 24.2% of people they interact with !"), or vice-versa, only using hard data for people who need an emotional component. Or it may use ideas and evidence the opponent is already convinced aren't true, automatically giving them reasons to be suspicious. Or perhaps the supporting reasoning is fine but the final conclusion isn't compatible with their existing beliefs, or at least isn't presented as such. There are indeed very many ways in which a persuasive technique can go wrong, with large variations between individuals*. Unfortunately, one man's compelling argument is another's pointless and deluded ranting.

* Witness the success of targeted advertising, with the significant caveat that advertising isn't terribly effective so improvements aren't necessarily that difficult.

A more interesting failure may occur with rebuttals. One important lesson from school debates (and indeed the above links), where we often had to argue positions opposite to what we initially thought, was that arguing a position creates an emotional attachment which does cause a true shift in stance. Getting people to argue for something gives them a sense of ownership; it becomes part of their identity (not necessarily a very important one, but it doesn't have to be). Counter-arguments are then registered by the brain in the same way as attacks, even if the argument wasn't personal at all. While it surely helps to avoid saying that the other guy is a stinky goat-fondling loser whose mother was a cactus, that's not always enough. Mere argument with the issue itself is subconsciously perceived as an attack whether we intend it as such or not (though in a carefully managed situation explicitly framed as a debate, this is not necessarily the case).

And we're not always on our own either - we may be part of a group. Feedback plays an important role. If people praise us for what we say, we're more likely to believe it. So if we're receiving praise from "our side", and criticism from those smelly losers, then even the carefully-stated, well-reasoned statements of the other side may struggle in vain and cause us to dig our heels in deeper. This is perhaps why certain philosophers preferred to debate people one-on-one or in small groups, never en masse.


Note the key difference between changing a strength of belief (making you hate Doctor Who even more than you already do) and changing the stance (making you hate Doctor Who when you previously liked it). Techniques to cause both effects are not necessarily the same : insulting members of different groups isn't going to cause them to like you, but it may strengthen the bonds within your own group. Most political memes, on that basis, seem to make a massive error : they rally support, seeking to motivate the existing troops rather than gain new recruits. More on that in the other posts.


Who do you trust ?

So while hating the source doesn't automatically lead to disbelieving an argument, it can eventually have an even stronger effect. One possibility is, as mentioned, that we don't trust anything at all, becoming pretty much entirely irrational about everything. Another is that we fall victim to extreme tribalism, entering what some people call an echo chamber, an epistemic bubble, or what I like to call a bias spiral.

When you despise a source strongly enough, you may be capable of hearing their arguments but not really listening to them. Every vile word that comes out of their whore mouth is processed not as legitimate information, but wholly as evidence of their partisan bias. If they give you evidence - even really good, objective evidence - that counters your belief, you may see it only as a further indication of their own ideological ensnarement. This is both a manifestation of and route to absolutist thinking, where you already know the facts so any contradictory statements are simply evidence that the other side is lying, stupid, or themselves trapped in their own cult-like filter bubble. You have in effect become inoculated against opposing ideas, and can listen to them freely but with little chance of them ever making any difference. Everything the opposition says can have a backfire effect. Which isn't good.

Not listening is bad enough, but perverting evidence is much worse.

And that's where I think people tend to get confused about the effect restricting information has. It doesn't necessarily mean the Streisand effect will cause the idea to propagate further. It won't necessarily mean the idea gains a single new devotee. But what it can very well mean is that in certain circumstances existing believers become even more convinced, if not of the idea itself then certainly of the bias of the other side. The effect on non-believers can be more subtle, but more on this in part three.

That's a really extreme case, of course. A more common, lesser manifestation is activist thinking : at some point, we all want to convince others of something because we genuinely believe we're right. Someone who is on an active campaign of persuasion isn't themselves very receptive to persuasion. They've made it their mission to change everyone else's mind, so psychologically it will be extremely difficult for them to admit they're wrong. Techniques here have to be slow and subtle, gradually undermining the reasons for their conclusion but never letting them suspect this threatens the conclusion itself; gradually getting them to find ways to undermine their own idea rather than telling them they're a nutcase (even if they are).

Which of course raises the issue of how to judge if someone is crazy, and the even more complex issue of how to decide what's true (which for obvious reasons I'm largely avoiding). One fascinating idea is that you should examine their metaknowledge : ask them how many people agree or disagree with them. The most informed people who have investigated ideas in the most depth don't just know about one option, but have broad knowledge of the alternatives as well. And this means they have the most accurate knowledge of how many people believe which idea, which is something that can be reliably and directly measured.

So if you're looking for someone to trust, look for someone with accurate metaknowledge; if you want to judge which idea is correct, look not just at the broad expert consensus but at the consensus of the experts with the best metaknowledge. For example, most Flat Earthers are probably aware they're a minority, but most of them will assess this far less accurately than genuine astrophysicists.
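For the quantitatively minded, here's a toy sketch of what "measuring metaknowledge" could look like. This is purely my own illustration, not from any study : the numbers, the group, and the scoring function are all made up.

```python
# Toy sketch (entirely illustrative, invented numbers) of scoring
# "metaknowledge" : how accurately each person estimates the fraction
# of the group that shares their belief.

def metaknowledge_error(estimated_share, actual_share):
    """Absolute error between someone's estimate of how popular their
    belief is and the true fraction (both in the range 0 to 1)."""
    return abs(estimated_share - actual_share)

# Hypothetical group : 95 round-Earthers and 5 Flat Earthers.
beliefs = ["round"] * 95 + ["flat"] * 5
share = {b: beliefs.count(b) / len(beliefs) for b in ("round", "flat")}

# Both parties know which side is the minority, but estimate the
# actual split with very different accuracy.
astro_error = metaknowledge_error(0.95, share["round"])  # guesses 95%
flat_error = metaknowledge_error(0.40, share["flat"])    # guesses 40%

print(astro_error, flat_error)  # the astrophysicist's error is far smaller
```

The point the code makes concrete is that "knowing you're in the minority" is a much weaker signal than accurately estimating the size of that minority, and only the latter is directly checkable against survey data.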

This won't be perfect. Science isn't a done deal, because that is simply not how it works. All you can do is judge which theory is most likely to be correct given the present state of knowledge. Understanding of group knowledge is a good guide to this - but of course, it has its own flaws.

Persuasion also requires time. Of course when we do enter into activist mode, as we occasionally must, we all hope that we'll argue with people and convince them in short order, landing some "killer argument" that they can't possibly disagree with. This is a laudable hope, but it may not be a reasonable expectation. Instead, presume that the best result you'll achieve is to plant the seed of doubt. Arguing for uncertainty first gives that seed a chance to germinate later. We'll look at some of the reasons for this in parts two and three.



Summary

People are weird and complicated. They don't have buttons you can press to get whatever result you want. But there do seem to be some plausible, useful guidelines. In general you can't just say, "go over there !", but you might be able to steer them in the direction you want them to go. For instance, consider the extremes :
  • If you tell someone who trusts and respects you some information that doesn't conflict with their ideologies, is consistent with their existing ideas and expectations, only adds incrementally to what they already know, makes them happy, is trivial to verify, and you say it in a nice, respectful way, then there's very little chance of it backfiring. The worst you can expect is that they might go and check it for themselves, if it's something that excites their curiosity. They won't even do this if they don't care about it. 
  • If someone despises you and thinks you're an idiot, and you come along with some idea that's completely at variance with their established beliefs, requires a huge conceptual shift in their understanding, makes them angry, makes them start arguing but really requires expert analysis to check properly, and you garnish it with insults, then this is almost certain to backfire. At the very least, it's ridiculously unlikely to persuade them of anything, and might just cause them to hate you even more. If they didn't care about it before, they may start to do so now. And if it's something they already care about, curiosity will be of no help : they might start to further rationalise their position rather than examine it.
So our default assumption that if we get into an argument with a stranger on the internet, they'll definitely, definitely be swayed by the sheer force of our evidence looks pretty ridiculous when you stop to think about it. Even leaving aside the emotional, irrational side of human nature, if we were to trust everyone who said anything to us in any circumstances, we wouldn't live in a Utopia of rational thinking. We'd be a bunch of feckin' idiots. The idea of an intelligent thinker being completely unbiased may even be fundamentally impossible. In short, rather than assuming that the other guy is the emotionless, ever-logical Spock, start by assuming that they're the prickly, sensitive Dr McCoy. You may not like their irascible nature, but you damn well have to account for it.

Of course, if it turns out they're actually more like Spock, then you'll have to change tactics.

And we've also seen that there's an important distinction between your opinion of the source and your opinion of the idea. The two are interrelated, but in a complex way. Arguments you dislike cause you to distrust a source, and vice-versa. It really should come as absolutely no surprise to anyone that if you get people to listen to opposing political opinions, i.e. people they don't like saying things they don't like, they become more polarised, not less. It's nothing to do with people just not listening to the other side, that's not it at all. Although people do sometimes attack straw men, in my experience this claim is massively over-used. More typically, it's precisely because they've already listened to the other side that they decided they don't like them, so further contact isn't going to be of any help.

Yet in other circumstances the opposite happens. The conditions in which people trust the evidence presented are not entirely straightforward : more on that in part two.

While I'll look at how information flows in groups in part three, we've already seen how attracting attention to an idea is largely a matter of curiosity. I've claimed that this means the Streisand effect occurs during a botched attempt to hide information. No-platforming efforts seldom cause prolonged mass interest, except perhaps to those who weren't familiar with the speaker anyway, whereas banning, say, a new music track has often caused sales to spike. Reporting the ban inflames curiosity even further, whereas just not mentioning it at all can be much more successful. Banning is more likely to cause believers to harden their position than it is to create new believers.

Superinjunctions - laws forbidding the reporting of (say) a crime, the alleged criminals, and even the existence of the injunction itself - are perhaps the worst case : even a tiny failure, a minuscule leak, will send curiosity levels soaring because so much information is hidden. It's not really the people involved that excite curiosity, it's what they said and did.

I'll end with a personal example. Back in 2016 there were rumours that a British celebrity was involved in a sex scandal. Nothing unusual about that, except that the British media were forbidden from reporting any details at all. Had they just told us it was Elton John, I wouldn't have cared an iota. But they didn't, so I specifically trawled the internet just for the point of finding out who it was. I still didn't care about either the issue itself, or even who it was, really (at most I was mildly curious), I just wanted to see if I could overcome the "ban". It wasn't, I have to say, entirely easy, but it wasn't impossible either. At first the difficulty became a motivation to continue, though after a while it began to seem like a motivation to stop. However, had I been a bit less motivated (or just busier), or had the search been just a little harder, the superinjunction could have succeeded.

More of that in part three. Next time I'll look in more detail at how techniques of persuasion work, and how while people can sometimes be extremely easy to convince, at other times they are impossibly stubborn. Knowing persuasive techniques can help, but sometimes it's better to step back and not enter the discussion at all.

Sunday, 11 November 2018

Now Entering A Seminar-Free Zone

It never rains but it pours ? I seem to have a year's worth of talks compressed into a three-week period, which makes my head hurt.

Last time I went briefly gallivanting around the mean streets of Strasbourg. Exactly one week later I had to repeat the seminar in Ondřejov, because the theoretical physics institute have an annual field trip there. Ondřejov is a small village about half an hour's drive outside of Prague and is home to a pizzeria and the bigger half of the Astronomical Institute but nothing much else. This, as you might guess, is due to the same reason astronomers have a backwards way of measuring brightness and a spectral classification scheme that goes OBAFGKMRN : history.

Ondřejov (last year) in winter is a bleak place.

A hundred years ago, if you were a Czech looking to set up an observatory, Ondřejov looked like a decent spot. Prague was still small and distant enough to limit light pollution, but close enough to have access to the infrastructure of a large city. You didn't need the kind of massive facilities of today to do valuable science, much less have to set up your telescope at the top of a Chilean mountain to minimise atmospheric effects. So if you were a rich Czech noble (who'd made a fortune making optical instruments to measure the alcoholic content of beer) with a penchant for astronomy, it wasn't a bad place at all. And as it happened that's exactly what Jan Frič was, so he built a small observatory there.


This turned out to be an astute move and the institute grew over the years, eventually transitioning from the old mode of gentleman science into administration by the Czech Academy of Sciences. And since the political forces of history are just as inviolable as the laws of physics, in 1967 the observatory gained a 2m telescope - at the time, the 7th largest optical telescope in the world.


A 2m telescope is not terribly impressive by today's standards, but it's not to be sniffed at either. You can still do perfectly good research with such an instrument. Still, nobody in their right mind would put such an expensive piece of kit somewhere that only gets 60 clear nights a year. Astronomy, let's face it, is generally a field where you put money in and astronomy comes out, but 60 nights a year means your money isn't going to get you much scientific bang for your Czech buck*. But history has declared that's what we've got, and arguing against history is largely hopeless. Fortunately, the telescope is the focus of renewed efforts to maximise the possible scientific returns thanks to the still-youthful field of exoplanets.

* The Sloan Digital Sky Survey is one of the most important surveys of modern times, and that only uses a 2.5m telescope, but it's in a much better location with more modern equipment.

Anyway, Ondřejov is a nice place and I figured it would be worth repeating the Strasbourg seminar to a brand new audience and get double the use out of the not-inconsiderable preparation time. Adapting it was more work than I thought : it had to be five minutes shorter, but then I realised there would be undergraduates present as well as academics, so I had to make it simpler too. Which meant a lot more of me giving preliminary practice seminars to empty rooms to get the timing right.

Before the afternoon seminars kicked off, we had a tour of the historic part of the observatory - from the director, no less. Which was very nice, especially because a) I'd never been in any of the old buildings before and b) practically nothing is in English. I still don't know what everything is, but mechanical computers and other instruments from a bygone era are always fun to look at. Here's a bunch of pictures with close-ups of the description panels for enthusiasts.


An early Oculus Rift, which wasn't even in colour.

I want one. Dunno what I'd do with it, but who cares ? I'd look cool doing it.



We should revert to this style because it was the best one, dammit.


Again I've got no clue what this is but it looks nice.

This one I do know : it's a Frič polarimeter, used for measuring the alcohol content of beer. The Czechs still express alcohol content as a polarisation rather than a percentage.

Then there was a seminar by someone else, which was very good, and then there was mine, about which I make no claims. It's always fun to get a large audience to wear 3D glasses and the physical data cube is always a hit. Which is good, because seminar preparation is pretty draining for me. Two in a week would have been a healthy limit really.


By this point I was pretty tired, but prepared to sit through another talk or two. I'd aimed mine specifically at people who might not be observational astronomers by training, as had the first guy, but the others... hadn't. First there was one on relativity, which was very clearly targeted at a highly specialised audience. Which, to be fair, constituted most of the group, but I understood practically none of it. It didn't help that the speaker seemed to be monumentally unenthusiastic, a widespread phenomenon that I simply don't understand.

Then there was a break, followed by more talks. I don't even remember what the next one was about at all. Then another talk started, something about molecular physics which looked much more interesting but I was already bored half to death. Mercifully I managed to escape with some other people who were being evacuated back to Prague.

On the train, the relativity dude turned out to be a normal person who just seemed completely exhausted. Fair enough. Our other companion, however, was one of those people you meet in science - someone who's clearly a space alien. In this case he mostly sat in brooding silence, but would occasionally and without any provocation or context start blurting out his hobbies for no reason whatsoever. First we got to hear about rollerblading, which you can at least make some pathetic small talk about, "I suppose it's good exercise"; "not much fun in winter"; that kind of abysmally boring "conversation" that does nothing except expend time and further the progress of the Heat Death of the Universe.

His second unprovoked meanderings were about his efforts to write a novel. Something about a physics lecturer who meets a piano teacher who teaches him the true meaning of Christmas, or to see beyond the equations and how to become socially acceptable. Some pointless nonsense like that. I forget exactly, because it was such a "the hell am I listening to ?" moment that my brain was fighting to decide if that was really what he was saying, whether I might be missing something essential, or if it was just too dang tired and would prefer to shut down now if that's not too much trouble.


A fair chunk of the weekend was spent preparing the third talk, a much shorter one at the IT4I supercomputing centre in Ostrava. This one had to be prepared largely from scratch, since it was aimed at an almost entirely non-astronomy audience. It consisted largely of infographics from my last science post, which I think was a good idea. This mini-conference was a one-day event aimed at bringing together users of the powerful computing facilities at Ostrava. This mostly seems to be researchers of the very small : quantum physics and genetics, that sort of thing. So keeping things ultra-basic and simplified is the only realistic way to explain what we did with the ~400,000 core hours in 12 minutes or less.

(The other talks were a mixed bag. Some were good, some were awful, one was clearly intended to be 45 minutes long but the session chair said nope. There was a lot of terminology I didn't understand and some I suspect to be typos : antiferromagnetic, radio zebras, health breast phantoms. Regular physics is weird.)

Preparing the presentation didn't take all that much time, and the nice thing about 12 minute talks is they don't take long to practise. But because the conference was one full day, and Ostrava is 3 hours away from Prague by train, we went there the evening beforehand and left the following morning. The hotel was none too glamorous either.


As for the interior, it looked like nothing so much as a badly-converted hospital or dormitory.


The shower refused to point anywhere except at the wall and there were too many noisy students outside to keep the windows open. But it was functional, clean, and I survived.

All this tiring travel and repeated presentation preparation came with a perk : a tour of the supercomputing facilities. These are a very far cry from the mechanical museum pieces at Ondřejov with their punch cards : the machine still ranks respectably high in terms of modern global performance, and is well-maintained and continuously upgraded. The tour (again by the director !) was excellent. We started with a look at the machines from inside a showroom :


It looks very science-fictiony : kept in darkness behind glass, with enough LEDs to cover a street's worth of Christmas trees.


We spent quite a while looking at it like this, with the director turning on spotlights to highlight specific parts of the machines. Eventually he admitted that these are only for show and turned the lights on properly.


The computer produces so much waste heat that they don't need a dedicated heating facility to keep the staff warm : they just use the water-cooling system that stops the processors from melting. When there's downtime - and as far as I can tell the director was being sincere - they genuinely get cold. The last time they tried to use the more usual radiators they ended up with a minor flooding problem.

The computers also consume a crazy amount of power. To prevent damage or data loss by power interruptions, they have two backup diesel generators. But these take about 30 seconds to start. The gap is filled by a 9 tonne flywheel rotating at 2000 rpm, which, if you've ever seen Robot Wars, you'll know is downright terrifying.
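Out of curiosity, we can estimate just how much punch that flywheel packs. The numbers from the tour give the mass and spin rate, but not the radius, so the sketch below assumes a 1 m solid disc purely for illustration :

```python
import math

# Rough estimate of the energy stored in a 9 tonne flywheel at 2000 rpm,
# and the average power it could deliver while bridging the ~30 second gap
# before the diesel generators spin up. The flywheel is modelled as a
# uniform solid disc; its radius wasn't given on the tour, so 1 m here
# is purely an assumption.
mass = 9000.0                       # kg (9 tonnes)
radius = 1.0                        # m (assumed, not from the source)
omega = 2000 * 2 * math.pi / 60     # angular speed in rad/s

inertia = 0.5 * mass * radius**2    # moment of inertia of a solid disc
energy = 0.5 * inertia * omega**2   # stored kinetic energy in joules
power = energy / 30                 # average power over the 30 s gap

print(f"Stored energy : {energy/1e6:.0f} MJ, bridging power : {power/1e6:.1f} MW")
```

Under those assumptions it stores something like a hundred megajoules - a few megawatts of bridging power, which is at least the right ballpark for a supercomputer. And yes, still terrifying.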

We didn't get to see the generators, but we did see the cooling systems. Unlike astronomical facilities, these are a testament to neatness and good order. Think Half Life 2 if everyone was insanely tidy.



The computer itself is behind glass for a very good reason - the air is hypoxic, with an oxygen content of just 15% compared to the normal 21%. This, apparently, is the sweet spot that makes it difficult for fire to spread but is still enough to work normally. At 13%, on the other hand, you'd pass out. It's roughly equivalent to being at the top of a 2,700m mountain : you notice it, but it's not awful.
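That altitude figure checks out with a back-of-the-envelope calculation : breathing 15% oxygen at sea-level pressure gives the same oxygen partial pressure as normal 21% air at whatever altitude total pressure drops to 15/21 of sea level. A simple isothermal barometric model (the 8 km scale height is a rough textbook value, my assumption rather than anything from the tour) gives :

```python
import math

# Equivalent altitude for a 15% oxygen atmosphere at sea-level pressure.
# The O2 partial pressure matches normal 21% air at the altitude where
# total pressure falls to 15/21 of its sea-level value. Isothermal
# barometric model : p(h) = p0 * exp(-h / H).
H = 8000.0           # assumed atmospheric scale height in metres
ratio = 15.0 / 21.0  # hypoxic O2 fraction over the normal O2 fraction

altitude = -H * math.log(ratio)
print(f"Equivalent altitude : {altitude:.0f} m")  # roughly 2700 m
```

Close enough to the quoted 2,700 m for government work.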


You can't help but admire the neatness of the whole thing.

And so then we went back to Prague, leaving the hospotel at a bracing 6:30am.


Could I relax ? Nope, because I had more preparations and only two days to do them in : a public talk at our institute's open day and an escape room. The public talk I simply recycled from a previous one because there wasn't time for anything other than minor modifications and figuring out what the hell I was supposed to say. Only the title slide contained any text, since anything more would have required translation, so I had to re-invent the speech based on the images and movies. It seemed to work though, and 3D movies and props almost always help.


The escape room was a completely new idea that a few of us came up with some months back. Since we're probably going to re-use it, I don't think I should give away too much. It's slightly different from the usual escape room concept where you've got an hour or so to figure out how to escape a locked room (typically themed, with various puzzles to undo locks in a particular sequence). We'd tried one where you have to escape a plane, which was quite fun but the puzzles didn't seem to have much relation to aircraft or the story. We wanted to fix that and make it at least related to astronomy (if not genuinely educational, which would be asking a bit much).

What we eventually came up with was a story of a scientist who's made some major breakthrough but been abducted. The players take the role of his students, who come to wait in his office. Instructions by telephone from another scientist tell them that someone's just been to his house and is now heading towards the university. Players get 45 minutes to discover his secret research and email the results (an alien signal) to the outside world before the evil corporation comes along to suppress it. The puzzles are astronomy-based, including the Hertzsprung-Russell diagram, exoplanet radial velocities and cross-matching galaxy catalogues. There are also simpler puzzles involving astronomy-themed chocolate. Since this was in Czech, my role was limited to making some of the documents the players need, including a Pioneer-style plaque identifying where the aliens are from.

I decided the aliens should be like the aliens in Commander Keen but happier.
Which was a lot of fun even to test. The puzzles were more difficult than we anticipated (one test group found an important prop but insisted on putting it back where they found it before using it properly...) but eventually, with enough hints, we got it down to being solvable. The fastest group did it in 30 minutes.

And then I went home and collapsed.


(Normal blog services of extended rants will be resumed shortly)

Sunday, 28 October 2018

Strictly Come Strasbourg Science Seminaring

Hey look, a travel post ! Remember those ? I keep forgetting to write them.

3:30 is not a time that should ever occur in the morning, and if it does, it should only happen because of pub-related shenanigans. Unfortunately, if I wanted an expenses-paid trip to Strasbourg to give a seminar (which I did), I'd have to both avoid the pub and haul my sleep-deprived self out of bed at that ungodly hour and onto a plane. Worse, as well as being too paranoid to risk the 5am metro for a 6:30am flight, I'm so scared of missing flights that I barely slept at all. Though sunrise from a plane is always nice, even if you're heavily sleep deprived and irrationally scared of missing flights that someone else has paid for.


Toward the end of the second, mercifully uneventful flight, the little jet descended from the rosy dawn into the grey gloom of Strasbourg.


By Welsh standards, this weather - i.e. not raining - is positively delightful. And it got better later on anyway. From the airport so small you could practically spit from one side to the other, it was a simple ten minute train ride to the city centre station. Which is, from the inside, an interesting mix of classical and modern architecture, though from the outside a ghastly monstrosity. It looks a bit like what would happen if you took a graffiti artist and a leading bubble wrap manufacturer, shoved them in a room together and gave them too much money.


Since my seminar wasn't until the next day, I decided to walk to my hotel and see a little of the city before doing anything sciencey. Strasbourg is not quite in the same league as Prague, but comparing any city* to Prague is a bit like comparing landscapes to Switzerland : it's just not fair. By more reasonable standards, Strasbourg is a lovely place with many fine buildings and a nice, compact historic city centre, and it's very easy to navigate.

* Except Cardiff, obviously.





What I also noticed was that the cyclists put those of Prague to shame. Prague cyclists are all damned aggressive bastards who delight in obnoxiously taunting innocent pedestrians. Yes, all of them. Every. Single. One. They'd probably prefer to mangle themselves and their infernal contraptions in your gizzards than move an inch out of their god-given right of way, preferring to suffer an extended hospital visit than grant a pedestrian the merest moment of admission that they might be at a fault. I, for one, don't like them.

Anyway, Strasbourg cyclists are to be commended. They know that cycle lanes can also be used by pedestrians and aren't always clearly marked. They don't give you any grief if you happen to be in their way. They just quietly and calmly flow around you like a shoal of elegant French fish, and if they have to wait, then so be it. They are truly an inspirational example to us all.

I got to my hotel too early to check in, so I went off to the Observatory instead. This is a grand, historic building, a little complex of old telescopes of various sizes, a planetarium, some gardens with a vegetable patch and even beehives. It's like a little country estate nestled inside the city.



Once you go in through the grand entrance, the first thing you see - the very first thing - is this :


Charming. The sort of thing that would probably be blocked by Facebook's filters, I expect. On the other side is an old wooden telescope, but what the statue's for is anyone's guess. Perhaps it's a sculpture of the unusually hunky astronomer who used to use said telescope. Regardless, it's an impressive building.


My invitation came from Frédéric Marin, friend, colleague, and former housemate. Frédéric's expertise is mainly in X-ray polarimetry of active galactic nuclei. In real terms that means looking at the X-ray emission from the searingly hot gas that orbits supermassive black holes, trying to determine the structure of the gas by other means than resolving it directly because that's fiendishly hard. This relies on relativistic, very high energy physics that's quite different to my own field of nice, sedate hydrogen clouds that don't do anything.

Frédéric also works on Space Nazis studying multi-generational spaceships, looking at how a small population could ensure it was genetically healthy over many centuries. He's found that the smallest number that could reliably ensure everyone didn't die out because of Lannisterism / they had eighteen fingers on each hand or seven malformed penises / inbreeding is about a hundred. More on that in a future post, as we're collaborating on a (submitted) paper about the farming requirements of the Space Nazis colonists.

(I'm exaggerating the eugenic overtones of the necessary breeding program. It turns out the situation wouldn't be all that bad : you would need some breeding restrictions, but actually not that limiting compared to the choices people naturally make anyway)

So we caught up on life, the Universe and everything for a while, discussing the bizarre hiring system for permanent academics in France, possible ALMA observations, that sort of thing. Frédéric is a ridiculously competent, hugely energetic and multi-talented guy who, at 32, is even managing the development of his own satellite. I kid thee not, it's absolutely mental. Then the near-total lack of sleep caught up with me and, fearing that I was about to headbutt the desk as I continuously dozed into mild hallucinations, I went back to my hotel for a very rare mid-afternoon nap. After that I spent considerable time wandering around the nicer bits of Strasbourg, and luckily for me the Sun had come out. Not in the sexual sense though, which was good because that would be really weird.





Of course, no visit to Strasbourg would be complete without seeing its world-famous cathedral with its 143m spire. Fortunately I didn't have to turn back because of snow. Unlike some other churches, it's a genuinely impressive, absolutely monumental mass of gothic stonework. Even after living in hundred-spired Prague, it's well worth a look.





Since time was finite, I decided to spurn the interior and went off to see some more of the city. I think that was a wise choice. Strasbourg struck me as an all-round charming little place, appealing both for tourists and residents.





And so the next day I gave my seminar, which went without a hitch. Normally I practise seminars excessively, repeating them to an empty room at least ten times before daring to speak to an audience, especially one of experts (given that seminars are usually at least 45 minutes long, I'm not sure people always appreciate the time commitment they're requesting when they ask me for a presentation). Fortunately this one was different : I recycled most of it from previous, recent talks, and after only three or four iterations I realised I could say this stuff in my sleep, possibly while gagged and drugged. It was, of course, about the usual stuff, mostly dark galaxy candidates and their alternative explanations.

I was a little wary that the audience might be more hostile than usual. Strasbourg may be a small city but its astronomy group has a lot of prestigious names, and features a lot of outside-the-box thinkers researching modified gravity, planes of satellites, that sort of thing. Regular readers know I'm not exactly keen on those. And the Observatory director is none other than Pierre-Alain Duc, who produced one of the most influential models demonstrating that dark galaxy candidates could be tidal debris.

(I'm not going to try and summarise the science this time, you'll have to consult the links. The rest of this post is mainly for enthusiasts.)

But the lions in this particular lion's den turned out to be an affable bunch. There were some questions during the presentation and about 15 minutes of discussion afterwards, all perceptive and relevant. Duc couldn't attend but we had a private discussion for about 30 minutes or so later on. And that was useful too.

One point that keeps being raised about these dark clouds is whether their high spectral line widths could be explained by their actually being several different clouds that are all at the same position but at different distances along our line of sight. I'm confident that this can't be the case. First, there are hardly any such clouds known at all, so the chance of coincidental alignments of multiple clouds is negligible. Second, higher spectral resolution observations don't show any evidence of multiple spectral components. Third, having a series of such clouds along the line of sight but still no connections to nearby galaxies would probably make these things harder to explain, not easier.

So I think I managed to convince people that these things are at least interesting. I'm not at all sure what they actually are, and I played that card very strongly. While I still have some reservations, I lean heavily towards accepting the system that Duc modelled probably is a result of tidal encounters, even if that's not the whole story. But all such clouds ? I very much doubt it. The general view seemed to be that the high-resolution VLA data we've obtained ought to be enough to settle the matter. And my goodness, I'd like to reduce that data but it's a matter of finding time/assistance.

Duc raised a couple of points I wasn't previously aware of. One is that other ultra-diffuse galaxy candidates with claimed hydrogen detections have turned out not to be galaxies at all, but more ragged stellar patches for which the traditional parameters are misleading (like using the mean when you should be using the median, only worse). They are, he says, more likely to be tidal dwarfs than giant galaxies. Though I think the ones we've found, which have enormous amounts of hydrogen with nice clear classical double-horn profiles and continuous stellar discs, are probably much more secure. So I'm confident that a possible connection between very faint galaxies (which we know exist) and optically dark, gas-rich galaxies (for which we have only candidates) is still very plausible. That's extra motivation to publish our observations.

His second point concerns Keenan's Ring. He notes that off-centre rings can indeed be produced by galaxy-galaxy collisions, for example the case of NGC 2992 :
From Duc et al. 2000. Hydrogen contours are overlaid in green on an optical image.
He also notes that the velocity difference between Keenan's Ring and M33 is not so great (~200 km/s or thereabouts). These are good points, and I wasn't aware of the NGC 2992 system. But could Keenan's Ring be something similar ?

I'm skeptical. The ring in NGC 2992 is clearly connected to its parent at two points - no such connection is evident for Keenan's Ring. The NGC 2992 ring is found at identical velocities to its parent galaxy, whereas Keenan's Ring is at completely different velocities with no evidence of any overlap. The colliding galaxy in the NGC 2992 system is obvious, and there's a strong stellar disturbance as well - neither of which is evident for Keenan's Ring. Finally, Wright's Cloud is also close to M33, and it would be a heck of a coincidence if this was unrelated to the Ring - and no such analogue is found in the NGC 2992 system. It's certainly intriguing and that's given me some reading to do, but my immediate feeling is that the differences outweigh the similarities. I don't think we're going to make much progress here without really deep data over a much wider area than we currently have.

In the end, I don't think I managed (or even wanted) to convince anyone that I'd made some shattering discovery or that I had stunning evidence for some alternative theory. But I'd set myself the more modest goal of persuading people that these objects are interesting and worth investigating, and in that I hope I was successful.

There, a post that isn't five hundred pages long and contains a bare minimum of ranting. Don't worry, normal services will be resumed as soon as possible.