


Saturday 24 October 2015

False Consensus

Wake up, sheeple !

Of all the allegations made against mainstream science, the charge of "false consensus" is the one that's the most worrying. The idea is that we all want to agree with each other for fear of being seen as different, or worse, that we will lose research funding for being too unconventional. Maybe individual scientists are sensible enough, but apparently some sort of "herd mentality" creeps in which can override the ordinarily sensible scientific method. It's this charge, more than any other, which I find has the most truth to it, and therefore the one I'm the most concerned about.

As far as the idea that scientists like conformity goes, that's just wrong. Wrongity wrongity wrong wrong wrong wrong wrong. Wrong.


This idea ties in very closely with the idea that scientists are closed-minded, which I've written about extensively here and also here. But to be fair, there is a difference between saying, "scientists prefer ideas that agree with what they already think" and saying, "scientists never consider any ideas that disagree with existing theories". The above links already deal with that, though I'll come back to it a little bit later. Today I want to focus on the question : do we at least merely prefer things which conform to established ideas ? The answer of course is no, absolutely not : but with very important provisos.

The media is awash with "scientists make breakthrough discovery !" articles, which I've also already railed against here. But, nonetheless, scientists do like discoveries. That's what science is for. And you can't make a discovery, by definition, which isn't in some way new. But you can make discoveries which are expected (boring, but useful) as opposed to those which are unexpected (exciting, but confusing).


Do scientists hate excitement and originality ?


My big gripe with the mainstream media (traditional newspapers and television channels) is that far too little care is taken in determining whether a new "discovery" is really true, and far too little interest is shown in alternative explanations. Most recently, that gives us the headline, "Scientists may have found giant alien `megastructures' orbiting star near the Milky Way [never mind that the star is actually in the Milky Way, not near it]" in the Independent (a paper I otherwise have a lot of respect for), as opposed to the far more modest original "The Most Mysterious Star in Our Galaxy" in the Atlantic.

I want an extension for Chrome that replaces the "aliens" dude with this image.
Yes, adorable kitty, aliens are a possible explanation. But they are not the only explanation by any means, and even the scientist proposing that we have a look for radio signals (not an unreasonable prospect) says that aliens aren't the most likely explanation by a long shot. Whereas the media likes exciting ideas (or any kind of excitement at all*, really), scientists like exciting discoveries : i.e. things which are contrary to existing paradigms, are observationally verified, and have few alternative explanations.

* I'm linking to that newspaper and not any particular story on the grounds that it's a safe bet the headline won't be genuinely exciting. At the time of writing, it reads, "FELLAINI SENT ME PICTURES OF HIS TACKLE !".

The "alien megastructures" are currently only an idea, not a discovery. They are one explanation, others haven't been ruled out yet. Hence the media gets excited, while scientists react more like this :

Let's not lose sight of the fact, though, that many of these exciting "discoveries" are proposed by scientists, which ought to knock the closed-minded charge on the head.
I am also reminded of the case of superluminal (faster than light) neutrinos, which had the media up in arms and scientists sighing lethargically. The original statement by the scientists (which amounted to, "hey, this looks odd, we think it's probably wrong but we can't figure out why, so we thought you all should check it")  was perfectly reasonable, as it is in the current case ("if we're going to look for aliens at all, this would be a good target") and completely different to the media hype ("EINSTEIN DISPROVED ! DR WHO IS A DOCUMENTARY !").

BICEP2 provides another great example. Here, scientists were looking for a signature of inflation, a very mainstream idea, infinitely more so than aliens building Dyson spheres. Yet when it was proposed that this had been found, the scientific community reacted with strong skepticism - and later this very important mainstream "discovery" turned out to be mistaken. It's a wonderful case of skepticism at its best, questioning even things which agree with mainstream theory. So no, we don't just attack findings which disagree with the consensus. We attack the consensus itself as a matter of course - that's why it's the consensus !

The reason all this matters is that whenever something is presented as a "discovery" when it actually isn't, or when at least equally valid explanations aren't discussed or are marginalised, it makes scientists dismissing the alternatives look like they're doing so because those alternatives don't fit with the cosy "false consensus" world view. Clickbaiting has a lot to answer for. But then, "Promising new target for SETI" doesn't sound as exciting as "OMG ALIENS !"


But what about when unconventional discoveries turn out to be correct ?

There's not much evidence that Lord Kelvin really said, "X-rays will prove to be a hoax", but they were found by accident.
Well then that's when we do get excited ! Decades ago, no-one had any reason whatsoever to think that dark matter existed. A few people made claims that the stars weren't moving as conventional theory said they should, but the evidence wasn't great. Only in the 1970s did the observational data become good enough that there was really no getting away from it, and guess what ? Mainstream science changed its mind, even though our theory of particle physics didn't predict anything obvious that could explain dark matter.

More recently, no-one had any reason to think that dark energy existed. Then along came the results of two supernovae surveys, and suddenly we found that the expansion of the Universe was accelerating. Again, mainstream science changed its mind, even though none of our theories gave an obvious explanation for the cause of the acceleration. And in this case, despite still not knowing why the acceleration is happening, the discovery led to the award of a Nobel Prize - which is a pretty clear sign of excitement, in my book.

The idea that scientists don't like non-conformity is like saying that we don't want a revolution, that we somehow think that breakthrough discoveries are a bad thing. But discoveries are the name of the game, so this is basically saying that scientists hate their job. If you've ever talked to a scientist for more than five minutes, I very much doubt you'll agree that this is the case.


It's not that we hate exciting discoveries. It's that we hate getting excited about things which aren't true. Everyone wants to make a name for themselves by making a killer breakthrough (see BICEP2 above), but no-one wants to be remembered as the moron who gave in to premature publication (also see BICEP2 above).

Still, while I think the above dismisses any notion that we don't like excitement or unconventional discoveries, I haven't really addressed the charge that we prefer discoveries which agree with existing ideas. Maybe breakthrough discoveries aren't being entirely shot down, just suppressed and held back longer than they should be.


Well, OK, if scientists don't hate excitement, do they at least prefer to be dull ?

No, but it's good practice not to be over-enthusiastic. Science is a slow, careful, and often tedious process. The big breakthroughs are undeniably exciting, but much rarer than the day-to-day process of research. We don't have Star Trek level computers yet. We can't just say, "Computer, run a simulation of solar flares with the observed magnitudes of all flares detected in the last 100 years and compute the probability of disruption to satellite communications in the next six months." The best we've got in terms of automatic language recognition is Siri, and she's not very helpful.


No, we've got to actually look through old records for the data, decide on the best sort of simulation to run, apply for time on a supercomputer to run it, investigate the vulnerabilities of satellites to electronic disruption, etc. etc. etc. for ourselves. Computers are a long way off from replacing researchers.


Science is dull by necessity. Without doing all that work, you can't get to the exciting results. We prefer exciting results which are true, but it's only by doing all those dull tasks, and making all those boring, expected discoveries, that we stand any chance of being sure we've found an exciting result when something interesting crops up. You can't know what's unusual without first knowing what's normal. And you do have to take a certain satisfaction in doing rather a lot of repetitive tasks, otherwise the process would be unbearable.

I personally have long since lost count of the number of times I thought I'd seen something really interesting in a data cube, only to find that it was either a mistake I'd made or a problem with the data itself. 95% of these never even made it to the stage of "I should check with someone else" because I was able to dismiss them very quickly. It wasn't particularly rewarding, but it's a necessary part of the process.

No-one ever gets excited when a paper that's nothing but a catalogue of detections comes out. It's simply that it's not possible to get to the excitement without doing the gruelling legwork. Real science is not much like movie science.

In short, if science appears to be boring and conformist, that's because science is hard. This relates closely to the clickbaiting articles I mentioned earlier : there is a widespread misconception that science is only interesting when it's exciting. The most prominent exception to this in the mainstream media is David Attenborough. The man could easily hold my attention for a full hour talking about nothing except the mating habits of snails, and I'd be absolutely riveted. Yet at no point would I ever feel the urge to leap from my seat, punch the air and shout, "YEEEAH ! SNAIL SEX IS MORE AWESOME THAN SALMA HAYEK WATER SKIING WITH NINJAS AND LASER KITTEN SHARKS !".

Which is not to say I don't find Salma Hayek (with or without ninjas and laser kitten sharks) exciting... just that if I have to choose between a Salma Hayek movie and an Attenborough documentary, it's a tough choice.

Soo... you're saying the false consensus idea is just wrong then ?

Thus far the idea of a false consensus looks to be on very shaky ground. Science is often dull and conformist, but only as a means to the exciting breakthrough discoveries it makes possible. Revolutions that completely overturn long-standing theories certainly do happen, just rarely. That's because : 1) We want to be sure we're correct, because we hate wrong conclusions; 2) A theory that lasts a long time has stood up to a lot of tests, making it (by definition) a very good theory and therefore harder to disprove; 3) The process of doing science is long and tedious.

I say forget trying to make people excited about science all the time, because it isn't usually exciting. Get them interested instead.

Which is not to say it isn't sometimes very exciting indeed.

The perception that we're all trying to agree with each other largely rests on the media constantly promoting things which are exciting but not true as though they were true, which results in scientists having to repeatedly explain why they're not true, making us look like we disagree because we don't like it.


Aren't you being a bit idealistic ?

So far I've painted a very rosy picture. In the real world, scientists are fallible human beings. They lie, cheat, steal, fornicate, drink, take drugs, rape, murder, create weapons of mass destruction, become assassins, vote for the Conservative party, listen to Celine Dion, and use their enemies' skulls as drinking cups. Well... probably not that last one. And apart from the WMD bit, you could say the same for people of any vocation.

As with discrimination in any situation, the question to ask is : does this really represent a wider problem ? Are we seeing a flaw in the system, or just flawed individuals ?

The above is a wonderful piece of satire, but it does emphasise a mood among certain elements that the climate consensus must, somehow, despite all the robustness of the scientific method, either be wrong or just not really exist.

When I encounter reasonable people on the internet, and explain to them what I do and my personal experiences of the scientific method, they sometimes respond with something along the lines of, "Well, of course I didn't mean you, Rhys. I meant that lot. Those darn climate scientists. They're not allowed to publish anything that disagrees with their precious theory."

If that's so, then it's completely at odds with my own experience of the scientific process. I know a number of people personally who hold views and publish results which are radically different from the mainstream (some are well-respected figures, some are... not). I'm also aware that human failings do cause problems with the scientific method : sometimes paper referees are overly harsh, sometimes they will let shoddy work through on a nod if a famous name is on the author list.

(You will have to forgive me, but for obvious reasons I'm not going to name names.)

But the allegation of a false consensus is wholly different. That's saying that almost everyone, everywhere, in every institute, refuses to publish something because they think that either a) it won't get past the referee, or b) if it does get past the referee, it will damage their reputation; or c) they never even consider the possibility / hate the idea in the first place because they're so darn closed-minded.


Options A and B are swiftly dealt with : not every publication is peer reviewed. Conference "proceedings" (the written summary of a conference) are rarely reviewed, and even when they are, the review process is not supposed to be as strict as for a regular journal. Consequently they're often much more readable, though not as detailed or as reliable as regular publications - but I've never seen a proceedings article that differed radically from the reviewed version*. As for option B : nah, also silly; I know plenty of people who don't give a hoot about their reputation - even at the expense of their funding.

* And I'll add that even in refereed articles, you can generally say whatever you like as long as you make it clear when you're speculating.

Option C, however, is more serious - because I do know people who are so closed-minded that they refuse to consider certain ideas. That includes people who are so ultra-mainstream they think we've basically got everything licked and laugh in the face of alternatives, and people who are so anti-mainstream they think their silly pet theory has trumped Einstein. Individual scientists are certainly capable of being dogmatic.

But the idea that the system as a whole is so closed is pure nonsense. MOND isn't a mainstream theory, but papers get published on it all the time (26 papers so far this year with MOND in the title). No less a mainstream institution than the Monthly Notices of the Royal Freakin' Astronomical Society has published papers on cyclic extinction events (an idea that's been controversial for thirty years or so), at least two papers about dark matter causing the untimely demise of the dinosaurs, and recently one has been submitted about aliens building megastructures around that star. And the equally respected Astrophysical Journal published one this year about looking for Dyson spheres via the Tully-Fisher relation, for crying out loud.

Maybe it was a passing Dyson sphere, deflected by the dark matter in the galactic disc, that disturbed the Oort cloud and sent in the comet that killed the dinosaurs. Yeah.
If all that's still not enough to kill off the idea that controversial ideas aren't considered, look no further than the late, great Sir Fred Hoyle. A man who, amongst other things, believed the Universe was eternal (he hated the now-mainstream Big Bang theory) and that life didn't arise on Earth (which is still not settled today*). Yet few would dare label him as anything less than a great scientist - which his knighthood attests to. The idea that there is some sort of mass silencing of unconventional ideas, or suppression of their publication, looks to me to be utterly ridiculous.

* I was lectured by his even more controversial collaborator Chandra Wickramasinghe.

Obviously I write everything with the bias of an astronomer. I can't do anything else. All I can say is that if climate scientists really are behaving as their detractors allege, then they are acting in a way that's preposterously alien to me.

(In so far as media excitement is to blame, it works both ways for climate change. Without being a climate scientist, experience of this process in astronomy tells me to be wary both of scientists claiming they've solved everything through a natural mechanism, and of those claiming we're all going to die by 2050 due to methane eruptions and there's absolutely nothing we can do about it.)


Didn't you say you were worried about a false consensus ? It sounds like you think it's almost impossible.

Indeed. So why do I say that there's any truth at all in the allegation of a false consensus ? Three reasons :

1) Big science


CERN played an important role in the invention of the World Wide Web (which alone more than justifies the money spent on it), but only as a spin-off, not as a direct result of the research being done.
We need big facilities. There's simply no other way to test some theories. The problem is that big facilities are great for testing specific things, but large teams of people are absolutely lousy as a means of proposing new ideas. Everyone has to play ball to make the project work - you simply can't have three hundred people pulling in different directions; a consensus must be enforced or nothing will get done. It's hardly a true consensus if you only admit people into your group who already agree with you. Smaller groups and individuals are much more free to come up with new ideas.


Of course not all big facilities operate in the same way. Instruments like the LHC are designed to do a few specific tasks by enormous groups of people. Telescopes are generally run as observatories which are operated and maintained by a single largish group, but usually used by dozens of much smaller external groups who have no vested interest in that particular facility, theory or even subject (Arecibo does everything from the atmosphere to distant galaxies; ALMA looks not just at molecular gas in other galaxies but also the Sun).

Big, open-time telescopes (which anyone can apply to use) are a great example of combining the power of big instruments with the creativity and flexibility of small groups. Sometimes we also need large, dedicated facilities to test single specific ideas - the trick is not to let those facilities dominate the world of science. For more on this, see this essay by noted "I love dark matter !" astronomer Simon White.


2) Publish or perish



Assessing people by their number of publications makes very little sense (disclaimer : I'm biased). Not all papers are created equal; no individual is ever perfectly objective and everyone makes mistakes. But worse, if your career depends on publishing as much as possible, you're inherently encouraged to write mediocre papers that the journals can't refuse to publish but which don't actually advance science in the slightest. It may look great on paper if you've got twenty papers, but it doesn't mean a thing if they're all crap.

Perhaps we need a new publication system which recognises that yes, pretty much all research needs to be published, but differentiates between style and content more precisely. Cataloguing observations is essential, but is a fundamentally different task to inferring what they mean. Currently an author's options are either a) a regular journal or b) the overly-prestigious Nature or Science. There's not much middle ground. Maybe there should be more. We don't necessarily need more journals, just more sections within journals to differentiate content.

3) Science by grant



If you're a tenured professor, you yourself can do pretty much whatever research you want. At the postdoctoral level this is rarer : you're often expected to work only on a specific project*. Frequently this is because your source of funding comes not directly from your research institute or university, but an external grant agency. Your hands are tied; if you come up with a brilliant idea that's nothing to do with what you work on, you probably won't have time to pursue it because your funding doesn't permit it.

* I'm not, though as a student I didn't have time to write my own galaxy-finding code so I did it during some late-night observations. It worked pretty well, and became an important part of the resulting publication and thesis. As a postdoc, I initially didn't have time to work on my data-viewing code, so I did it at home when I was bored. Eventually it became a paper in its own right. Sometimes discoveries happen when you're most free to play around without fear of failure.

Which sucks. Professors tend to spend a lot of time teaching, managing students, and writing grant applications to hire new staff. They fit whatever research they can in around this. Consequently we have this strange system where the ones doing most of the work are the ones least free to innovate, and the ones doing the least frontline research have the most experience. The solution ? End the grant system. Give more money to universities to hire staff as they see fit. Trust in scientists to do their job and don't let external agencies tie their hands.

The grant system isn't exactly causing a false consensus, because it doesn't enforce finding only specific conclusions. But it does limit thinking and stifle innovation, which is the false consensus's ugly cousin.


Conclusions

If there is any scientific discipline in which the public might legitimately say, "well you would say that wouldn't you, you're a scientist", it is not astronomy. There are so many examples both past and present of astronomers throwing out crazy ideas and getting them published that the idea of a false consensus looks absurd. So if you throw out an idea and every astronomer shoots it down, here's a thought : maybe it really is just because your idea is bonkers.

But even in astronomy we've seen how there can be flaws in the system. Astronomy has always been a big science, nothing wrong with that provided things are managed correctly. All we need to avoid on that front is relying exclusively on huge research teams, which are ill-suited to innovation. The publication and grant cultures are more damaging.

To some extent these problems are a result of trying to quantify the unquantifiable. You can't put a number on how good a scientist someone is, and it's a mistake to try. If you've written a lot of papers, that means you're good at writing papers. It tells you nothing about the quality of the work. And yes, I have an example in mind of a famous group who do publish some genuinely outstanding research but a lot more mediocre stuff as well, though I'm not going to name names.

The grant system is similarly a result of trying to do science like it's a business : this person shall work on project X, this one on project Y, we shall all be in work 9-5, we'll never take coffee breaks, etc., the most important thing is that we have productive output, gotta keep the taxpayers happy. But running a scientific institute in this way (as some would like) is ultimately self-defeating. Allowing people to work how and when they choose and publish when they want to publish does not mean you can't evaluate their performance, but it does mean that you can't reduce them to their number of publications.

Science and the arts have something in common : to do either of them well, you need to innovate, to think creatively. You can't force new ideas by chaining someone to a desk. You can't guarantee that they'll happen at all, but you can encourage them through getting people to talk to each other, by fostering a free-thinking, informal atmosphere. Most importantly of all, they need to feel free to fail, to pursue crazy ideas that might take months and end up being useless. To really innovate, you need to take risks. Grant-based research, which expects this many results about this project, is not suitable for this.
Those societies in which seriousness, tradition, conformity and adherence to long-established - often god-prescribed - ways of doing things are the strictly enforced rule, have always been the majority across time and throughout the world.... To them, change is always suspect and usually damnable, and they hardly ever contribute to human development. By contrast, social, artistic and scientific progress as well as technological advance are most evident where the ruling culture and ideology give men and women permission to play, whether with ideas, beliefs, principles or materials. And where playful science changes people's understanding of the way the physical world works, political change, even revolution, is rarely far behind.
- Paul Kriwaczek, Babylon 
To summarise then, here are a few ways to avoid a false consensus and demonstrate this to the public :

  • Be more interesting and less exciting. The media desperately need to learn that the most exciting solution is not usually the correct one, nor the one favoured by the majority - and those two facts are not unrelated. False excitement is at least partly to blame for this image of the false consensus / closed-minded attitude.
  • Do more outreach. That's always good advice. In particular ram it down people's throats that the findings of science are evidence-based and provisional. Tell people about the methods, not just the results.
  • Teach people about statistics in primary school. When a few scientists dissent from the prevailing opinion or make controversial statements, that does not automatically make everyone else wrong or even more likely to be wrong.
  • Emphasise controversies where they do exist. They are an asset to science, not a danger.
  • Don't rely exclusively on large teams at big facilities. Smaller groups are much more flexible and innovative.
  • End the "publish or perish" culture. More publications does not guarantee better science is being done, and can in fact lead to exactly the opposite.
  • End the grant-based funding process. To be innovative, scientists must be free to take real risks, to pursue projects with no guarantee of success. They must be free to play, to try things on a whim for no other reason than to see what will happen, not because some bureaucrat thought it would look good to tick a box on a form.

Friday 23 October 2015

The Very Interesting Gas That Doesn't Do Anything

"Atomic hydrogen is the reservoir of fuel for star formation." "Neutral hydrogen is a galaxy's fuel tank." These stock phrases are a mantra I fall back on for beginning both public outreach and journal articles. But are they accurate ? A recent paper has got me wondering if we've got something terribly, terribly wrong. It seems that when galaxies merge, although star formation rates increase, the gas really doesn't do a lot at all. It's a bit like pulling the plug out and not seeing the water level drop.


We Need To Talk About Hydrogen - Well, I Do, Because They Pay Me Money To Look For This Stuff, But You Can Just Listen If You Like.

Atomic hydrogen (HI, pronounced, "H one" since it's a Roman numeral 1, not an I) is the simplest element there is. One electron orbiting one proton. That's it. Electrically neutral overall, since the proton and electron have equal but opposite charges, it should be as simple as you can get.

In reality things can be a great deal more complicated if you want to bring quantum into it, which I don't. Anyway this simple picture is good enough for what I'm going to discuss here.
That picture is a little over-simplified for what we'll need though. For example, two hydrogen atoms can bond to form an H2 molecule :

The electrons from each atom are now associated with both protons, forming a covalent bond.
Or the hydrogen can have its electron stripped away by an energetic photon (i.e. light, often near hot young stars) to become HII (confusingly also pronounced "H two"), which is really just a cloud of protons and unbound electrons. It can even gain an electron to become the negatively charged hydride ion, though this is not thought to be important in astronomy.

The current working model (I would not call it a consensus by any means) is that it's usually only the H2 that's important in forming stars. Atomic hydrogen is generally so hot (1,500 - 10,000 Kelvin) that its own internal pressure prevents it from collapsing into stars. Molecular hydrogen is colder, which means it's less able to hold itself up and more prone to collapsing. That view is by no means certain, and there are some hints that in some circumstances, HI can indeed collapse into stars without forming molecular hydrogen first.
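If you want a rough feel for why temperature matters so much, a back-of-the-envelope Jeans mass estimate does the trick. Here's a little Python sketch; the temperatures, densities and mean molecular weights are my own illustrative round numbers, not taken from any particular survey :

```python
import math

# Physical constants (SI units)
G     = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
k_B   = 1.381e-23    # Boltzmann constant, J K^-1
m_H   = 1.673e-27    # mass of a hydrogen atom, kg
M_SUN = 1.989e30     # solar mass, kg

def jeans_mass(T, n, mu):
    """Approximate Jeans mass (solar masses) for gas at temperature T [K],
    number density n [cm^-3] and mean molecular weight mu.
    Prefactors of order unity are ignored - this is an estimate only."""
    rho = mu * m_H * n * 1e6               # mass density, kg m^-3
    cs2 = k_B * T / (mu * m_H)             # isothermal sound speed squared
    return (math.pi * cs2 / G)**1.5 / math.sqrt(rho) / M_SUN

# Warm atomic hydrogen : hot and diffuse
print(f"Warm HI : M_J ~ {jeans_mass(T=8000, n=0.5, mu=1.3):.0e} solar masses")
# Cold molecular gas : cold and dense
print(f"Cold H2 : M_J ~ {jeans_mass(T=20, n=100, mu=2.3):.0e} solar masses")
```

Warm HI can only collapse in lumps of tens of millions of solar masses, whereas cold molecular gas can fragment at roughly stellar masses - which is the essence of why we pin star formation on the H2.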

When we go looking for atomic hydrogen, we mostly find it in blue, star-forming galaxies (they look blue because the most massive stars are blue and don't live long).

Random sample of galaxies detected by their hydrogen emission from a large survey. True, many are red, but when you look at them more closely they usually have blue star-forming regions as well.
Looking in more detail, we see "holes" in the atomic hydrogen, where it looks as though the gas has cooled to become molecular and is forming stars. We also see a few cases where there's atomic hydrogen but no star formation. And that's the (simplified) version of why we think atomic hydrogen is usually just a sort of reservoir : ultimately it will cool into molecular hydrogen and form stars, but it doesn't normally form stars directly.




OK, That's Enough About Hydrogen, Please Tell Me About These New Results Now Or I Will Become Cross.

When galaxies merge, the gas collides and things get seriously messy. Not for nothing was a Hubble press release entitled, "Galaxies Gone Wild !"


Needless to say, galaxy collisions and mergers are complicated. But generally if the galaxies both contain gas, collisions result in a massive increase in star formation as the gas compresses and cools. So the atomic hydrogen becomes molecular hydrogen which becomes stars, and everyone's happy, right ?


Oh, would that 'twere that simple. The new paper shows that when you stick two galaxies together, the atomic hydrogen does sod all. Well, it might get splashed about a bit, but its mass doesn't decrease and in fact it might even increase slightly. Which as far as the "fuel reservoir" idea is concerned is like being poked in the eye with a sharp stick.


How Do We Know This ?

Galaxy collisions are slow, grand affairs that can last for billions of years - so we can't just watch galaxies merge and see what their HI does. The metaphor that's usually used is that if you want to learn about how trees grow, you look at many different trees. So it is with galaxies. By finding enough isolated galaxies and galaxies which have already merged, the authors try to look at what happens to the gas statistically.


That's not an easy process. After the merger happens, a lot of information about the original galaxies gets lost. So unfortunately there's just no way to know what sort of galaxies were involved in the original collision. But most mergers are thought to occur between spiral galaxies (because these are far more common, except in galaxy clusters), and the gas content of spiral galaxies doesn't vary very strongly with their precise morphology.

What the authors do is define a sample of post-merger galaxies, then for each of these they find isolated galaxies which have the same mass in stars. Then they measure the gas content of all the galaxies in both samples using existing data from the ALFALFA survey and their own Arecibo observations. (They've got me to thank for that, since I supervised Derik Fertig, second author on the paper, at an Arecibo summer school. The fact that the other authors are far more experienced senior astronomers has, obviously, absolutely nothing to do with it whatsoever.)

What they find is that the amount of gas is the same in both samples. That is, a galaxy with a billion stars that's quietly minding its own business has the same amount of gas as a post-merger of a billion stars. Even though new stars are forming*, somehow the gas just nonchalantly sticks its hands in its pockets and goes, "meh".

* Before the galaxies merged they would have had less than a billion stars in total.
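Out of interest, here's roughly what that stellar-mass matching looks like in practice. This is a minimal Python sketch with completely invented catalogues - emphatically not the authors' actual code or data :

```python
import numpy as np

rng = np.random.default_rng(42)

# Invented catalogues : log10 stellar mass and log10 HI mass in solar masses.
# The toy HI masses follow a gas fraction that declines with stellar mass.
iso_mstar = rng.uniform(9.0, 11.0, 5000)
iso_mhi   = 9.5 + 0.6 * (iso_mstar - 10.0) + rng.normal(0.0, 0.3, 5000)
pm_mstar  = rng.uniform(9.0, 11.0, 200)            # "post-mergers"
pm_mhi    = 9.5 + 0.6 * (pm_mstar - 10.0) + rng.normal(0.0, 0.3, 200)

def gas_fraction(log_mhi, log_mstar):
    return 10.0**(log_mhi - log_mstar)             # M_HI / M_star

# For each post-merger, take isolated controls within 0.1 dex in stellar mass
control_gf = []
for m in pm_mstar:
    sel = np.abs(iso_mstar - m) < 0.1
    control_gf.append(np.median(gas_fraction(iso_mhi[sel], iso_mstar[sel])))

print("Median gas fraction, post-mergers     :",
      round(float(np.median(gas_fraction(pm_mhi, pm_mstar))), 3))
print("Median gas fraction, matched controls :",
      round(float(np.median(control_gf)), 3))
```

With the same underlying relation fed into both toy samples, the two medians come out the same - which is essentially what the real study found for the real galaxies.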


It's not quite as simple as that though. It's not terribly likely that all of the mergers are formed from equal-mass galaxies. And less massive galaxies tend to have higher gas fractions - that is, more gas relative to their stars. So if you stick two unequal-mass galaxies together, and none of the gas gets turned into stars, you'd expect the gas fraction of the post-merger object to be a bit higher than that of a normal galaxy of the same stellar mass.

There isn't really a handy analogy for this, so let's make one up. And let's bring Worf back into it, why not. Suppose Worf goes to a party at Starfleet headquarters and brings a bottle of strong Klingon blood wine. Captain Picard is making do with regular human wine. When his glass is half-empty, a waiter asks if he wants a top-up. Worf, however, is honour-bound to offer the captain a refill from his blood wine instead. If Picard chooses the blood wine, he'll end up with a stronger drink than if he accepts the regular wine, even though the same amount would have been added.

It's the same with gas fractions. Smaller galaxies are more "potent", they contain more gas per star, so merging them with a larger galaxy is like topping up orange juice with vodka. You'll get a lot more drunk that way.
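To put some numbers on it (entirely made up, but in the right ballpark) :

```python
# A gas-poor giant merging with a gas-rich dwarf (masses in solar masses)
mstar_giant, mgas_giant = 1.0e10, 2.0e9      # gas fraction 0.2
mstar_dwarf, mgas_dwarf = 2.0e9,  2.0e9      # gas fraction 1.0

mstar_merged = mstar_giant + mstar_dwarf     # 1.2e10
mgas_merged  = mgas_giant + mgas_dwarf       # 4.0e9

print(mgas_merged / mstar_merged)            # ~0.33 : noticeably higher than
                                             # the ~0.2 (or less) expected for
                                             # an isolated galaxy of the same
                                             # stellar mass
```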

As long as one bottle is of Klingon blood wine, obviously.
When reading the paper I thought to myself, "man I could sure use a glass of wine right about now !". But then I thought, "well, maybe you could account for the mergers of unequal-mass galaxies by choosing random galaxies from the isolated sample and adding up their stellar and gas masses". Lo and behold, the authors did exactly that in the next paragraph ! What they found was that the difference in gas fractions of the initial galaxies should increase the gas fraction in the post-mergers : not by a very dramatic amount, but easily enough that they should have been able to measure it in their sample.
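A toy version of that mock-merger test, again assuming an invented gas fraction relation in which smaller galaxies are gas-richer, might look something like this :

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented isolated sample : smaller galaxies have higher gas fractions
mstar = 10.0**rng.uniform(9.0, 11.0, 5000)           # solar masses
fgas  = 0.5 * (mstar / 1.0e10)**(-0.4)               # toy gas fraction relation
mgas  = fgas * mstar

# Pair up random isolated galaxies and add their masses
i, j = rng.integers(0, 5000, size=(2, 2000))
mock_mstar = mstar[i] + mstar[j]
mock_fgas  = (mgas[i] + mgas[j]) / mock_mstar

# Compare with an isolated galaxy of the same total stellar mass
expected_fgas = 0.5 * (mock_mstar / 1.0e10)**(-0.4)
print("Median mock-merger excess :",
      round(float(np.median(mock_fgas / expected_fgas)), 2))  # > 1 : mock
                                                              # mergers should
                                                              # be gas-richer
```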

So, does that mean that some of the gas is being consumed after all ? The gas fraction may not have changed, but it is less than if you stuck two unequal-mass galaxies together. Well, maybe, though it seems a bit suspicious that all of those tremendously complex processes that happen during the merger just bring the gas fraction down to that of a normal, isolated galaxy. We had a saying during the undergraduate course on general relativity : it all cancels and equals nought - an awful lot of work needed to find out that nothing's happening.
At this point the team turn to numerical simulations to figure out how much gas should be consumed by star formation. This is perhaps the weakest aspect of the paper. Ideally, they'd set up their own simulations and track the evolution of the atomic and molecular gas and the stars, but this is tremendously difficult to do. We're talking about many months (or more) of extra work, so it's perfectly understandable that they don't do this.

Instead they use an existing set of simulations which are (it must be said) vaguely described here (again this is understandable, since that publication is only a letter, not a full article). What they do is track the total mass in stars, then use the known gas fraction relation (from observations of isolated galaxies) to calculate how much atomic hydrogen there is during the whole merging process (presumably because the simulation itself doesn't distinguish between the different forms of hydrogen we discussed earlier).

What they found by doing this is that the star formation process shouldn't change the gas fraction much at all. So the fact that the post-mergers don't have as much gas as expected can't be due to star formation.
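The amounts involved are easy enough to estimate for yourself. With my own round numbers (not the paper's) :

```python
# How much gas would a merger-triggered starburst actually consume ?
m_hi    = 5.0e9      # solar masses of atomic hydrogen to start with
sfr     = 5.0        # solar masses per year during the burst (enhanced)
t_burst = 1.0e8      # years, a typical burst duration

consumed = sfr * t_burst
print(consumed / m_hi)    # ~0.1 : even a generous burst eats only ~10% of
                          # the reservoir, and much of that comes out of the
                          # molecular phase rather than directly from the HI
```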


But that assumes that there really is a decrease in the gas content as the galaxies merge. Now I mentioned earlier that there was some suggestion that the gas fraction actually increases a little bit from the mergers. That's a little more tentative. The thing is, not every galaxy in their sample had detectable atomic hydrogen at all, but the detected fraction of the post-mergers was double that of the isolated galaxies of the same stellar mass. That is, if you randomly choose a post-merger and an isolated galaxy of the same mass, you're much more likely to detect gas in the post-merger than the isolated galaxy. Which suggests that post-mergers actually have higher gas fractions than their parent galaxies did.

Another important factor is that the ALFALFA survey isn't as sensitive as we might like. That means it's only detecting particularly gas-rich objects which, say the authors, reduces the expected difference in gas fractions between the isolated and post-merger galaxies - so their calculated differences are probably too large. Many of their isolated galaxies that have no detected gas from ALFALFA probably do contain some gas, just not enough to be detectable. When you run the numbers, say the authors, that means that it's very possible that smashing galaxies together increases both their star formation rate and their gas content.

Which is a lot like pulling the plug out and seeing the water level rise. It's weird.
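To see how the sensitivity limit skews things, here's one more toy illustration, again with invented numbers :

```python
import numpy as np

rng = np.random.default_rng(7)

# True gas fractions for 10,000 invented galaxies of the same stellar mass,
# scattered around a median of ~0.2
mstar     = 1.0e10                                   # solar masses
true_fgas = 10.0**rng.normal(-0.7, 0.5, 10000)
mhi       = true_fgas * mstar

# A crude survey detection limit in HI mass
limit    = 2.0e9                                     # solar masses
detected = mhi > limit

print("Detected fraction         :", round(float(detected.mean()), 2))
print("True median gas fraction  :", round(float(np.median(true_fgas)), 2))
print("Median of detections only :",
      round(float(np.median(true_fgas[detected])), 2))
# The detections alone are biased gas-rich, so a flux-limited survey
# washes out the true difference between two samples.
```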

Believe me, if you Google image search "weird" you'll get far stranger stuff than this.


What ? How Could This Happen ?!? What Does This MEAN ?!?!?!?!

To summarise, it seems that when galaxies merge their atomic gas content remains (at best) unchanged, even though part of their gas is turned into stars. Simulations suggest that the fraction that forms stars is very small, but it looks plausible that the atomic gas content actually increases, somehow.

One thing this study doesn't look at directly is the molecular gas, which is what we think is more directly responsible for star formation. Could it be that the stars which form as a result of the merger do so from the existing H2, perhaps due to the shock of the collision increasing the density ? Unfortunately, say the authors, previous studies have found that the molecular gas content also increases during mergers, so, nope.

It just seemed really wrong not to include a picture of the famous merging "Antenna" galaxies in this post somewhere, so here we go.
But before we go saying, "a wizard did it !", the authors suggest a possible explanation for where this extra gas comes from. Galaxies, it's thought, may be surrounded by large halos of ionized hydrogen. Just how much is not known. Normally it could be slowly trickling down, keeping the reservoirs topped up, but during a merger it might cool more rapidly. Simulations say that's possible, so, maybe. Whether this is the correct explanation or if it's due to something else entirely, we just don't know.

If it's true, then atomic hydrogen is just the middle step in the process - a complex system of dams rather than just one reservoir. It's not a totally unprecedented idea, but it will mean quite a rethink. We'll have to wait and see what other observations and simulations have to say : this one study is important, but not enough by itself to prove what's going on.

Then there are elliptical galaxies. They're thought to be formed by mergers, as this rather nice simulation shows :


But they usually don't have any atomic hydrogen. Where's it gone ? And yet, sometimes they do - occasionally they have very large amounts of it. It's a mysterious mystery, right enough. As usual, we need more data. But the more we learn about galaxies, the more complicated they become. Eventually, perhaps, we'll have enough data and good enough theories to have a real explanation. For now, we're still learning. And that's fun.

Wednesday 21 October 2015

Review : Pathfinders - The Golden Age of Arabic Science


Last year I had my views of the medieval Catholic church smashed into tiny pieces by James Hannam's God's Philosophers. So when I stumbled upon Jim Al-Khalili's book and read the blurb, there wasn't much chance of me holding on to my money.

Oddly enough, this book doesn't challenge my existing views nearly as much as Hannam's does. But then I was already aware that science in the Islamic world was, for several centuries, far in advance of anything the West was coming up with. Although Hannam's book challenges that, Pathfinders reaffirms it. Hannam still did a great job of convincing me that the medieval Church wasn't some terrible system to oppress freedom of thought, but the extent to which the Islamic world advanced science was far greater than anything happening in Europe.

Like God's Philosophers, Pathfinders is very accessible to a general audience and obviously intended primarily for a Western readership. It does have some mathematical parts (even the odd simple equation or two), but there's nothing you can't freely skip.

Generally speaking, it has less of a theological/philosophical bent than Hannam's book (except for bits about the scientific method). Hannam went to great lengths to spell out exactly what the Church and its teachings did to support and suppress science, directly comparing religious and scientific thinking; Al-Khalili doesn't really attempt this. For me the book suffers a little because of this. As an agnostic astronomer I work with those of moderate religious leanings on a daily basis, and in the current social climate I don't think it can be emphasised enough : the religious moderates pose no threat to science whatsoever.

The book's main problem is that it's slow to get going. Not until page 62 do we actually get anything about the science that was being done in the Islamic world. Until then we're given (with one very important exception) a not particularly interesting potted history of the East and the rise of Islam, and there's very little attempt in the rest of the book to relate the science to the political context. It would have been far better to reduce this to ten pages or less, or mention the politics in passing throughout the whole book.

But when it does pick up, it's a very good read indeed. Since it doesn't go into the philosophical side of things too much, I'll just give some brief highlights of stuff I learned :
  • "Gibberish" is a term originating from the 8th century (al)chemist Jibir ibn Hayyan. Although there was no distinction between the two subjects yet (though there soon would be), Jibir "stressed careful observation, controlled experiments and accurate records." He pioneered techniques in crystallisation, distillation, and evaporation. Unfortunately he was apparently a monumentally bad communicator, hence the modern term.
  • 9th century Muslim scientists weren't content with taking ancient Greek texts on faith : they made their own observations to check and improve upon them. They accurately measured the circumference of the Earth (the idea that ancient peoples believed it was flat is pure nonsense) and used an improved knowledge of geometry and mathematics to make massive improvements in cartography.
  • The study of optics in the 10th century was done in such mathematical detail that it wouldn't be improved upon until Newton came along 600 years later. Islamic scientists discovered Snell's Law (no mean feat), understood how light, mirrors and lenses work well enough to design "burning mirrors" to focus light, as well as how the human eye perceives the world. They were even engaging in debate as to whether light is a particle or a wave, something which is still not really settled today.
  • Arab mathematicians pioneered the use of zero, the decimal point (it didn't catch on at the time; the numerical system in use was very different from the one we use now), and algebra (an Arabic word) as a means of general problem-solving, and derived formulae as complex as the cosine rule. It may be only GCSE maths today, but deriving it from first principles ? That's tough.
In terms of explaining the science that was done in the Arabic world, Al-Khalili does a faultless job. He also clearly explains how and why Islamic science developed : the translation movement. The early Arabs became obsessed with translating Persian texts into Arabic, which included many earlier Greek texts. The reasons were a mix of political (wanting to integrate with their Persian subjects), mystical (astrological - interestingly, as with chemistry and alchemy, there was a clear divide between astrology and astronomy far earlier in the Islamic world than in the Christian), and practical (e.g. knowledge of geometry for engineering projects). And he puts forward a very reasonable case that Islamic science was ahead of (or at least on par with) European science for rather longer than is generally supposed.

There are a couple of areas where he's far less clear and/or convincing. The first is how important the "Dark Age" Arabic scientists were in influencing medieval Europe. There came a point in the book where he mentioned that he hoped he had by now made this clear, but it came as a bit of a shock to me. If that's a goal, he needs to use far more examples and explain them in more detail. It felt to me like the book only really hinted at this, except at the end when he does describe in some detail how useful Arabic maths was to Copernicus and Galileo.

The most disappointing part of the book was the severe lack of explanation as to why Arabic science fell (Hannam's book was better in this regard, but still needed more of an epilogue). Strangely, he doesn't seem to think it was due to a rise in conservative, fundamentalist Islam, for reasons he doesn't make clear. Sure, he dismisses the argument that Al-Ghazali alone was responsible for turning the tide - fine. But today in many Islamic countries we do see (as Al-Khalili himself says) more fundamentalism, and he fails to address how this came about.

More convincing is his case that the Mongols weren't responsible, since science continued to flourish in other parts of the Muslim world long after their devastating invasions. He also dismisses Western colonialism, since Arabic science was already in a crappy state by the time Europeans arrived to cause trouble (though he notes that it was in Western interests not to educate their new subjects about their own enlightened past).

But the main process he chooses to blame looks very odd indeed : the invention of the printing press. OK, it was difficult to use this for the Arabic script, and early mistakes by Westerners in defacing the Qur'an meant it was rejected for a long time. But that only answers why Western science advanced; it says absolutely nothing about why Arabic science declined. Interestingly, he hints at a shift away from blue-skies research to the purely practical applications (which should serve as a warning to anyone still stupid enough not to understand the value of pure research for its own sake, see below). But how and why this happened are, frustratingly, left completely unanswered. To me, this shift in religious thinking from liberal to fundamentalist is one of the most important aspects of books such as these. A thousand years ago fundamentalism did not, as we are so often taught, dominate thinking in either the Christian or Islamic worlds - where did it all go wrong ?


I've said a lot of negatives, but the truth is I really enjoyed this book. What it does well, it does very well - and its faults are generally things it's lacking rather than things done badly. But I want to end on a positive note, and what this book does best of all is demonstrate unequivocally that the Islamic world pioneered the modern scientific method. A series of wonderful quotes sum things up nicely :


Or to put it another way :  screw you, Islamophobes ! But if you refuse to listen to a religious icon, try a scientist instead.
“The seeker after truth is not one who studies the writings of the ancients and, following his natural disposition, puts his trust in them, but rather the one who suspects his faith in them and questions what he gathers from them, the one who submits to argument and demonstration and not the sayings of human beings whose nature is fraught with all kinds of imperfection and deficiency. Thus the duty of the man who investigates the writings of scientists, if learning the truth is his goal, is to make himself an enemy of all that he reads, and, applying his mind to the core and margins of its content, attack it from every side. He should also suspect himself as he performs his critical examination of it, so that he may avoid falling into either prejudice or leniency.”
-Ibn al-Haytham, c.1025 AD.
If there is a battle-cry of modern science, it is something like, "Rrraaaaarrr ! I don't trust you, myself, or anything except evidence, and even then only provisionally !". Which doesn't sound like a very dangerous battle-cry... until the atomic weapon takes you out from orbit. Ibn al-Haytham clearly understood the methodology of science as well as any modern scientist.

One final quote demonstrates that not only did these early scientists understand what they were doing, but they understood why they needed to do it : not only for practical reasons, but for the sake of (for want of a better word) the soul :
"The stubborn critic would say : 'What is the benefit of these sciences ?' He does not know the virtue that distinguishes mankind from all the animals : it is knowledge, in general, which is pursued solely by man, and which is pursued for the sake of knowledge itself, because its acquisition is truly delightful, and is unlike the pleasures desirable from other pursuits. For the good cannot be brought forth, and evil cannot be avoided, except by knowledge. What benefit then is more vivid ? What use is more abundant ?”
-Al-Biruni, c.1000 A.D.
I can't add anything to that. All that remains is for me to recommend anyone who thinks they might like this book to buy it immediately, and I hope that eventually he'll team up with Hannam. Together they could produce something really potent. Given the rise of Islamophobia and the ravings of certain educated people who should know better, it can't come soon enough.

Thursday 15 October 2015

Not So Open

What if my brains fall out ?

Human beings are incredibly powerful paradox machines. There is no lower limit on how absurd, self-contradictory, and downright stupid our most violently-held beliefs can be. Despite overwhelming evidence to the contrary, people still insist that the Moon landing was a hoax, the Earth is flat, aliens are travelling trillions of miles to mutilate cows, anything that's natural must be good for you, vaccines cause autism, the Moon is a hologram, etc. etc. etc.

Actually the massive rocket was spreading the chemtrails that convinced everyone it was going to the Moon.

While science is fundamentally about doubt rather than skepticism in the usual sense, you can't question everything. If you doubt everything, you end up learning nothing. Sometimes you have to draw a line and say, "I have enough evidence, I shall believe that this is true" even when you don't have 100% proof - because if you don't do this, you'll never move forward. Or as the old saying goes, "Keep an open mind, but not so open your brains fall out."

Perhaps a slightly better way to put it would be to paraphrase a famous political comment : you can question some of the things all of the time, and all of the things some of the time, but you shouldn't question all of the things all of the time.

Total doubt and total confidence have the same end result : they prohibit learning. So, today's issue is : when should you draw the line ? When is it a good idea to stop (or at least temporarily suspend) questioning things for the sake of maintaining sanity, and how do you avoid the trap of becoming dogmatic ?


Facts are facts... aren't they ?

The most important point is that unless you have irrefutable proof, you never trust your assumptions completely. Irrefutable proof is rare, arguably impossible. It's always possible that people are just making stuff up. But, as a rule of thumb, if "you're lying !" is the only accusation you can make against an argument, you've probably lost. It doesn't matter what evidence I produce to support my position, you can always, always, always turn around and say I'm a liar. Or : the evidence is fabricated. Everything is photoshopped or due to mind-controlling drugs like, oh, I don't know, fluoride.


Although the "it's LIES, I TELL YA !" argument is very, very annoying, we must establish what we mean by "facts". Let's take a simple example that's way beyond climate change in terms of sheer absurdity. Suppose I tell you that fish are entirely fictional. They've never existed. Fossils ? Made by the government. Yesterday's lunch ? Cleverly-doctored, delicious halloumi. That thing swimming around in your pond ? That's what the government's mind-control drugs are making you think.

IF YOU TELL ME I DON'T EXIST THEN I WILL CUT YOU.
One could argue that because of "logic" like this it is never possible to establish anything with 100% certainty, and that everything is merely a belief and there are no facts. And perhaps this is correct, but it is not scientific. The scientific method assumes that the world is real (not a simulation or hallucination of any kind) and governed by inviolable laws. Thus what we see and measure are facts. I see a fish => fish exist, that's a fact, end-of. Or more accurately, multiple observers document their fish-sightings under carefully controlled laboratory conditions and thus we establish the existence of fish with what we call certainty. If you don't believe in this most basic assumption, then as far as science is concerned :


If the world was a simulation of some kind, or everyone was lying the whole time about everything (more on that later), then anything could potentially happen at any moment for no reason. That would mean that logic itself is utterly pointless, so we might as well all give up and cry. Maybe the Universe isn't real or predictable, but that doesn't help us analyse it. If you believe this, then you've gone beyond the remit of science : you're not even wrong.

...which is not to say you are wrong, exactly, just that your arguments can't be validated scientifically. Debate your position with a scientist and you'll find that they are literally incapable of remaining rational. It's a little bit like what would happen if a sports journalist started asking a golfer about their team's strategy to get the ball in the net, or, better, how they thought a free trade agreement would affect the mating habits of sea turtles. It's a case of, "why are you asking me ?"


All the scientific disciplines believe in this underlying principle without question, simply for the sake of being able to do science at all. But beyond that there is no central dogma that science is bound to follow. Theories continually change with time; there's nothing that says, "thou shalt not question this model even if thou hast lots of reasons why it is bollocks". With massive irony, people who like to say science is dogmatic often tend to be the ones pointing out that it's made a lot of mistakes in the past, as though that somehow indicates that the process is flawed. Actually that's exactly how it's supposed to work !


Aside from the fundamental tenet that reality is measurable, the "beliefs" of science are evidence-based and provisional. We know that our ideas will change : they are only temporary assumptions, not (or at least they shouldn't be) fervently-held beliefs. So am I saying that actually yes, we should question everything all of the time ? Nope. The facts never change. Our interpretations of what they mean, our theories, are more subtle.


Being pragmatic

It's important to realise that science also has to make other, much less deep assumptions in order to progress. When we assume that processes like sedimentation occurred in the same way in the distant geologic past as they do today, or that radioactive decay hasn't varied with time, or that the laws of physics are the same everywhere, we aren't being dogmatic unless we never test them. Usually these assumptions themselves allow us to make testable predictions which advance us to the next level. If they're wrong, our tests will falsify them.
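To make that concrete, here's a minimal sketch of the reasoning behind radiometric dating, written in Python purely for illustration (the function is my own invention, and it assumes a closed system with no initial daughter isotope). The point is that the constant-decay assumption yields an age, and that age is a prediction : completely independent isotope systems should agree on it.

```python
import math

# Half-life of uranium-238 in years (a well-measured quantity).
HALF_LIFE_U238 = 4.468e9
DECAY_CONSTANT = math.log(2) / HALF_LIFE_U238   # lambda, per year

def age_from_ratio(daughter_to_parent):
    """Age of a sample, assuming the decay constant has never varied.

    With N(t) = N0 * exp(-lambda * t) and every decayed parent atom
    retained as one daughter atom, D/P = exp(lambda * t) - 1, so
    t = ln(1 + D/P) / lambda.
    """
    return math.log(1.0 + daughter_to_parent) / DECAY_CONSTANT

# A daughter/parent ratio of 1 should give exactly one half-life.
print(f"{age_from_ratio(1.0) / 1e9:.2f} billion years")   # ~4.47

# If the constant-decay assumption were wrong, ages derived from
# different isotope systems (U-238, K-40, Rb-87, ...) would disagree -
# which is precisely the kind of cross-check that turns a working
# assumption into a testable, falsifiable prediction.
```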

Making assumptions that some things are true doesn't have to mean that you can't change your mind; it simply means that you've chosen to put the burden of proof onto the opposing viewpoint. Of course, the trap to avoid is thinking that your assumptions - especially if they're very good, well-tested assumptions - are actually facts. That's why we test everything... but as individuals we don't have to test all our assumptions all of the time. That approach would be wildly impractical and would help no one.

Dogmatism only occurs when you automatically dismiss alternative ideas because you already know your idea is true, and think the alternative is not worth anyone testing at all. This often seems to be a case of people holding their theories above facts*. Relativity can't be right because you think it's weird ? Tough. It is weird, but it's got a heck of a lot of observational evidence backing it up. In the scientific method observations always get the last word. The ancient Greek approach of doing as much theory and as little observation as possible has long since been abandoned on the grounds that it was just... well, wrong. If you want to persuade me that relativity is flawed, you must show me what tests it has failed, not what you don't like about it.

* This article, with scarcely credible contempt for so many theories that underpin the modern world, argues that we shouldn't even bother to test their predictions - it's damn tough to imagine a more dogmatic attitude than that. The author is convinced these theories are wrong despite the fact that they've been tested innumerable times, yet offers almost no explanation of why.
Yeah, I get how you don't like singularities. I don't like 'em either. But why in the world are you so convinced that the Universe has to make sense to you ? Exactly what obligation does it have to do your bidding ? There is no reason whatsoever to assume that the kilo or so of warm, blood-soaked grey goop inside your skull should be able to understand the Universe.


Yet sometimes, at a less obstinate level, refusing to question is healthy. I personally am never going to conduct any tests to see if the Earth is flat because I know it's round. Uncounted numbers of people have already done tests to prove it's round; the only way to get around (ha ha) this is to say, "they're all lying". If you have so little trust in your fellow human beings that you think this many people are lying, one wonders how you're able to get out of bed every morning. You're not in a healthy state of doubt, you are simply paranoid.


Of course when people really are lying it's extra important that this is exposed, but you need strong evidence that this is the case. "You're a liar !" is a perfectly valid argument, but it should be the final blow, not the opening attack. When people use this as a first response, or if it's the only option left, I tend to stop listening. If you want to accuse me of being a dogmatic round-Earther, go right ahead.

Normally though, even saying, "I believe this is true because evidence" does not mean, "I am certain this is true". I am not certain dark matter exists. I am not even certain the laws of physics are the same everywhere. But I believe dark matter is the best explanation, and I believe the laws of physics don't vary because I see no evidence that they do. 99% of the time in my day job, there's no benefit to me in questioning these assumptions. So I don't, but that doesn't mean I'm going to defend these beliefs to the death. Occasionally I do stop and question them, especially (as regular readers are by now acutely aware) dark matter, but there's no point me doing so all the freakin' time.


Habeas Corpus, On The White House Lawn If Possible

When I was younger, I read a lot - and I mean a lot - of books on the paranormal, from aliens to ghosts, lake monsters and ESP, everything. I've still got tens of books on that stuff lying around. Don't tell me I haven't looked at the evidence, because I have - extensively. What convinced me that all of this stuff is not worth pursuing is that the evidence just never seemed to be that great, or to be getting any better. Aliens always seemed determined to remain secret (why ?), yet happily revealed themselves to some redneck farmer, who they'd let take wonderful pictures of their spaceship but never of its occupants. More often, even the spaceships were nothing more than very fuzzy blobs in the photographs.

Photos never got any better come the digital age either.
After this extensive background reading, I now tend to dismiss any claims of flying saucers out of hand. I'll believe it when one lands on the White House lawn, but the argument, "I'm a dude on the internet, trust me !" carries no weight with me. As far as I'm concerned, this is another hypothesis that has already been tested extensively enough that it can be dismissed until much stronger evidence comes along. If other people want to research UFOs, that's fine with me - I just don't want to get involved personally, thanks.

Similarly, when something like the EmDrive or cold fusion comes along, the claims tend to amount to betting on a barely-measurable detection over decades (even centuries) of established results. It's not that scientists will never accept the result. It's just that we require a gold standard, "spaceship on the lawn" level of evidence. For most of us, ignoring such claims until that evidence arrives is simply pragmatic.


I don't have time for this. I need to know what to believe !

Not everyone is prepared to read up on UFOs. Similarly, not everyone has the inclination or indeed ability to become a professional scientist (likewise I have no ability to become a professional bog-snorkeller, swimsuit model, or toaster). And therein lies the danger : the modern world is highly dependent on, and yet very suspicious of, science. It doesn't really matter if you believe in flying saucers or not; it does matter if you refuse to believe in the benefits of vaccines.

The thing is, in most ways I'm not a scientist either. I am not a climate scientist, but I believe global warming is likely mostly the result of humans. I am not a biologist, but I believe vaccines work. I am not a surgeon, but I know surgery works. Nor am I a chemist, but I'm pretty sure dynamite works. And I'm not an electrician but I can still use the internet. Is it dogmatic of me to trust the experts on so many issues about which I'm genuinely no better informed than the average man on the street ? No, because I do understand how the scientific method works.

If you want to persuade me that an argument is false because the method isn't being followed correctly, you might have some success. But if you want to persuade me that the method itself is fundamentally flawed, you're basically arguing against every piece of technology in the modern world. Good luck with that.
It's that critical step of "observation" which is so important. Simply put, it isn't dogmatic to believe that well-tested things are true, as long as deep down you reserve at least a small level of uncertainty*. I also know that the results have been tested repeatedly, and if you don't believe one expert, you should probably believe a thousand (the toy calculation after the footnote shows why).

* Well, that's true for theories (well-tested but not proven models) at least. For true facts (e.g. that the Earth is round) you can abandon all uncertainty. You can't be dogmatic about facts even if you want to.
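As a back-of-the-envelope illustration of the "thousand experts" point, here's a toy calculation. The numbers are invented, and it assumes the tests are fully independent, which real replications never quite are - shared systematic errors are the realistic worry, which is exactly why independence matters - but it gives the flavour of the thing :

```python
import math

# Suppose, very generously to the sceptic, that each independent test
# of a result has a 5% chance of being completely wrong.
P_WRONG = 0.05

# The chance that *every* test is wrong shrinks exponentially with the
# number of independent tests, so work in log space to avoid underflow.
for n in (1, 10, 100, 1000):
    log10_all_wrong = n * math.log10(P_WRONG)
    print(f"{n:>4} independent tests : P(all wrong) ~ 10^{log10_all_wrong:.0f}")
```

Even with a wildly pessimistic per-test error rate, the odds of a thousand independent confirmations all being wrong are absurdly small.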

Really, the level of trust we're asking for is no different to what everyone accepts in everyday life. We all hand over money for goods and services expecting that the guy behind the counter won't just run off with it. We all get on planes expecting that they won't crash. We all live in houses expecting that they won't collapse or catch fire or suddenly turn into jelly for some reason, trusting that the builders have done their job correctly. We can't be certain of any of those things, but if we doubted them all the time we'd end up cowering in a hole or wrapped in bubble wrap or something.

Of course, keeping the possibility of a terrible accident in the back of your mind is perfectly sensible. Ordinary doubt is a very good thing indeed - it's paranoia that's the problem.

So we should all just shut up and trust scientists unconditionally, then ?

Just like with UFOs, the reason I support virtually every mainstream established result is that whenever I've looked in detail at the alternatives, I've found them wanting. Every. Single. Time. If you think that makes me some sort of dogmatic acolyte, I don't care. It's not my fault if I find the mainstream, evidence-based arguments genuinely convincing.

The great thing is that you too are free to examine those findings for yourself, and if you do so carefully and without bias, I honestly believe you'll come down in favour of the mainstream results as well. At least in astronomy, it's easier than ever to get unfiltered access to the original results and even the raw data. If you doubt the findings when you first hear them, that's great ! But if you don't go on to examine things in more detail and still insist that the results must be wrong, then it is you who is being dogmatic, not the researchers. And if you want to spread your anti-science via the internet, at least take a moment to consider the tremendous irony.

They say that in the age of information, ignorance is a choice. And that's true, but at the same time it puts a burden - nay, a duty - on scientists to communicate their findings as clearly as possible to the general public. We have so much information available to us that sifting through the worst of it is undeniably difficult, so any real scientist able to do outreach bloody well ought to*. It's those at the coalface who are best able to judge the strengths and weaknesses of current research.

* Outreach is a skill like anything else. There are plenty of good scientists who are so monumentally bad at communicating that they should be locked in a broom cupboard whenever there's any chance they might interact with a non-scientist. And of course by "able to" I mean, "allowed to as part of their job". We can't force people to do this on their own time.

This is more important than ever because it's becoming increasingly difficult for the public to directly test contemporary research. The days of a lone genius making a breakthrough discovery in a shed may not be over, but nowadays testing a theory can require a billion-dollar particle accelerator. If the public can't test the results for themselves, then we at least need to do everything we can to explain what we did and how we did it as clearly as we can. Understanding the scientific method and the philosophy behind it is far more important for establishing trust than the result itself.


Aaaargh ! I'm very confused ! I just want answers !

I understand why this can be confusing. Most people prefer a clear-cut, "yes or no" answer to questions. It just isn't really like that in science, where we've got this odd mix of hard certainty (true facts), hypotheses (models consistent with limited observational data) and theories (very well-tested models). Both of the latter can be disproved, and many have been throughout history. Yet sometimes, just to make it more confusing, we act as though our theories are facts, even though we know they're not !

We assume theories are correct as a matter of convenience. Sometimes, yes, we do go too far. Individual scientists are indeed capable of clinging dogmatically to disproven ideas. But the wider scientific community is more robust than that. One of the best pieces of science communication I've read recently is the BBC's "We want to break physics" :
"The data so far has confirmed that our theory is really really good, which is frustrating because we know it's not !" Prof Shears says. "We know it can't explain a lot of the Universe. So instead of trying to test the truth of this theory, what we really want to do now is break it - to show where it stops reflecting reality. That's the only way we're going to make progress."
YES ! That's perfect. We know our theories are doing a good job - otherwise we'd already have thrown them out - but we also know they aren't perfect.

The best way to change a scientist's mind is to buy them a drink and slip them a bribe... err, I mean, give them hard evidence that they're wrong. If you have some objection to a theory on the grounds that it sounds implausible, that won't get you very far. The Universe is not necessarily a sensible place, and the only way to find out how ridiculous it is is through observational testing. If you haven't got that kind of evidence, then I for one will stick my fingers in my ears and sing, "la la la I'M NOT LISTENING !", because the number of alternative, contradictory theories out there is far beyond my capacity to analyse. On the occasions I've looked at them in detail, I've found them to be self-evidently flawed or simply lacking any advantages over mainstream theories.

If it helps, science rarely offers true answers : but it can give you sufficiently good information that you're able to make a decision. Put it like this - if you want to design an aircraft and risk hundreds of lives, you're better off relying on aeronautical knowledge than crystal-gazing. Both might be wrong; it's just that the chances of the scientific approach being wrong are waaaay lower than those of staring at a chunk of glass. The difference between the complex equations of mathematics and the arcane symbols of the occult is that the mathematics actually works reliably, repeatedly, and doesn't depend one jot on sacrificing chickens unless the mathematician is hungry and partial to KFC.



Summary
  • Science assumes the world is real and governed solely by physical laws. If you believe there's more to it than that, then fine, but that position cannot be analysed scientifically.
  • Scientific facts are established through multiple observations. We require only the most basic level of trust that everyone else isn't lying about them. Doubt is good, paranoia is just another form of certainty.
  • People who yell "it's a conspiracy !" tend to be the ones who think scientists are dogmatic, yet when scientists do change their minds, they either ignore it or shout loudly about how scientists had been believing nonsense for years because of their dogmatic attitude.
  • Those same people also tend to criticise scientists for testing theories they believe are wrong, apparently oblivious to how incredibly closed-minded they've become.
  • You can't be dogmatic about facts. You can, potentially, be dogmatic about theories. However, the position, "I shall assume that this is true until someone gives good evidence otherwise" is not the same as saying, "I'm completely certain about this and I'll never change my mind - in fact I shall now start a crusade to disprove all other viewpoints, ahahahahaha !"
  • It's OK to assume that something is true, temporarily ignore the doubters and just get on with your damn job for a while, provided that every so often you stop and check what you're doing. You don't have to check every assumption all of the time - indeed, often the research itself will uncover a flaw if one exists. You just have to be prepared that this might happen.
  • Generally it's safe to ignore people claiming that things are wrong because they don't like them (rather than presenting actual evidence), or claiming from the off that people are lying, or (99.9999% of the time) that they've found an "obvious flaw" in a theory. Check what they're saying once in a while, but if you do so every time you'll quickly find yourself going insane and that's not good for anyone.
  • If you're looking for solid answers, forget it. Science isn't so much about getting the "correct" answer as it is about being able to make the best decision possible based on usually limited evidence. Aside from measured facts, scientific theories are only ever a guide to the Universe, a way of making decisions, not a decree of Ultimate Truth.