This is much too simple unless you're 12 and trying to make a lemon-powered clock or something. The internet throws up lots of variations on this theme, some of which are better than others. Having a penchant for silly internet memes, my favourite is this one :
The key difference is that this is a loop, not a line. Which is very much better, because it's extremely rare for actual research to ever produce a truly decisive yes/no answer. But just how messy does this get ? This meme is a very nice simplification, but I think we can do better.
Well, one step that it's missing is peer review (or more generally, discussions). You have to communicate the experimental procedure, the results, and your interpretation to, at the very least, one other recognised expert in the same field*. No-one is smart enough to spot all of their own mistakes, nor is anyone trusted enough to go unchecked. Peer review is very far from perfect, but it's better than not doing it at all.
* Some would argue that without peer review you're not even doing true science, and that a lone individual can't really be a scientist at all. That's much too extreme, but peer review is an essential part of modern scientific practice and not a peculiarity of academia.
Here things immediately get messy because there are several things which can happen :
- The paper is accepted. This basically never happens because referees like to make themselves feel important.
- The paper requires modifications but is basically OK. This can be as minor as fixing the typos or as major as re-doing the analysis and changing the conclusions.
- The paper is rejected. This does happen, sometimes for good reasons (e.g. the authors have made fundamental mistakes) but also sometimes for very bad ones (e.g. the reviewer has made fundamental mistakes, or is just an asshole). In this case the authors have two options :
- Capitulate. Ideally this only happens if the authors have made a terrible mistake and realise it.
- Fight back. Ideally this only happens if the reviewer has made a terrible mistake and the authors realise it. This can involve appealing to the journal editor, finding a different referee, or even submitting to a different journal. In the worst case, the authors may resort to institutionalised bribery to get their work published in a disreputable journal.
OK, let's try and re-work the meme to account for this :
I've also altered some of the arrows. You can go straight to "reject hypothesis" from "results" without ever publishing them. This is not necessarily good practice, but it does happen - partly because publishing every single bad idea just isn't practical. And once you "reject hypothesis" you might just give up entirely, or start over from scratch. Or you can end up in an infinite loop with a reviewer who will keep telling you to make changes.
Then there's the whole issue of interpretation, which the original meme neglects. One of the hardest lessons I've had to learn from the process of doing research is that, instead of confirming or denying my original hypothesis - assuming I had one at all, but we'll get to that later - most of the time the results do neither. They imply something completely different instead : usually a case of "not even wrong" for the original idea. They may or may not actually tell me something (else) interesting about the Universe if I think hard enough.
You can't usually just go straight from "results" to theory or rejection as in the original meme. Often, the results are so different that they set you on a new line of inquiry altogether, and it's very important to distinguish this from the more straightforward process of testing your original hypothesis. And sometimes you realise that the whole line of inquiry was genuinely pointless. So it's probably something a bit more like this :
"Theory" is a difficult term. The internet has it that scientists use it in a very special way to mean an extremely well-tested model. And we do... sometimes. But we also use it in the everyday speech in exactly the same way that everyone else does - any sort of model, no matter if it's been tested at all. In fact we often explicitly say, "very well-tested" just to make sure everyone's on the same page. There's no rigorous definition of exactly what "very well-tested" actually means in practise anyway. But I shall keep the label "theory" to distinguish it from "hypothesis", though the impression one gets from the loop that withstanding a single test is sufficient to call it a "theory" is not really the case at all.
|Or a theory.|
Sometimes we can also skip the need for additional experimentation - there's a very fuzzy line indeed between "prediction" and "explanation". Ideally a theory should predict something that hasn't been observed before and a new experiment or observation is run to test it. But if someone realises that a theory also explains an old result, that counts in its favour too.
Although I'm keeping the label "theory", "experiment" has to be changed. Experiment suggests carefully controlled conditions which can be manipulated to the scientist's whims. For observational astronomy this is utterly impossible - we can't influence a distant galaxy in any way, although sometimes we might like to. At least we can plan which object we want to look at and how we want to study it. Palaeontologists don't even have that luxury, some of the time.
|"NO GALAXY IS THE BOSS OF ME !"|
Oh heck, this is getting complicated. Which is good, because that's what real science is like. But there's one absolutely massive thing we've left out, that was in the very first kiddy version but not the later ones : background reading ! Oh noes ! You're utterly doomed if you charge in where angels fear to tread without doing background research first. If you have a really good idea, the chances are that someone else has already tried it long ago.
Background reading is a hugely important part of the whole process. It can generate new ideas or instantly falsify your hypothesis. So we've got to link this to several different places. Fortunately, this is about as refined as we should try and make the meme, so let's also tidy things up a bit and stamp my website on it in the vainglorious hope of generating more hits.
|Full size image here.|
EDIT : For those sadistic enough to care, here are some further modifications that would improve things still further :
- As pointed out in the comments, "Reject hypothesis" should lead to another "Interpretation" that leads either to "Theory still valid" (Captain Picard Full of Win meme) or a new "Reject theory" (probably also Picard Full of Win, or some such). "Theory still valid" would then lead to "Theory" (because you can just consult the original theory again and see if there's something you missed), "Observation", "Question", and "Hypothesis".
- "Background reading" needs to have a direct two-way link to "Idea". "Let's do math" should have a two-way link to "Hypothesis" and a direct one-way link to "Prediction". And there should be a whole lot more "Background reading" scattered liberally about the place, at the very least one going directly to "Theory".
I'll wait a while for further comments and then consider making some edits.
Conclusion : Fifty Shades Of Science
You might be wondering how anyone could follow a chart like that in practice. Of course, they don't - it's just too complicated. My intent here is not to produce a definitive "how to do science" guide, because that's impossible. Nor do I wish to undermine the original meme I based this on, because the simplified version has its uses. Rather I want to emphasise that science is not always such a rigid, fixed process. You adapt it to fit the purpose.
Many scientists do a little of everything at some point in their careers, but often with a strong bias in one direction or another. Some observers are happy to do nothing but examine and catalogue, rarely constructing hypotheses or mathematical models. Many theorists see observations as dirty, vulgar things, and observers as nothing more than glorified photographers, whereas observers see theorists as geeks who spend all their time playing computer games.
|I've made my choice.|
Some people become uber-specialised, knowing a single instrument or code inside out and backwards - but take them out of their comfort zone and they collapse. Others know a little of everything. A few people do go through the entire process, but hardly anyone at all goes through the entire loop every time. This is another reason why science is a fundamentally collaborative endeavour - even if you don't like talking to people, you benefit from their findings and they from yours. This collaboration knits all the different techniques together, so that overall we end up with (or rather, hope to end up with) a system that's much more powerful than the sum of its parts.
I've alluded to this already but it's worth emphasising : not all investigations start as some part of this larger loop. Sometimes whole new lines of inquiry come from sheer blind luck. And I simply cannot resist quoting the report of one referee on a telescope observing proposal who apparently just did not understand this at all :
"Yet another HI galaxy survey which wants to go deeper and open up new parameter space in the Virgo cluster. A complete census of HI in galaxies in Virgo and perhaps HI streams and other clouds is promised. The scientific goals are reasonable enough, I suppose. But the real question is the investment in telescope time. You can always find new things if you spend enough time observing."

Apparently, making new discoveries is a bad thing ! Yes, we can always make new and unexpected discoveries with enough observing time - which is a very good reason to approve of large surveys, not dismiss them. But I digress.
The main point I want to make is that science is a very different process to the one taught in high schools. As I've written before, it has a lot more similarities to the humanities and particularly the arts than is often appreciated. Yes, it deals in hard facts. But the interpretation of those facts, when it comes to front-line research, is every bit as subjective as the beauty of a Shakespearean sonnet.
The difference is that science makes testable predictions, and it has that all-important "reject hypothesis" scenario. This is no small difference - but the similarities matter too. It's not a black-and-white case of "scientists baffled" versus "mystery solved", whatever the popular media might say. Which matters a great deal, because if you see scientists continually getting things wrong, without understanding that this is integral to the process, of course you'll see them as untrustworthy idiots. Getting children to do experiments is one thing. Getting them to understand that there might not actually be a right answer at all - just the best answer that's possible given the available data - is quite another.