Science is about working out how the world actually functions. It doesn't have any truck with hocus-pocus mumbo-jumbo or fairy stories about some magical deity running the show - it's about solid, hard facts, evidence and proof. Nice, comfortable, iron-clad certainties all the way ! On the other hand, of course, science doesn't actually know anything at all, because you can never prove a theory - you can only prove what doesn't work.
Both of these are extreme positions and, like most extreme positions, they are extremely silly. The first is typically adopted by those with an anti-religious agenda; the latter by those who are unduly skeptical, overly enamoured of pure theory, and who don't seem to realise that science is not just about disproving things - the whole point is to try to figure out how the world actually does work.
In fact, contrary to what many people believe, yes you
can prove a theory - provided you're careful to define just what you mean by "prove". And "theory" for that matter.
Alternative Facts Are Just Lies With Better Marketing
Conventionally, evidence is not the same as proof. Proof, in the strictest sense, would mean absolute, inviolable certainty. Once something has been proven, it can never be un-proven. Proof, like pregnancy, is binary - you either have proof or you don't. Evidence, on the other hand, is merely information consistent with one model while disfavouring others. You could have mountains of evidence supporting one idea but not actual proof. Evidence is a continuous variable - you could have just a little bit or a whole overwhelming mountain of the stuff or something in between. Sometimes it's hard to choose between ideas if they have equal amounts of supporting evidence, but this does not mean that all ideas deserve equal weight.
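The contrast between binary proof and continuously-accumulating evidence can be sketched numerically. As a purely illustrative aside (the essay itself doesn't invoke Bayesian statistics, and the numbers here are made up), here's how repeated observations can pile up evidence for a model without ever quite delivering certainty :

```python
# Illustrative sketch : evidence accumulates continuously, proof is binary.
# Hypothetical prior belief that a mouse (rather than anything else) is
# taking the cookies.
prior = 0.5

# Assume each night's missing cookie is ten times more likely under the
# mouse model than under the alternatives (an invented likelihood ratio).
likelihood_ratio = 10.0

belief = prior
for night in range(1, 8):
    odds = belief / (1.0 - belief)
    odds *= likelihood_ratio          # Bayesian update in odds form
    belief = odds / (1.0 + odds)
    print(f"After night {night}: P(mouse) = {belief:.10f}")

# Belief climbs ever closer to 1 but never actually reaches it - a mountain
# of evidence, still not proof.
assert belief < 1.0
```

The point of the sketch is only that each new observation shifts the odds by a finite factor, so certainty is approached asymptotically rather than attained.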
In the strictest sense, absolute proof of
anything is impossible -
even in mathematics. One can always fall back on the idea that all of
reality is illusory and we're all being deceived by mind-altering drugs, or some godlike creature running a simulation - we can't trust our senses or our memories, not completely. And this is true. Everyone should stop to think about these ideas from time to time... but in everyday life, it's bloody useless. You can't go around doubting whether your socks are really real or if gravity exists.
So for the purposes of both everyday linguistic simplicity and scientific inquiry, we make some simplifying assumptions. We say that we can trust our senses and memories just enough - we might have to keep checking, but they aren't being continuously deceived. They might not give us a full view of the universe, but what they do tell us is in some sense real. The world exists independently of our senses and we can, if we're careful, measure it with something approaching objectivity. It is real, not an artificial, whimsical construction that could be shut off at any moment, and it doesn't depend one whit on our subjective feelings.
Questioning this most basic of all scientific assumptions isn't really much use outside the philosophy classroom but it's good to be aware of it. The kind of people who insist that science is all about facts and certainty have forgotten this, and usually go several steps further.
But
within this assumption it's obvious that there are things called "facts" which can in some sense be meaningfully "proven". There is something called a "wall" which if I bang my head against really hard will
hurt, dammit. But what is a wall ? Ahh, there things get complicated. Modern theory has it that the wall is made of atoms consisting of protons, neutrons and electrons. Electrostatic forces between the atoms of my head and the wall prevent the two from overlapping. Perhaps, one day, our theories about the nature of matter will change - but I can state
with certainty that something called a wall will still exist. Its existence has been proven. It is a fact. Just as something called gravity definitely does exist, but our ideas about it have
changed markedly over the centuries.
So long as you accept (at least the assumption of) reality as objective and measurable, you will inevitably have to deal with facts. Facts are easy. I measured
this temperature when I stuck the thermometer up my cat's backside; I saw
that many people at Trump's inauguration*. But facts are snapshots - they tell us exactly what reality was like in extremely specific moments and circumstances. They don't tell us, by themselves, how the world actually works or what's going to happen next. Facts describe how things
are, not the underlying causes of change.
* Finding the connection between the temperature of a cat's anus and the number of attendees at Trump's inauguration is left as an exercise to the reader.
For that we need a model, a hypothesis, a
theory. Some description of the way the world works that agrees with the facts and lets me predict what will happen in the future (or in other circumstances), provided I have sufficient information. Hypothesis and theory are often used interchangeably, but it's useful to differentiate : a hypothesis generally means a model which has only a little supporting evidence (or none at all, but which is at least consistent with the established laws of nature), whereas a theory has lots. There's no rigorous definition of what constitutes little or lots, but the relative difference is important. And for that matter scientists
don't always use these terms very strictly, but I will do so here in order to make things as clear as possible.
So given that there are hard facts, and models make predictions about facts, can we ever prove a model is correct ? As a hypothesis gathers evidence and becomes a fully-fledged theory, can it ever accumulate so much evidence that it transmutes into true proof and the model itself is accepted as fact ?
Yes, it can. But the limitations of that theory must be very, very clearly specified.
You Can Prove A Theory, But Don't Be Hasty
[Image: This little dude makes for a surprisingly good metaphor.]
Suppose that you leave a big plate of delicious chocolate chip cookies out on a table. Every day for a week, you find there's one missing and a trail of crumbs leading across the floor into a little hole in the wall. Your conclusion* ? A mouse has been stealing your cookies ! The little furry bastard ! That's a very solid hypothesis : it fits everything you know about the behavioural characteristics of mice and the structural properties of cookies. You haven't actually seen the mouse though, so it's not a theory. Maybe it's a rat. Or a snake. Or a particularly industrious snail or a very small elephant. These hypotheses aren't as good as the mouse hypothesis but in a strict sense they are still valid.
* Wait, conclusion ? Yes. A conclusion is just what you think is going on, regardless of the strength of the evidence. It's usually, but not always, associated with a chain of logical reasoning.
[Image: "These are small, but the ones out there are far away".]
So you stay up one night, keep very still and wear some night vision goggles. And lo and behold, you spot the mouse !
Your hypothesis has now become a theory. The evidence supporting it isn't merely consistent with your pre-existing knowledge of how the world works - it's independently verified and extremely strong evidence. You could get even better evidence with a video recording, so that you can check it repeatedly and show it to your
skeptical colleagues.
You might wonder if your theory is now already a proven fact. Well, no. You've only observed the mouse this one time. Even if you keep observing the mouse time after time and keep records with a thermal-imaging camera and get multiple witnesses (wow, you're really going for this !) you won't ever be able to
prove it was the mouse that did it before your observations began. So you're proving there's a mouse that is
now stealing cookies; you have "only" a theory that the mouse
has been stealing cookies for the past week. The oft-derided term "only a theory" is both right and wrong here : a theory is not as good as proof, but it's still better than a mere hypothesis.
Of course, this particular theory is an excellent one. There's no sensible reason to doubt it - you have a great explanation for the current observations and no evidence that anything else was ever at work. The only way to prove it, however, would be to suddenly discover that a CCTV camera had recorded footage of the entire week, showing that the mouse was entirely and solely responsible. Otherwise it's still just about possible that the mouse only stole one cookie and the rest were taken by a snake wearing a tutu.
[Image: A hat is fine too.]
This means your theory has retroactive predictive power - it predicts what was happening in the past. If you're very lucky, it's testable : all you need is a source of observations covering the previous week. With that CCTV footage in hand, your theory that a mouse ate all the cookies is proven; it has transmuted from a model into solid observational fact.
But you can extend this to predict what will happen in the future. In this case, your theory
immediately becomes unproven. You can't travel into the future, observe the mouse and travel back again - and if you did, you probably shouldn't be allowed a time machine because you're much too boring and responsible. In a sense,
all theories about the future* are
inherently unproven by definition**.
* Or indeed in other rooms with mice and missing cookies right now. The important point for predictions is not that they predict that things will happen in the future but that they state what new observations will uncover, even if those observations were taken years ago but never examined.
** Well... within reason. If your theory depends on only a very limited number of variables you can get away with calling it proven - as we shall see, few would argue that we can't prove the existence of gravity or reliably extrapolate its continued existence in the future.
But your theory is only unproven for now; it is not
inherently unprovable. You just have to wait and see what happens. However, because you can't guarantee exactly what will happen in the future, you need your predictions to be more specific. You already know the mouse was alive in the past to eat the cookies (and you only specified
a mouse, not a specific mouse) but you can't be sure how long this mouse will live. So your prediction, for a provable theory, would be something like, "a mouse will continue to emerge from that hole and steal cookies provided everything remains as it is now - that is, there must be at least one mouse who lives in that hole who is willing and able to eat cookies".
This safeguards the crux of your theory against unforeseen eventualities. If you'd simply said, "the mouse will keep eating cookies", the mouse might not be hungry for a day or so, or die, or the hole might be blocked up or the whole house demolished. Your theory would be disproven, but then your theory was pretty stupid because it assumed the mouse was able to magically overcome impossible difficulties. It would even have to invoke necromancy if you didn't specify that only living mice will eat the cookies.
[Image: This is of course the Death of Rats, but don't worry, he also does mice. And gerbils.]
Of course in everyday life being so specific is not necessary, because we all have common knowledge and assumptions that we don't actually need to state so explicitly. Not so in scientific research where the whole point is that we don't yet know the full story. Here it helps to be as specific and explicit as possible - otherwise a single contradictory data point could be taken as falsifying your theory even though the core of it was basically correct.
To be useful, a theory should have both general and specific components. If it only works for a unique case - only a single mouse in the entire world likes cookies - it's not really going to have much impact. Rather it should say that some fraction of mice like cookies and are able and willing to eat them in the right situations. That way you can apply this to different situations and make a useful prediction : there's a mouse here but there's also a cat, so the mouse probably won't risk stealing food unless starving or the cat leaves. Hence the "mice eat cookies" theory isn't wrong, it's just that the complexities of it mean that not all mice eat all cookies in all situations.
In simple hypothetical examples, it's not so difficult to turn a theory into a fact. This raises two questions : does a theory automatically cease to be a theory when it's proven true, and can this really happen in practice ?
Not Everything Is Just A Model, But Some Things Are
I argue that the answers surely have to be "no" and "yes" respectively. The mouse theory wouldn't lose its predictive power once vindicated : one could use it again easily enough in similar situations. This is much like the theory of evolution - the very basic model is known with certainty to be correct, since speciation has been observed to actually occur. It is both a model of how things occur and the truth, even if all the nitty-gritty details aren't fully understood. That the world is round is similarly objectively true, but it can still be used as a predictive tool. Within the ever-present assumption of reality as objective and measurable, these things have to be regarded as true.
[Image: Although, on the other hand...]
But while the theory of evolution and the spherical nature of the Earth are equally certain, they do not possess equal predictive power. The shape of the Earth is a fact which can be measured to arbitrary precision, hence it can be used as a tool to give predictions of arbitrary accuracy. Evolution, on the other hand, has an unknown number of variables, so its predictive power is far less.
Here the ambiguity of the word "theory" is felt with full force.
Most theories are not known with certainty to be true - the evidence for them is
a matter of degree. General relativity is an excellent example of a theory that's tantamount to truth, but couldn't actually be called "proven". The evidence that the model works is overwhelming, but it's also known to have serious flaws and few people regard it as a complete description of space-time. It's possible that its most basic tenet of gravity as the result of curved space is indeed true, even if its detailed description of the curvature is wrong or incomplete. We just don't know yet.
Which also illustrates another level of ambiguity. We know for certain that gravity exists, even if we don't know precisely what it is. We also know for certain that it's related to mass and causes things to accelerate towards each other - so we can be certain,
absolutely certain, that when a cat knocks things off a table they'll fall down. So gravity is both fact and theory in a way that evolution and the spherical world are not; we are certain of some aspects of gravity, but (arguably) highly uncertain about some very fundamental aspects of it*.
* Or at least we were 100 years ago, when general relativity was not widely accepted.
Conclusions
Yes, you can prove a theory - provided you're careful. You must remember that you're operating within the domain of scientific knowledge and that you're assuming reality is objective and measurable. If you doubt this you can doubt everything and potentially learn nothing. This is not necessarily wrong, it's simply
beyond the remit of science. So you can't invoke the "reality is illusory" explanation to cast doubt on anything if you want to remain scientific. Some people would argue that only knowledge obtained scientifically is knowledge of any kind; I am not one of those people. I simply say that there is stuff which is scientific knowledge, and stuff which isn't. Whether the latter is any good or not is another story.
You must also be careful about your definitions. I have chosen "theory" to mean a model which gives predictive power. You could fairly argue that when something is an objective, measurable fact (even when it can be used to make predictions), it is fundamentally different from a theory. That's perfectly reasonable, but not the definition I'm using here. If you want to explicitly differentiate between these observationally proven models and uncertain theories, go ahead, but adequate terms are hard to find in English.
Hard facts certainly do exist, within the framework of scientific knowledge. By extension, so too do irrefutable models. At the other end of the scale there are statements which are utterly false and models which are simply wrong. But most things are infinitely more ambiguous : we know evolution happens but not all details of the process are understood; we have a damn fine theory of gravity but know it's flawed; different theories can have different levels of predictive power even though they've been proven to be true. There are shades of grey with only occasional streaks of black and white.
Science is built on facts but it does not deal exclusively in facts; the usual ambiguities do not make well-established theories untrustworthy or useless. Most theories are not strictly true - view them instead as decision-making tools. The best theory offers the best possible decision given all the knowledge available at the time : it may not be correct, but it is still the best choice you can make even so.
Ultimately though, science is about the search for truth. Not just getting better and better models, but actually learning how the Universe really works : those rare streaks of black and white are its principal goal. At very simple levels this is easy and has already been done many times - what were once controversial models are now accepted, irrefutable
tools. As we progress our models get more sophisticated and harder to verify, and really big breakthroughs may take longer and longer. We may have to content ourselves with smaller steps, but we can still prove and disprove important aspects of different theories. Still, I for one am not interested in
perpetual uncertainty. For me, the whole point is that one day - most likely long after I'm gone - someone will eventually be able to say, "Ah-hah ! Now I finally understand what the hell is going on."
[Image: "That'll do, pig. That'll do." would also be acceptable.]