


Tuesday 4 December 2018

You Can't Tell Me What Not To Think (II)


Last time I looked at the backfire and Streisand effects : when attempts to refute or suppress an idea go badly wrong, convincing people it's actually true and/or attracting unwanted attention. We saw that neither of these is guaranteed to happen in any argument, and that the backfire effect in particular is probably mostly down to bad rhetoric. The Streisand effect depends on curiosity and on how much of the information leaks out, and no-platforming policies may actually be successful if the audience already knows what the speaker would have said.

We also saw that the effects of arguments are not limited to strength of belief. They can also cause us to like and trust a source more or less, depending on our predisposition towards their argument. Feedback loops even between a couple of individuals can then become extremely complicated, with the persuasive effect of an argument causing one side to like the other more, which in turn causes them to like their argument more, and so on. Add to that that people can hold many different ideas and rarely see eye to eye on everything, and it's clear that at best, all we can suggest here are broad guidelines. There's no programming manual for people.

Which is not to say that these guidelines aren't useful or important. In this second part, I want to summarise what's known about how persuasion works on a one-to-one basis : how and why specific techniques work at the level of a single issue. Specifically I'll concentrate on the different factors at work when people form conclusions, which, partly due to psychology and partly due to the fundamental nature of meaning*, they simply cannot do based on the evidence alone. The one major aspect I'm going to deliberately gloss over, which I've already hinted at, is information flow rate. That ties in with the spread of ideas in groups, which I'll largely leave to the third, concluding part of the trilogy.

* I wrote that bit while drinking whisky and feeling extra pretentious.


How do people form conclusions ?

... and what is a conclusion, anyway ?

The question, "what is knowledge ?" has plagued philosophers for millennia, with the best answer generally reckoned to be something along the lines of, "mumble mumble mumble justified true belief mumble mumble things that you clearly and distinctly perceive mumble mumble mumble".

I've decided to skip one of the biggest topics in human existence. Fortunately it's incidental here anyway. All we need is to look at what it is that convinces people something is true, to which psychology has provided some more definitive answers.

Of course, even deciding what we mean by "what people accept is true" is not straightforward either. There are differences between knowledge, belief, and behaviour. For example on a roller-coaster you may know, intellectually, that you're perfectly safe, but you may still scream like a little girl as though you're in serious peril : you don't necessarily have any choice about what you believe. Virtual reality simulators can make us jump and duck when things appear to be about to hit us, even though we know full well we're in no danger. Even self-knowledge isn't perfect, with well-documented examples of people being deeply mistaken about their own ostensibly very important, powerful emotions, as well as about their own morality and sexuality. Habitual behaviour provides another case of doing things we may not think are really necessary. And the less said about goals, the better : suffice to say that merely knowing something doesn't mean we care about it. Finally there's the enormous topic of intelligence : knowing a fact doesn't mean we understand or accept it (which I shall largely gloss over here). The point is that the relationship between knowledge and behaviour is horrendously complicated.
However some behaviour remains inexplicable.
Even when restricted to conscious thought we run into difficulties. When we learn new information, our brains don't automatically say, "right, let's check if that fits with absolutely everything I already know or if I'm gonna have to chuck out some conclusions so I don't get a terrible migraine". This makes it possible for conflicting ideas to live alongside one another. So, if someone believes one thing, that does not mean they can't believe something which is in apparent contradiction. Maybe they shouldn't, but they do anyway.

In some ways this is reassuring, since implicit bias studies have found that people tend to favour white males over other demographics, even if they're not a member of that group. You might conclude that means they're all secretly deluded, self-hating bigots. Thankfully the brain's ability to graciously host completely contradictory ideas means that that's not necessarily the case, even though people can very well act as though they indeed are self-hating bigots. Behaviour, then, doesn't necessarily tell you what people really think. It's far more complicated than that.

Doctor Who may have taken it a tad literally, but humans really are paradox machines.
Attempting to determine if there are multiple levels of beliefs and knowledge and how these relate to behaviour is definitely a matter for professional psychologists and philosophers. Still, I think something useful can be usefully said about the common sense understanding of the phrase "something people sincerely believe is true".

Some researchers have proposed five different criteria by which we assess proposed conclusions (which in turn leads to a wide variety of different methods of evaluating information) : our trust in the source, how many other trusted people we know that believe it, whether we think the evidence supports that interpretation, whether we think the argument is coherent, and how it fits in with our existing knowledge and beliefs. That puts some interesting flesh on the bones of the statement last time that people disbelieve information if it contradicts too much of what they already know, which we'll explore more fully in part three. The point here is that while reality is objective, our interpretation of it is often anything but; that at the most fundamental level we are unable to reason without trust and emotional influence. Don't underestimate the difficulty even of ascertaining how someone reached a conclusion, much less if their conclusion is correct or not.

For now, although you can find umpteen articles about this all over the web, here's my own take on the matter.


Comparisons to what we already know

We all of us have access to internal and external memory. We are capable, under the right conditions, of comparing our observations with this vast database of our own and other people's findings. But though our long-term memory does come into play, this is a demanding task, and our default comparisons are much simpler : they are relative, recent, and local.

This simple statement has remarkable explanatory power. For example, by rights we ought to be constantly thrilled by our access to clean water, electricity and other modern conveniences our ancestors lacked. But we aren't, because although such things are seen as wondrous for a little while, they quickly become normal and accepted, so we only appreciate them when they're taken away.

At least according to Terry Pratchett's Hogfather anyway.
We're also tremendously bad at estimating our own happiness and the state of those around us, and what the long-term effects of major events will be on our own state. More subtly, when we're aware of a problem, we start to actively seek it out. When it diminishes, previously minor issues become as important as the major ones used to be. Things that were not previously threatening can then become so. This also explains why people in privileged positions become jerks : for them, a minor difficulty is genuinely perceived as being much worse than for someone used to it. And it helps explain why we sometimes excuse our bad behaviour by our recent good actions, and, somewhat more unexpectedly, why we put shitty things in time capsules*.

* Some even have it that this can explain the lurch to the right the Western world is currently experiencing. I disagree that this is primarily due to rapid technological development, though I'm more sympathetic to the idea that some other perceived rapid change - social, political - makes at least an indirect but significant contribution.

Problems are a bit like fractals : once you zoom in, details that were once small and insignificant become just as important as larger features. And our bias towards the recent means we don't appreciate improvements - even massive ones - as much as we should.

This means that if we debate with someone, they'll primarily evaluate our statements relative to their own recent* experiences. We learn mainly (some argue entirely) by induction, and only with effort and/or training do we sit down and actively attempt a more objective, impartial analysis. Part of the reason why arguing with people on the internet is so hard is that we rarely know much about their relative, recent, local experiences. Exotic and normal are both relative states, and arguments that may be fabulously successful in persuading some can massively backfire with others.

* Whether "recent" should be taken literally or not is unclear. It could be rather that we compare current events to the last few similar experiences, regardless of when they occurred - a sort of "numerical recency", if you like.

That's our default behaviour, but again, sometimes we do take the time to analyse things more carefully and impartially. There doesn't seem to be much research on the conditions under which this happens or what makes us most rational - rather the standard approach seems to be to assume we behave logically by default and see what goes wrong. But equally, it's worth bearing in mind that we do need evidence to form a conclusion. Even though the broader context - ideology, existing beliefs, trust in the source, etc. - matters a great deal, context still has to operate on evidence. It doesn't always mean the observation is particularly astute or the reasoning very good, but reality usually has some role to play - even if only a perverse one. And that's well worth remembering. Most of the time our brain does a job which is "good enough". If it didn't, we'd all be dead.

Natural selection isn't foolproof, mind you.


Ideology and emotion : winning hearts and changing minds

Ideology may be loosely defined as the way we think the world should work. One notion I rather like is that there are different moral elements which we all value to different degrees. Such "moral sliders" might be independent of (though not necessarily uncorrelated with) intelligence, giving different basic ideological systems. According to this, liberals most value care and fairness, whereas conservatives prefer loyalty, respect, and purity (though most people will value all five of these to some degree). We may be hard-wired with initial settings for these, or they may be set down early in life, and it's unclear how far (or how easily) these values can be adjusted later on.

However we arrive at them, moral values provide a very fast way in which we can evaluate a statement, and changing someone's ideology is difficult at best. If we want someone to believe something, it's better to try and frame it to be compatible with their ideology instead - personal examples can be found here*. Unless you've been living under a stone on Mars your whole life, I bet you've witnessed arguments rapidly degenerating because someone insists that the other side shouldn't merely stop liking something but actually hate it instead : they try and shame people for their beliefs. Which makes their moral sliders fight back hard, causing a spectacular backfire (it seems the brain registers attacks on core ideas using the same systems it has for dealing with physical threats). Hate and anger and shame are all very powerful forces, useful for rallying the troops and inspiring action, but you can't just demand people hate whatever target you choose. If you actually want to get people on your side (and of course sometimes people just vent because ranting feels good) and get them to hate what you hate, you've got to go about it with much more subtlety. As we saw last time, changing stance and changing strength of belief are different. More on that later.

* Couple of shorter recent examples : I thought this video on the obesity crisis was interesting, as was this article on sugar, despite contradicting my existing opinions.

And I suppose "bass boost" is equivalent to how loudly someone shouts while "virtualiser" is how many news channels they appear on...
Sometimes, as on a roller-coaster, the difference between knowledge and belief can be extremely enjoyable. The contradiction between reality and expectation can often be genuinely interesting, or, if we have no (perceived, conscious, ideological) expectation, we can accept new findings without much dispute. On the other hand, perceived ideological conflicts with reality seem to be almost always unpleasant, usually provoking us to rationalise away new information rather than re-evaluate our core beliefs. The leading theory seems to be that this is because we have an emotional investment in these ideas, such that they've become part of our identity. And as we noted last time, social praise can increase our belief in an idea.

Emotional manipulation can have sinister uses : if you keep people permanently angry, afraid or otherwise highly emotional, it becomes impossible for them to think straight. That strips away their rationality leaving only their emotion-driven ideology to deal with. And of course you can manipulate emotions physically as well as through information : you try being rational if you're exhausted or massively aroused, for example. Or indeed suffering from lead exposure. Nutrition has also been suggested as a key aspect of human cognitive development.

If you can't convince them, confuse them. Or terrify them. Better yet, do both.
What I particularly like about this basic concept of moral beliefs is that although they may be linked with intelligence (or not, doesn't really matter) they act completely independently of it. So in an argument, you're effectively fighting different, sometimes opposing forces : getting someone to accept something intellectually is a different matter from gaining their ideological approval - and they don't even have any choice about the latter. Perhaps that's part of the difference between knowledge and belief. If I tell you, "don't worry about the sharks, I can easily save you !" and you can clearly see I'll easily be able to rescue you long before the sharks arrive... well, that knowledge is clearly going to do you little good against the belief in the approaching dorsal fins and their associated big nasty teeth.


Trust

In an ideal world we might evaluate statements based purely on their content. In reality this would be Bloody Stupid, because liars would quickly overwhelm the (literally) terminally honest (I strongly recommend playing with the interactive simulations in that link). More on the group dynamics of this next time, but here the relevant message is that we evaluate statements based on who's saying them and in what context. For all its limitations, this gives us a much faster, easier way to assess truth than carefully working through the details of an argument ourselves. It's especially important when dealing with conclusions from highly specialised areas of expertise.
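To see why, here's a minimal sketch (in Python, and entirely my own - not the simulation linked above) of an evolutionary trust game. The payoff values are pure assumptions for illustration : mutual honesty pays both sides a little, cheating an honest player pays the cheat a lot, and the least successful players copy the strategy of the most successful.

```python
from collections import Counter

# Assumed payoffs : (my score, their score) for each pair of moves,
# where "C" is honest cooperation and "D" is cheating.
PAYOFF = {("C", "C"): (2, 2), ("C", "D"): (-1, 3),
          ("D", "C"): (3, -1), ("D", "D"): (0, 0)}

def always_trust():   # the (literally) terminally honest
    return "C"

def always_cheat():   # the liars
    return "D"

def tournament(population, rounds=5):
    """Everyone plays everyone else; return each individual's total score."""
    scores = [0.0] * len(population)
    for i in range(len(population)):
        for j in range(i + 1, len(population)):
            for _ in range(rounds):
                a, b = population[i](), population[j]()
                pa, pb = PAYOFF[(a, b)]
                scores[i] += pa
                scores[j] += pb
    return scores

def evolve(population, generations=8, copied=5):
    for gen in range(generations):
        scores = tournament(population)
        ranked = sorted(range(len(population)), key=lambda k: scores[k])
        # The lowest scorers abandon their strategy and copy the highest scorers.
        for loser, winner in zip(ranked[:copied], ranked[-copied:]):
            population[loser] = population[winner]
        print(gen, Counter(agent.__name__ for agent in population))

# Start with mostly honest agents and a handful of liars.
evolve([always_trust] * 20 + [always_cheat] * 5)
```

Run it and the unconditionally honest strategy is wiped out within a few generations, which is exactly what happens if statements are taken purely at face value.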

Additionally, people don't always say exactly what they mean, so context really does matter : a drunk bloke in a pub doesn't carry the same weight as a book by an Oxford professor, even when the bloke is the author of the book. And trust is a function not just of who says something and how they say it, but also what they say : I trust my dentist's knowledge of teeth but not necessarily of archaeology. Even the physical nature of the communication medium may be important, possibly due to cultural conditioning (i.e. "books on dentistry are reliable, drunkards rambling about Mesoamerican architecture are not") but maybe even due to the very nature of sensory perception.


The point is that who makes the argument may matter as much, or perhaps more, than what they say. And the arguments they make can change our perception of them in a complex feedback loop. For instance, we know that people we view as competent are seen as more likeable when they make a mistake, but if we already view them as incompetent, mistakes cause us to like them less (there is presumably a limit to this, beyond which we start to re-evaluate our more fundamental perception of them).

Establishing trust is tricky. You've got to demonstrate your intellectual credentials, and often this can be remarkably easy (e.g. be an article on a major website, don't be a commenter on said website). In such cases trust tends to be the default option - sheer social prominence seems to engender an extraordinary amount of credibility; witness celebrity actors or even just attractive people who gain legions of followers because boobs, apparently. Less prominent individuals have to work harder. More important than credentials and cleverness may be to demonstrate goodwill, that you're trying to do right by everyone. Expertise can be limited, but goodwill is seen as an inherent property that you'll always have.


Expectation

The impact of culture is far from fully understood. But what seems clear is that the data doesn't speak for itself, and some things have to be taught. Evidence is crucial, and some interpretations seem to be natural and universal... but others don't. That's another enormous topic I can safely skip. For our purposes it simply means that people evaluate things based on their own memory using the methods they have been taught, which will differ at an individual level as well as between whole societies. Even merely saying, "focus on this", can have profound effects. The gorilla test is probably the most well-known example, but it's also worth remembering that without that prompt to focus, almost everyone spots the gorilla easily.




Behaviour

While we might usually try and change behaviour by changing opinions, changing behaviour can change thinking. People in positions of authority can, for example, create rules for behaviour that lead to people rationalising (consciously or otherwise) why they do it. We might even start to impose rules on others without understanding their purpose. What I find really interesting is the idea that more intelligent people can be better at this self-justification, whereas it's more critical people who actually stop to question things. Analytical intelligence is, I suggest, the ability to process information to form a conclusion, whereas criticality is the ability to overcome bias. They might very well be linked, but they aren't the same thing.

Obviously this method of dictating the rules to change belief isn't available to everyone, but then, neither are any of the others. Our default - and I'm as guilty of this as anyone else - is to assume that in any argument we'll be able to change other people's minds straight away with good evidence. But for strangers on the internet, to take a simple example, we have no idea who the other guy is or what their agenda might be. We assume that our own conclusions are completely rational and evidence-based (or equivalently that our own ideology is just obviously the right one) but, somewhat paradoxically, we think of the other person as equally logical but not necessarily trustworthy and perhaps very stupid. Brains are weird.


Cognitive ease

The final aspect of how we form conclusions I want to mention is cognitive ease, probably the biggest key to persuasion of all. If something fits with our existing beliefs, ideologies and methodologies, is framed in such a way that it fits with our world view, is expressed in a simple way, and comes from a trusted source, then it's easier for us to believe because it's easier for us to process. We're also less likely to examine things we agree with as carefully as those we don't (confirmation bias). Conversely, things which go against some or all of these conditions force the brain to work harder. That can make them harder to accept, but it can also make you better at rational analysis. There really is something in this old comic after all :


Though it should be remembered that this is a guide to how people think, not what's true. Sometimes things are simple and right, or complex but wrong. And what can seem simple and even unavoidable to one person can seem complex and abhorrent to others, and vice-versa. The reason we make comparisons based on relative, recent experiences is that this is much easier for our brain than if it had to sift through our entire memory, carefully making pro/con tables and summing everything up quantitatively. It may be a fast way to reach a conclusion, but it's hardly the most careful or precise method.

We've already looked implicitly at some ways an idea will have cognitive ease, but there are some which are less obvious. Sensory input seems to matter, e.g. whether a font is easily readable, whether someone's name is easy to pronounce. So does mood - being happy makes things easier to accept. To a point, anyway.


Rhetoricians have been exploiting this for millennia, using statements designed to appeal to vanity and ego, making their audience feel special and individual. A skilled orator has an abundance of parameters to exploit that simply aren't available in plain text : pace, tone and pitch of voice, physical gestures. Their emotions are (or at least so we like to tell ourselves) easier for us to sense, so we think we get a better indication of whether they're being truthful. Of course this also gives a skilled liar more tools to work with. Here's an example of something that's obviously word salad in text form that (apparently, though God knows how) wasn't so clear to the audience when spoken :


How the holy flying crap that persuaded anyone I'll never know. A much more common approach is to tell a story. This engages us personally as well as being incremental*, so that each new piece of information is small and builds on what came before in a consistent development : each successive advance is cognitively easy. Perhaps one of the most skilled approaches of all is to get people to think they're working things out for themselves - they can hardly believe themselves to be inconsistent (even if they actually are), and they will tend to frame things within their own ideologies anyway - whilst actually manipulating them. Socrates was famously bad at this. Others are more successful.

* I'm not sure if the Google Plus link will be preserved, but the story is nice : "Apparently a few years back Ebay changed their page background from white to yellow. Everyone hated it, huge outcry, so they changed it back to white (don't do this). Then a clever engineer had an idea. They made it so that gradually, over the course of several months, the page background changed from white to yellow. Did the users notice? Of course they didn't!"
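As a toy illustration of that kind of incremental change (a sketch of my own, nothing to do with eBay's actual implementation - the target shade and timescale below are just assumptions), here's how gently a background colour can drift when the change is spread over a few months :

```python
def blend(start, end, fraction):
    """Linearly interpolate each RGB channel between two colours."""
    return tuple(round(s + (e - s) * fraction) for s, e in zip(start, end))

WHITE = (255, 255, 255)
YELLOW = (255, 244, 141)   # an assumed target shade
DAYS = 90                  # "several months", assumed

for day in range(0, DAYS + 1, 10):
    r, g, b = blend(WHITE, YELLOW, day / DAYS)
    print(f"day {day:3d}: background #{r:02X}{g:02X}{b:02X}")
```

Spread over the whole period, no channel shifts by more than a point or two per day - far below anything anyone would consciously notice, even though the endpoints look obviously different side by side.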


The tricky thing about cognitive ease is that if you want to get people to think, you have to get them to deal with uncertainty, whereas if you want to persuade them, you shouldn't leave a narrative gap. The problem is that people often prefer an incorrect explanation to an incomplete one. Yet it's usually easier to argue towards a state of uncertainty than to go straight for the jugular. Sometimes you can say, "well, actually there were several different factors at work as to why your daffodils exploded". This doesn't leave a gap but does allow some uncertainty about which reason was most important (was it the gunpowder or was it the... oh, fine, bad example). After all, there are gaps and then there are G  A  P  S. Surely the worst case would be to say, "we have no idea what it is, but we're certain your explanation is wrong".

One technique I particularly like is to ask someone to think of something that could undermine their belief by themselves. It's a very direct attempt to promote critical thinking rather than trying to simply win people over.

More difficult are the cases where someone's interpretation is just plain wrong, e.g. the exact opposite of what the data indicates. It's not possible to make everything equally simple, but I suggest that if someone's interpretative methodology is flawed, explain to them why using unrelated examples that they don't care so much about. For preference, don't do it there and then. Bring it up some other time, otherwise they might twig as to what you're doing and you might only make them think that your method must be wrong.

Time itself is a big factor too : things which are cognitively challenging take more time to analyse by definition. Changing stance seems to be almost inherently cognitively challenging, so give people time to mull things over. Don't expect to win them over on a difficult topic there and then - much more on this in part three.

And be sure to mention the original interpretation as little as possible : studies show that if you go about explicitly debunking something, people can end up remembering the explanation you wanted to disprove more than the correction. If you have to mention it, start and end with the correct explanation, mentioning the erroneous interpretation only briefly in the middle. Rebuttal articles (the kind that only say, "this isn't true" rather than trying to find out what's actually going on) can be useful, but only if people are neutral or already dislike whatever's being refuted : they enhance existing belief, but are not necessarily persuasive. As we saw with ideology, you can't just tell people to be angry about something : you have to strip away their approval of an issue before they can start to attack it for themselves.

Finally, one important aspect of cognitive ease is simple repetition. We may have a bias towards information we learn first, since it should be easier to implant a brand new idea than change an existing one. After that though, if you repeat a lie often enough, they say, it becomes accepted as truth and you may even start to believe it yourself. But it isn't just lies. A more accurate (though less appealing) statement would be that the brain has an easier time processing familiar information and is therefore more likely to accept it as true. Extra emphasis on more likely. It's not guaranteed. If it wasn't already obvious, it really should be now : all of these remarks are only guidelines. The brain isn't a programmable computer, but it's not as mad as a million monkeys on a million typewriters either. With notable exceptions, of course.


Repetition is complicated. While it's not necessarily a good idea for individual people to argue their case based on many different reasons (people become suspicious and wise up to the fact you're trying to persuade them), I'm not aware of what happens if you get different reasons for the same conclusion coming from different people. My suspicion is that this would be very persuasive indeed, but we'll look at the effects of group dynamics next time.



Summary

Last time we looked at some extreme hypothetical cases. But most of the time it's not these extremes we're after. Yes, we might go on a rant about Edward Gibbon's crappy writing occasionally. And yes, sometimes we might want (even need) to do the verbal equivalent of hurling stones at an angry cat.

Usually though, we do actually intend to change someone's mind in a significant way, not by the trivial case of basically agreeing with what they already think or getting them to believe something even more passionately, but by fundamentally altering their stance on an issue. That means that at some point they will have to become uncertain in order to change their opinion. According to all the links cited here, the way to increase the chance of success is to minimise the difficulty and maximise the pleasure that this presents. It's essential to remember that all of these techniques will only ever increase the probability of success, and they've got to be adapted to the audience :
  • Try not to argue with people who don't like you. Accept that you won't be able to make any headway and might make things worse. Don't fight battles you can't win, work with what you've got. This doesn't mean giving up, but it does mean accepting limitations to what you can do. Think of it this way : if you try and argue with some people, you'll only make things worse. So don't do it !
  • Frame the argument to agree as much as possible with existing beliefs and especially ideology. Make it seem not so different from what they already support, ideally an incremental change. People are perfectly capable of believing contradictory ideas at least as long as they don't notice the conflict, and even then they won't instantly choose one position over another. Try and work with their morality - don't try and change their ideology directly ! And if they are after hard evidence... give it to them.
  • Don't present the argument as an argument, try to avoid starting a debate. Don't make them argue the position because they'll feel defensive even if you don't say anything personal. That doesn't mean you may as well hurl abusive insults at them though ! And recognise that their personal experiences will be different from yours and may have biased them in different ways.
  • Present your case in as simple terms as possible. Repeat your main point (but not incessantly) and stay focused. Try to use a single main argument rather than lots of lesser reasons. If the point is complex, then engage the senses, use pictures, tell a story, and employ other tools of extended cognition. Let the brain do as little work as possible. It'll have to do some, just try to minimise it. In contrast, don't confuse people by using these more complex methods to describe something where a simple factual statement would do.
  • Be as informed as possible on the subject concerned - ideally, an expert with a recognised qualification. You don't have to shove this in their face instantly, but you can slip it in more gently : "Well, actually, I wrote a paper on....".
  • Recognise that you might be wrong, but don't admit to definitely being wrong. Acknowledge your own uncertainty (even if you don't have any, although to be honest if that's the case then you've probably already lost). Especially as a non-expert this will then put you on a more level playing field and you won't be seen as unjustly trying to claim superior knowledge of something that, for both of you, may be really just a hobby.
  • Understand the alternatives before discussing them : cultivate meta-knowledge. This allows you to have a stab at understanding the appeal of the alternatives. And if you don't know about the alternatives beforehand, your own position may be a lot more down to bias than you'd like to think.
  • Try to minimise the damage that your argument would entail. It's much easier to believe oneself to be misinformed or acting out of a positive intention than to say, "actually, I was being a bit of a dick". That conclusion is possible, but only as the result of a longer process. Give people legitimate reasons to excuse their belief, i.e. they were misinformed and/or well-intentioned. Don't present them as the villain.
  • Minimise the number of gaps. If at all possible, present an apparently complete (though perhaps somewhat uncertain) alternative. Rather than saying, "we have no idea", say instead, "it could be this or that, but we're not sure which". Try to mention the thing you're trying to refute as little as possible.
  • Give people time to come around. If you're changing their stance on a complex issue, they're going to need time to think things through, possibly re-evaluating a lot of supporting arguments as well. Yours are not the only arguments they're experiencing.
As these are only guidelines, it's also worth mentioning that sometimes using extraordinary methods can have truly extraordinary results. I've kept this list and the discussion focused on more everyday situations, as most of us don't have the capabilities to induce the intense levels of emotional manipulation (or medical treatments) such methods require. Just keep in mind that it doesn't always have to be softly-softly : sometimes, a hammer blow of emotion delivered at just the right moment can have astonishing results. It's just damn hard to do this by writing someone a short text message over the internet.

But the main message has been hitherto implicit : the brain doesn't have a built-in, objective truth analysis package. Such a thing may even be fundamentally, philosophically impossible. The closest it gets to true objectivity is through direct observation, and even that's not perfect. Most of the time it seems to judge based on consistency. If all available factors agree - ideology, assessment of self-consistency, prior knowledge, direct observation, other trusted people - then belief is very strong. It cannot rely exclusively on evidence because evidence without trust is not evidence at all. It has to use those other factors. We should be less surprised that people come up with stupid ideas and far more surprised that they're able to come up with any ideas in the first place. Or, as a wise man once said :

Or at least he said something very similar. Doesn't really matter who said it anyway.
Which means that if you think someone has come to a conclusion based on less than rational methods of analysis, don't even bother trying to torture them with logic. Work with those other factors instead. It sounds ridiculous, but maybe the thing to do is think of some completely irrational arguments and try those instead. After all, if stupid people can't understand their own stupidity, then the same should follow for irrational people as well.

People and ideas are bloody complex. Even the most self-aware people don't stop and consider their own biases when they decide what brand of toothpaste to buy, and even the stupidest can sometimes realise when they're being particularly dim. There can be a complex interplay between what someone says and how you perceive them : if you already hate them for other reasons, you may start to hate what they say, and if you already hate an idea, you may instantly dislike someone new if you learn they agree with it. At the extreme, you may come to view someone's support of an idea as evidence of their bias and never be affected by what they have to say at all. This isn't intrinsically a bad thing, it's just how people seem to work. And people who are highly intelligent are not necessarily very critical : if they're in the full grip of an idea, they may simply be better at rationalising it rather than really examining it.

Although of course "believe" and "think" can also be used interchangeably. I'm gonna let you have some cognitive challenges by not telling you which sense of the word I'm using.

So the backfire effect is at best only partially related to the actual facts. Yes, people may be predisposed towards disliking certain facts because of their existing beliefs. But this doesn't mean that stating those facts is guaranteed to cause them to cling more strongly to their ideals. Rather this seems to depend far less on what the issues are than on how they're framed and who is stating them. Avoiding the backfire effect and using persuasive rhetoric are almost interchangeable terms, save that the neutral option of having no effect whatsoever is also possible. Rather than the issues themselves being the problem, it's articles that use pure rhetoric, pure chest-thumping "us against them" invective, that cause the worst backfire.

Remember how last time I was saying that one study found no evidence of a backfire at all ? Well, there's a nice comparison with an earlier study which did. For example the earlier study described a statement which was designed, but failed, to correct false beliefs about WMDs in Iraq (I'd prefer to keep this apolitical, but this is an actual psychological study so blame them) :
Immediately before the U.S. invasion, Iraq had an active weapons of mass destruction program, the ability to produce these weapons, and large stockpiles of WMD, but Saddam Hussein was able to hide or destroy these weapons right before U.S. forces arrived.
And here's a version which didn't produce a backfire effect :
Following the US invasion of Iraq in 2003, US forces did not find weapons of mass destruction.
Look how different they are ! The first might not exactly be War & Peace, but it's a pretty darn complex narrative to cram into such a small space. It doesn't exactly mention the original idea, but it comes close : it keeps talking about the existence of the weapons. It's incomplete and potentially uncertain, "hide or destroy", giving multiple possible explanations when, as we know, people prefer to have only one. These aren't necessarily at odds if you stop to think about it, but on a quick reading it suggests that it isn't known what really happened. And doesn't it smell awfully convenient that they were destroyed "right before" the invasion ? That alone sounds rather unlikely, phrased as it is. Even though it doesn't insult anyone, the statement is wide open to misinterpretation.

In contrast, the second statement is a blunt fact. It's not that it denies there may be uncertainty. It's that it doesn't even open the mental doorways to uncertainties in the first place. There's no ambiguity about what happened : either the Americans did or did not find weapons, and that's what we're discussing - not even why they weren't found, but simply the fact that they weren't found at all. It is absolutely focused. There's no complexity. No uncertainty. No scope for argument of any kind. It's just too simple to be misunderstood.

Finally, we've seen that belief seems to follow a threshold-like behaviour. If someone does something stupid, your perception of them alters depending on whether you already view them as competent or not. If you trust someone, you view their behaviour and weight their ideas very differently than if you don't. If you already accept an idea, emotive rhetoric may strengthen your belief, whereas if you don't, it might backfire - stance and strength of belief are different. Likewise the techniques used have to be pitched at the right level - you can't just keep repetitively and tautologously repeating and repeating the same thing endlessly again and again, over and over, day in day out, or write everything in blue boldface comic sans to make it easier to understand. And as we saw last time, people like to be challenged, but not so much it makes them feel that the effort is just not worth it. The old golden rule of presentations is absolutely correct : know your audience. What works on one person is not in the least bit guaranteed to work with others.

If you want a more detailed guide to persuasion, I highly recommend the video below.

Personally I find Lindy's style of rhetoric very compelling for educational videos, though I can't for the life of me see this working for political statesmen. And while I agree there should be more lessons on rhetoric in schools, I can't agree that there isn't any at all.


You might come away from this thinking, "Great ! I now know all I need to become a master of rhetoric. I shall bend the world to my will and remake it for the better". Well, hopefully not, because you should now bloody well be aware that knowledge and behaviour aren't the same, if you weren't before. And in fact very little (if anything at all) of this is new. Still, if we've known all this for so long, why do people seem so damn stubborn about clinging on to obviously wrong ideas, yet so susceptible to change for the worse ? Why hasn't some wise, charismatic, persuasive person managed to put a stop to this nonsense ? Well, what this discussion is lacking is other people, a factor so large it can - sometimes - render all the others useless, thus making this post a complete waste of time.

So in the third, concluding part of this mini-series, I'll examine how information flows affect belief in a group context. Can we really stop bad ideas from spreading, and if so, how ? Should we try and restrict speech or will this inevitably backfire ?
