Thus far we've concentrated on how individual people respond to arguments, but even then it's been apparent that ignoring the fact that people are members of groups is a bit like ignoring that they tend to have legs. We are social animals, an intimate part of an intricate flowing web of information; wheels within wheels of mood, ideology, praise, goals, curiosity and bloody-mindedness. Except for hermits, that is. Hermits all suck.
So in this final post I want to concentrate on how ideas spread throughout groups and the different ways of stopping them. In part one we looked at some extreme examples of arguments that succeed and arguments that backfire; cases of controlling information that succeed and cases that don't. In part two I attempted to generalise the conditions under which successful persuasion is more likely, examining conditions for cognitive ease that aids acceptance but reduces thinking. In this concluding post, I'll try and generalise the worst possible group conditions for the spread of ideas. All of these are guidelines, of course : we don't necessarily have to fulfil every condition, but we should try to get as many as possible.
Before we get on to groups, it's worth remembering that people are individuals. So let's very quickly recap how those individual elements of our group are going to behave on their own.
EDIT : If you've stumbled here wondering about misinformation in the context of the current COVID-19 pandemic, you could do worse than check out this overview article. I always find it a bit suspicious when people ask me to share links, as though I have any kind of sizeable audience whatsoever, but it seems like a decent piece to me so I figure there's no harm in it.
An insanely brief guide to people
When evaluating new information, by default we analyse it in the simplest possible way. It's not that we can't do more complex operations, just that that's not our default. So we tend to compare things relative to our own recent, local experiences. We find it easier to accept things which alter our knowledge in small ways, preferably building on what we know rather than falsifying it, and avoiding doubt. Our emotional and physical states influence our analytical skills, hence they affect our judgements. We don't just judge the raw information presented to us, but also who's saying it and how - both in terms of what medium they use and the emotions they're seeking to invoke within us. Trust in the source and acceptance of the idea are not the same, but they are closely interconnected.
And we're creatures of thresholds. Increased difficulty of a task can be a motivation to continue for a greater reward (if only the self-satisfaction of solving a more difficult puzzle), but only to a certain point. Emotive rhetoric affects us differently depending on our current stance on an issue : if we hate it, but the argument says, "it's better than fornication with an elite team of cheerleaders", then we'll probably hate it even more; if we like it, the same argument may cause us to like it more. Similarly, our perception of information sources changes in a different way depending both on what they're saying and how we perceive them. To make matters worse, whether we approve or disapprove of an idea affects what sources we seek out : more extreme views drive a preference for more extreme sources. Polarisation can become self-driving. Isn't that just marvellous ?
It's not just about external knowledge either. Even our self-knowledge isn't that impressive - contrary to popular opinion, we can even be wrong about whether we really enjoy or dislike something. Which makes it less surprising to learn that we're not great at estimating other people either.
Finally, what we know doesn't necessarily govern what we actually do. Unlike a computer, the brain can host contradictory ideas without melting into a pile of goo; habitual/addictive behaviour needs more than intellectual knowledge to change it; our respect for knowledge is complex to say the least; and how we adjust our goals based on new information is even worse.
All this is already enough to demonstrate that explaining how ideas spread is fantastically complex, and that's without looking at all the supporting effects : personality types (which, as well as shaping our behaviour, also at least partly govern our predisposition towards accepting something) and intelligence are strongly influenced by genetics.
So it might well seem that the complexity level is too high to understand the overall picture, and perhaps it is. Maybe we'll never have a complete understanding of the human condition. But maybe if we start with a simpler approach we can still learn something useful. That's the tried-and-tested scientific approach, after all.
Group dynamics
Consider the spider plant. What a happy little fellow he is.
Spider plants can reproduce in two ways. They can flower and produce seeds, and they can also grow cute little plantlets on the end of their leaves which can take root and become their own brand new, independent plant. This is roughly analogous to ideas. If conditions are right, people can have new ideas by studying the world around them entirely by themselves, like seeds spontaneously germinating in a barren field. I'm not going to tackle that aspect much today. But they can also spread (by different mechanisms) via other people, and if we want to determine whether the flow of information can be controlled, we'll definitely need to look at that.
We've already seen how ideas shared by a group can become part of identity : being a member of a group makes you more likely to believe similar ideas, to defend them and attack the alternative ideas of other groups. But we also know that people do change their minds and ideas go viral, yet at the same time they often seem incredibly stubborn and pig-headed. Irritatingly, people seem to have a perplexing tendency to be incredibly vulnerable to believing utter crap and yet are pretty much immune to anything sensible, suggesting that if there's a key trait that defines humanity, it's the ability to be really annoying.
... or to put it another way, this suggests that ideas spread easily but only when conditions are right. Whenever trying to persuade someone, remember that as well as all the individual effects we've discussed, you're also up against the complex relations acting within and between groups. It's not just what you say or even how you say it : it's who you are and who they are as well.
This is an area that's far from fully understood, though progress is being made. One of the simplest findings is that if your friends believe something, you're more likely to believe it too : not just because you're likely to share common interests, but because they're better placed to persuade you (and you them). This means that ideas can quickly spread within groups. The term "going viral" is often used when ideas spread quickly, but it's not always accurate : most ideas spread as complex contagions. At this point I invite you to try this truly excellent online simulation which explores this in some depth. The diagrams below are taken from there.
The gist of it is as follows. If ideas truly spread like viruses, they'd quickly overwhelm us. If a single person has an idea, and everyone has a connection (an information channel) which can form a continuous path of communication to everyone else, and ideas inevitably spread along all available channels...
... then as soon as the little red dude starts chanting about, oh, I don't know, the existence of a prophetic singing cactus, then pretty quickly everyone would become convinced.
This is obviously not how it is in real life. We've already looked at how individuals can respond differently to arguments, but there are additional complexities from group membership. A single source of an idea, it turns out, is not much good : we've all got that one friend who believes in magical pixies or whatnot*, but we don't go around being so silly. But if we have multiple people telling us the same thing, we're far more likely to be persuaded. Here's what happens if we require each little simulation dude to have 25% of their friends believe in something before they do :
*... right ?
The network was the same, but it didn't spread as far ! The first dude the idea encounters in the middle group has a whole bunch of friends, most of whom don't believe in this new idea. So the idea spreads very efficiently among the less social people with fewer friends, but it's stalled - not aided, as you might assume - by encountering more social people. And yet, to compound the irony, we can make the idea spread by giving that first dude more connections... the trick is to connect him to more people who believe the idea. Of course, if we connected him to even more people who didn't believe, we'd end up making him even more resistant to new ideas.
So the connections between and within groups make the spread of ideas a complex process, even under apparently simple rules. Connections can both help and hinder propagation. The obvious route to stopping an idea would be to sever the connections along which it spreads, but as we'll see, even that turns out to be far from straightforward.
This value of 25% isn't just plucked out of the air - it's supported by controlled tests and real-world observations. For example, under controlled experimental conditions, researchers found that the backfire effect was strongest when people were given 10-15% of sources that claimed the opposite view from their preference, but above 30% this drops away and adding more sources starts to have the intended persuasive effect. And observational studies have found that if 25% of a population believe an idea it can completely replace the status quo. It also fits the extreme cases we discussed in part one and the multiple methods of reasoning we looked at in part two : no-one believes something if it's in absolute contradiction to what they already know. It goes without saying that these values are subject to uncertainty, and maybe even vary from person to person, but that doesn't undermine any of the main points.
Note that the above simulations have done something rather clever. We know from last time, and as summarised above, that individuals respond to ideas in very different ways because of an enormous number of variables. Rather than trying to model all that, what they've done is say that there are two kinds of ideas : simple ideas that appeal to psychological universals, which are very easy to spread with a single connection; and complex ideas that require multiple sources of supporting information. This makes it enormously easier to model how the different ideas spread since the people in the model don't need any properties besides connections to other people.
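If you want to poke at this yourself, here's a bare-bones sketch of the two spreading rules in Python (this is just my own toy illustration using the networkx library, not the code behind the simulation linked above; the spread function, the little example network and the exact 25% threshold are all arbitrary choices of mine) :

```python
import networkx as nx

def spread(graph, seeds, threshold):
    """Propagate an idea across a network.
    threshold = 0    : simple contagion - a single believing neighbour is enough.
    threshold = 0.25 : complex contagion - at least 25% of your neighbours must
                       already believe before you do."""
    believers = set(seeds)
    changed = True
    while changed:
        changed = False
        for node in graph.nodes:
            if node in believers:
                continue
            neighbours = list(graph.neighbors(node))
            if not neighbours:
                continue
            believing = sum(1 for n in neighbours if n in believers)
            if threshold == 0:
                adopt = believing > 0          # any one believer will do
            else:
                adopt = believing / len(neighbours) >= threshold
            if adopt:
                believers.add(node)
                changed = True
    return believers

# A toy network : a small clique (nodes 0-2) joined by a single bridge to a much
# more sociable character (node 3) who has plenty of friends of his own.
g = nx.Graph()
g.add_edges_from([(0, 1), (0, 2), (1, 2),                   # the clique
                  (2, 3),                                    # the bridge
                  (3, 4), (3, 5), (3, 6), (3, 7), (4, 5)])   # the socialite and friends

print(len(spread(g, {0}, threshold=0)))     # 8 - the "virus" reaches everyone
print(len(spread(g, {0}, threshold=0.25)))  # 3 - it stalls at the well-connected node
```

Give node 3 a couple of extra connections to existing believers and the idea gets through; give him even more unbelieving friends and he becomes more resistant still - exactly the counter-intuitive behaviour described above.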
Combining these models with what we know about persuasion on a one-to-one level, something basic but important is evident. The successful spread of information depends on quantity, quality, source and target. Quality we discussed in-depth last time. Quantity, we see here, also clearly plays a role : convincing one person at random isn't likely to lead to anything, so we need to use mass media (more on that later; "Quantity has a quality all of its own", as a wise man once said); persuading more people at once is not just a nice bonus but has a fundamentally different effect. Source also matters because, as we saw, trust is crucial. And the choice of target can make a big difference to how far the idea can spread, depending on their position in their social web.
We might guess, then, that spreading enormous amounts of badly-worded drivel wouldn't do much, since everyone will be able to identify it as drivel for themselves, whereas small amounts of carefully (and individually) targeted information would be more successful at inducing a wider spread. In fact it's more complicated than that. As we saw in part one, there's a threshold for how much garbage can be spread around before people's thinking becomes confused, but more on that later.
So of course, these simple models are not realistic, but they are instructive. To have a truly accurate model would be a much bigger undertaking, and given the sheer complexity involved it could be difficult to extract useful trends. Consider this simpler approach to be more pedagogical than it is truly explanatory. Less is more, and all that. That said, agent-based modelling techniques are already beginning to tackle much more specific problems than the spread of generic ideas, such as models of how religious tension can escalate. I've even done some very simple code myself, though not the network modelling described here. The models aren't always terribly sophisticated (much less accurate !) but they're still interesting.
It's worth re-emphasising yet again, though, that behaviour does not follow solely from belief. If, ultimately, our goal is to get people to act in a certain way, then their thinking is just one factor in this. And we should further remember that belief doesn't necessarily imply a flow of information either.
With all that in mind, I believe we can make a few reasonable inferences about information flow in groups from what we already know. These can and should be tested with models and observations, but the basics seem sound enough to me. I'm going to consider this mainly from a perspective of stopping the flow of information rather than spreading it.
Ban the ban ?
Back to the dubious plant-based analogies again (hey, at least it's not the "marketplace of ideas" which everyone else uses). Spider plants are weak and feeble, whereas at least some ideas are resilient little buggers. So let's use the much sturdier oak tree. They begin life as small acorns, some of which never germinate but others grow into saplings. A few of these survive and endure for millennia.
Now this isn't going to be an exact analogy. Cognitive ease is not the goal here : direct analogies are boring; inexact analogies are much better tools to encourage thinking. But roughly the metaphor works something like this. The oak trees are ideas : small and unformed at first, and many die off young, but a few succeed to become core parts of individual identities or even dominate whole societies. They spread themselves by seeds (but we must imagine that acorns can spontaneously generate as well, since people can have ideas from other sources besides other people), provided the soil and climate conditions are favourable. Certain things encourage their development (water, nutrients, oxygen, light, temperature) in the right amounts but are harmful in the wrong quantities. And there are natural predators, parasites and diseases that can afflict them, sometimes fatally.
With this metaphor as a rough guide then, what are our options for changing people's minds ? If we absolutely despise oak trees, what's our best option for dealing with them ? Can we actually destroy the leafy bastards or must we be content with managing their spread ?
Change the soil
Oak trees can't grow just anywhere. Too hot or too cold, too little or too much water, and they die; their seeds fail to germinate. Just as we saw with ideas last time, they have thresholds for certain parameters which determine how successful they'll be in surviving. You can't grow them on Mars or in the Pacific ocean, but the Goldilocks zone of the British countryside works well enough.
We can think of the soil here as ideology and methodological techniques (our whole world view if you like), and the climate as the current conditions. So far as I know, it's unclear how much of ideology is down to genetics and how much is due to environment, though we do know that genetics plays a significant role in both intelligence and personality. It's certainly not fixed from birth, otherwise Scandinavia would still be home to marauding Vikings and Ikea wouldn't be much of a thing. The point is that if you have amazing soil and a wonderful climate, i.e. an ideology together with evidence that's in obvious agreement with it, ideas will spontaneously germinate. People will see the evidence en masse and reach the same conclusion independently, whether you want them to or not.
In that case, there's really no point at all in trying to manage the existing oak trees themselves. It'd be like a single axe-wielding lumberjack trying to chop down a whole forest : heroic, but absolutely pointless. Banning discussion of obvious ideas that everyone is inclined to come up with naturally (even if they don't necessarily agree with them) achieves nothing. It might even be counter-productive, causing people to see it as evidence of bias instead of evidence against its validity. If, for example, you tried to ban anyone from saying that the Queen is a bad person, it patently wouldn't work. People are going to view the same evidence in very different ways, but (I hope !) it's at least possible to see how different interpretations are possible and unavoidable in some cases.
Changing the soil is, in principle, a fantastic solution, and maybe the only way to stop "obvious" ideas from spreading - essentially you're trying to change what people think is "obvious" in the first place. A thousand years ago it was obviously a jolly good idea to hop on a longboat and burn down a monastery or two; a hundred years ago it was obvious that women shouldn't be able to vote; twenty years ago it was obvious that gay people shouldn't get married. Don't underestimate how difficult it is to escape contemporary culture or how potent the changes can be once you do.
So if you can make people into highly rational, critical, analytical thinkers, then Aumann's Agreement Theorem says they should eventually agree. I think the reality is rather more subtle, but the basis is sound. Just as you can alter soil, so you can give people moral teachings and tools of critical analysis that can influence their tendencies. Maybe not entirely, but you can exert a substantial influence. Just as sufficient alterations to the soil will eventually kill even the mightiest tree, so too will altering someone's ideology and mental toolkit get them, in the end, to abandon even their most cherished ideas.
If you can alter the soil, that is.
The difficulties are twofold. If you have a young sapling, you can plant it in soil of your choosing. You won't change the nature of the tree, at least not very much, but you can choose whether it thrives or perishes. So too it is, roughly, with ideas at an early stage of development (or indeed when people themselves are young and more readily accept whatever they're told). But just as fully-grown trees prevent soil erosion, so too do strong beliefs help enforce the very ideology that allowed them to flourish in the first place. Ideas, identity and ideology are all interconnected, and although not impossible, it's much harder to get rid of more deeply-held beliefs because they are self-reinforcing.
The second problem is that even if you only have seedlings to contend with, changing the ideological soil over a large area is like being a single farmer with a packet of fertiliser : he might manage something, but not very much. If ideology is shaped by environment at all, the only real arena for changing this is in schools. It's only in schools and other educational institutions that we have a common environment where we all go with the expectation of learning analytical and critical methods. Beyond that, there are very few venues indeed where the entire populace expect and accept that they're going to be taught how to think. Yes, wise people expect to keep learning forever, both in terms of raw facts and analysis techniques. But a lot of people don't seem to expect this at all, and I would guess that the vast majority don't assume their interpretation is likely to be wrong. The default for most of us is to assume we're capable of making sensible judgements in most things we encounter (if we were wracked with self-doubt every time we decided which pair of socks to choose, life would suck), and there are few mainstream venues where people expect this to be challenged.
So yes, changing the soil would be very nice, but as individuals we can't do very much here. And, just as mighty oak trees are highly resilient, it would be of limited help against ideas already prevailing in society anyway. We have to deal with them by another means.
Change the environment
Acorns need soil, but also water, the right temperature, and as seedlings they need oxygen, carbon dioxide, light, not too much wind, and a host of other factors. Some things simply have to be taught. Someone might be more ideologically predisposed towards solving their problems violently, but they'll never become a great warrior unless they're taught how to fight. And - the Vikings come to mind again - a natural ability and even an inclination towards violence doesn't mean this will definitely be desired, much less actually expressed.
This category gets a bit complicated though, because the environment in this metaphor is both raw, objective data and the subjective opinions of other people : the trees are part of their own environment. But there are other components too. Instead of the simple network we saw in those cute little simulations, picture a vast, three-dimensional web of information with strands of different thicknesses and multiple different connections between the same sources. Because we don't just have other people or even directly relevant information to deal with, oh my no. We have information and indirect but powerful, unconscious influence from a huge number of other sources : newspapers, television, radio, works of fiction, art, educational institutions, books, magazines, word of mouth, social media, laws, elections, book clubs, sports events... God knows what else. As we saw previously, just as thinking shapes behaviour, so behaviour can also shape our thinking.
The importance of this "extelligence" is hard to overstate. For instance, growing up in medieval Norway, your options were basically trying to live in an icy wasteland or go and rape and pillage your way across Europe. There wasn't anywhere you could sign up to a degree in bitcoin marketing or lifestyle management. And it's incredibly hard to break out of the system you're embedded in, to realise that there were other possibilities you hadn't even considered. The institutions we create are designed around, and therefore reinforce, our most basic ideologies, the ones we don't even realise we have. Their mere existence shapes not just behaviour, but our very thoughts. Who grows up dreaming of being a fashion diva when the typical lifestyle choice normally involves a great big axe and lots of dead Frenchmen ?
Of course, many of the institutions are shaped by technology and economics just as much as ideology - it just isn't possible to become a professional website designer if there aren't any websites to design. There are also smaller, everyday ways in which behaviour controls thinking, e.g. monkey timesheets. One short quote captures the essence of the problem very nicely : "It’s the fallacy of believing that all the facts you know are all the facts there are." The problem is that we often don't have any other option than assuming that we have all the relevant facts, or we'd hardly make any decisions at all.
For all that we might mock loonies who tell us to "wake up sheeple !", people do seem to be products of the system to a very large degree : Americans are mostly descended from Europeans, but have some radically different philosophies; most European countries today are vastly less warlike (despite some qualms that this may have been exaggerated) than they were during most of the last century. What's widely accepted as moral, inevitable and normal today looks very different to what it did a century ago. Envisaging a new way of doing things is hard : we trap ourselves in ideological webs of our own weaving. You can't fight the choices you don't know you're making.
And yet... changing the system can and does happen. The paradox of environment is that it appears both strong and weak. One way to square that particular circle - if fucking with geometry is your thing - might be that our social institutions exploit the relative, recent nature of our default comparisons. They do have a role in making change genuinely difficult, and therefore (whatever Kennedy might say) undesirable, but what they mainly do is make change appear more difficult than we tend to believe. They make the battles harder to fight, but more importantly, they influence which battles we fight at all. If we can break through that, then there are other factors we can exploit : social norms are not the only thing shaping our ideas. Don't get me wrong here. Major social change is extraordinarily difficult, otherwise every nutcase under the Sun would be remaking society every five minutes; cultural norms do change, but some institutions are astonishingly resilient.
Where it gets really complicated is that our very concept of how to organise the flow of information and the structure of society is itself an idea. So our tree analogy is a fractal metaphor. Oo-eck. It's a bit like saying that you could re-arrange the trees in a forest and suddenly find that instead of another forest you've ended up with a toaster. And this network is at least partially self-aware...
This is getting silly. The structure of society is best dealt with as a completely separate topic, even if that distinction is somewhat artificial. For now, let's concentrate on the basic-level ideas, and worry about the changes to the system this will require and induce somewhere else. It's good to be aware that changing ideas can cause this incredibly complex feedback though. In any case, most of the structural changes are beyond the reach of most individuals, with the exceptions of political leaders, media moguls and other exceptionally influential individuals and organisations (some of which we'll look at later anyway). That's not to say that when an idea does take hold in a big way it can't enact radical societal change : changing an idea can change society, and changing society can change its ideas.
It's all bloody complicated is what it is.
Chopping down the trees
Sometimes it has unexpected consequences.
Last time we looked at persuasion as a discussion between individuals. The concept of a web or forest, in which someone's individual belief is sustained by this vast, intricate support network, raises the complexity of the challenge. And we know the simulations described earlier are lacking a lot of important details, as the website itself mentions, for instance :
- The strength of individual believers. Some are devout, others may be more casual.
- The possibility of multiple connections between individuals, i.e. multiple communication channels.
- That individuals have prior knowledge and opinions to compare new ideas with, so will have varying levels of susceptibility and interest. Accepting one idea will affect the likelihood of them accepting other ideas : fighting one idea with another is possible but very difficult.
- The social praise/shame coming from multiple different sources.
- The possibility for ideas to mutate, i.e. modification by their believers to make them compatible with the ideology (e.g. the way the term "fake news" has been twisted by those who were originally its targets).
- That individuals will trust each other differently on different issues and interpret the same evidence in different ways.
- Whether or not other connected sources are neutral on an issue rather than actively opposed to it.
- That believing something doesn't mean that one will directly spread it around.
- That there are cases where an argument may backfire, sometimes causing the individual to actively argue against an issue.
It's worth remembering that there's going to be a big selection effect at work. If you encounter someone actively expressing an opinion you disagree with, it's more likely they'll be seriously committed to that idea (rather than the typical, "I heard this from someone and have no reason to doubt it" level of acceptance). And if that's the case, then we know that a direct argument against it - unless done with the greatest of care and in the correct circumstances - is unlikely to be successful, and could make things worse.
This also helps to put ourselves in context. Again, our egotistical brains deceive us. When we start arguing, we assume that we become our opponent's most important, dominant consideration. In actuality they're still embedded in that huge support network, and often we don't even know where they are before we begin. We might be adding a mere trickle of doubt against the flood of certainty assailing them from other, unseen directions. Or in practical terms it's like saying "cats are sooo cute !" to someone who's so allergic to kitties that even the sight of a paw print sends them into convulsions.
A waterfall blowing backwards in strong winds. Any amount of water can be moved with a strong enough wind. Likewise, it's not the sheer amount of information we provide that matters. It's how much we provide compared to all other sources, and potentially the number of independent sources as well.
As mentioned last time, more often than not direct observation simply isn't possible. There is usually no choice but to rely on the other options (trust, consistency, existing beliefs etc.). That support network of other trusted sources isn't incidental, it's absolutely crucial : once a source has been established as credible, it will become as important a source of evidence as direct observation itself; evidence without trust is not evidence at all. Anything that contradicts trusted information will, so long as it only makes up a small fraction ( <10% or so, as we've seen) of the information flow, be distrusted. We seldom if ever evaluate evidence solely on its own merit, and for good reason : to the brain, things that trusted people say are at least as good as facts, if not treated as facts themselves. Which means that a source can become distrusted and its argument backfire simply by saying something radically different to what everyone else is saying.
This is not quite the same as saying an idea will backfire if it contradicts what we already know, which we've already looked at. To put it another way, suppose I give you a multiple choice question, say, "what's the main driver of galaxy evolution ?". To make it easier, I get four different people to suggest answers. The first three are lab-coat wearing scientists while the fourth is wearing a black turtleneck* and smoking a cigarette on one of those long cigarette holders that only very pretentious artists and puppy-murderers use. So is the answer :
* Obviously super-evil, then.
UFOs provide a good example. Unless you happen to be on the scene when someone claims they're seeing a flying saucer, there's not much you can do by direct observation to test if you agree with them. So if lots of people you trust start telling you that they saw a flying saucer, or even if they just keep telling you that they believe in them, you're exclusively reliant on them because you have no direct observations of your own. In the right circumstances, it can then be very easy to fall into the UFO network. But it's even easier to fall into the UFO-doubting network, because that's much larger (see also organised religions*). In both cases, belief is sustained by groupthink : there are so many connections between group members that they all believe the same thing, and any new idea becomes very hard to establish. This is perhaps the case of too many connections both helping and hindering the propagation of ideas par excellence : like-minded individuals have bonded together so strongly that they're barely able to think for themselves any more. The belief might not be caused by groupthink, but it is sustained by it.
* Regular readers know full well that I don't mean this to utterly dismiss the possibility of supernatural deities or even alien visitations. I'm gonna assume y'all intelligent enough to get that.
Of course, it's not impossible to break groupthink, it's just very difficult* (at least if we insist on maintaining the same group dynamic, but more on that later). Individuals still do have those other judgement calls they can make, but it takes a huge amount of effort. This goes some way towards explaining how societies have come to accept radically different things as "normal" over the course of their histories.
* You'll always get some fraction of the population who are prepared to believe anything in any circumstances, regardless of how rational it is. This may be the result of the sheer complexity of the brain, and/or it may even be an evolutionary safety measure designed to prevent an excess of groupthink. And of course individuals will always weight the contributions of ideology/observation/trust/etc. differently to each other.
As far as arguing with people on the internet goes, we should remember that people primarily favour local information. If you tell them that 97% of scientists don't believe in UFOs, but 97% of their friends do, then they'll believe their friends. Which is in some ways very rational. Rather than believing the claims of people they've never interacted with, they're accepting the claims of people they have direct access to and already trust. You, a stranger, could be lying, but they "know" with certainty what their friends think. From their perspective, so many trusted people supporting the same conclusion is extremely powerful evidence that it can be taken as fact, and so dissenting views become (as we've seen) merely evidence that someone is biased or stupid or ignorant or whatever.
The problem is that direct experience has a fundamentally stronger effect on the brain than stating intellectual facts : people might well accept that most others don't share their views, but they know full well, on a very deep level, that most of their trusted friends do. So while they may consciously know about how unusual they are, they may not, unconsciously, believe it. Their implicit bias is that they are normal and everyone else is a weirdo. For them to become truly aware of how strange they are, they may have to actually experience the outside world, not just be told about it. Otherwise their limited interactions will have the same result as if most of humanity didn't exist. Once again, the differences between knowledge, belief and behaviour become very important.
All this should be enough to explain an apparent paradox. We know that people are persuaded if enough of their information sources tell them the same thing, yet some people are strongly contrarian, relishing rebellion against the Establishment : the appeal of being a supposedly oppressed minority is that it makes you feel special. A groupthinking filter bubble is one explanation - members are essentially unaware of the other viewpoints and their own sources are all chanting the same thing; knowing the outside world exists is very different from experiencing it. A lone individual going against the local consensus will be perceived very differently than the case of them being immersed in a different culture. Or worse, they may be indoctrinated in a bias spiral, so that they interact with the outside but view contrary evidence only as evidence of bias against them. They may be treated with hostility by the outside world but praised from within their own circles. Cognitive ease could also be a factor - if they haven't learned (or just aren't capable of applying) the mental tricks needed to process certain ideas, the ostensibly simpler ones may dominate their thinking.
Hence, establishing more connections between people has incredibly complex consequences : it can both encourage and suppress ideas, make some people more open but force others into shells. I don't think this is fully understood, but my guess is that for some, knowing that they're in a minority has the appeal of making them feel special, whilst feeling that everyone else around them thinks as they do emboldens their belief. Learning contradictory information can sometimes cause delight, but not usually if it goes against something we have a strong emotional stake in - even for scientists.
And moreover, if you tell someone in such a situation not to believe in UFOs you're not merely asking them to give up a single cherished belief, which would be bad enough. You're also asking them to admit that their trust in a large number of people has been mistaken, that their whole basis of evaluating information is flawed, that nothing they thought they could trust is correct. You're inevitably not just talking about the issue, but their faith in and friendships with other people and their own capacity for rationality. If you tell them that all of that is wrong, well, no wonder it rarely works. And maybe it also explains why we get so passionate about issues with little or no real evidence, e.g. religion : we don't have any facts to argue over, so instead what we're doing is shadow boxing against less tangible things like trust and ideologies. Those are far more potent ways to summon up the blood than arguing about statistical significance or reliability of evidence.
Of course, it doesn't necessarily follow that a strong consensus is only ever due to groupthink (though it often is). After all, UFOs aren't flying saucers and the Earth is round, not flat. But it's clearly possible for correct conclusions to be sustained in part by this mechanism. And while it can be very strong, it is not unbreakable. It's just that the mechanism needed to break it up is probably not mere persuasion, as we'll get back to shortly.
The final point I want to make is that it's often said (quite erroneously) that millions of people can't be wrong, while such people also tend to ask in all sincerity the rhetorical question, "well would you jump off a cliff if everyone else was doing it ?". The simulations described thus far are partly an attempt to illustrate that crowds can be both wiser and stupider than the sum of their parts : it depends strongly on both the people in the network but also the structure of the network itself. Which is why, perhaps, the competitive collaborations of modern science that I'm always banging on about should give us tremendous confidence in science as a whole, even though any individual institution can still fall victim to groupthink. The idea that established "dogma" always falls to plucky outsiders is pure nonsense.
Our default assumption, it seems, is to assume that people are always wiser in large groups. And that's generally true if people reach conclusions independently, especially if they all have different innate preferences and biases : if liberals/conservatives/wealthy/poor/clever/stupid/angry/calm people all conclude the same thing, it's probably just true. With any luck, the assorted recent crises in democracy in which narrow wins have elected lunatics should give everyone pause for thought that maybe this assumption isn't the best one, while at the same time, the idea that everyone can and should always think entirely independently is at least equally monumentally stupid as well. Persuasion is a very real thing, and if you succeed in persuading someone, you bear a measure - not the whole measure, obviously, but some - of responsibility for their actions.
At the most fundamental level, there's no difference between information people tell us and our direct observations; there is no foolproof guide to objectivity, only assumptions we can make which are relatively good or bad. So to the brain, in the right conditions, opinions can really be as good as facts. If all your friends jumped off a cliff, then chances are there'd be a very good reason for this (a great big dinosaur trying to eat them, say) so you would too. You might do it out of sheer instinctive herd-following, or you might see the reason for their strange behaviour and reach the exact same conclusion.
All of the possible ways to control information discussed so far appear to have major problems. Changing people's methodological reasoning is possible, but in practice only at a young age so would have no effect on the older generations (the ones who actually run things). Persuading people that ideas are wrong is possible, but very difficult to do on a large scale because the networks that lead to and sustain their ideas still exist. Major social reform is inherently difficult. But are there at least options which could be deployed in accepted theatres of debate that would be effective ? I believe there are.
Managing the flow
Managing the whole environment of a forest is very difficult. But what if there was already a favourable environment somewhere else ? Maybe our ideology only seems hard to shift because we're usually so deeply rooted in this vast support network. To alter the metaphor slightly, what if we transplanted a single tree - in this case referring to a person rather than an idea - to somewhere new ?
If the basic assumptions discussed so far are correct, then this should have a profound effect. In a new environment, people are forced to make entirely new connections and are praised and (socially) punished for entirely different actions. If that environment favours different views to what they were used to, we can expect at least some of their ideas to change (this is in part how rehabilitation works, after all, though obviously there are some major caveats to that). Not all though, because strongly contrasting viewpoints can persist in even extremely hostile environments. Indeed we could expect some individuals to hold even more strongly to some of their beliefs. But most, the theory suggests, ought to change many of their ideas as they become integrated into the new setting. It's those who don't integrate (which could be the fault of themselves or others) who we might expect to go the opposite way, who perhaps hold those ideas as such a core part of their identity that to relinquish them would be an act of genuine psychological self-harm.
What this absolutely definitely should not be interpreted as is the cruder notion that everyone hates each other because no-one's really listening to each other, and if we could only start listening then everything would be fine. It wouldn't. Hatred can arise precisely because we've been listening, because we've heard the other side's moral views and find them disgusting. Just throwing a bunch of diehard vegans together with a bunch of fox hunting enthusiasts is hardly likely to result in anything other than a bloodbath. So we shouldn't be the least bit surprised to learn that getting people of different political allegiances to communicate more directly on social media leads to increased polarisation, not less.
Flow rate is another related factor. We know that repetition has the advantage of cognitive ease. But as with the other parameters, flow rate may backfire above a certain threshold - if someone never shuts up about the same boring issue, we stop listening, and if it doesn't actually affect our own belief directly, we may well see them as biased. Too much confirmation can also be a sign of groupthink. So the ~30% threshold mentioned earlier might be true for the number of different sources telling us the same thing, rather than the amount of information on one topic coming from a single source. Yes, we can succeed in changing people's minds if we make more connections and give them enough different information that contradicts their view, but no, we definitely can't do this just by whacking 'em together or bombarding them with different arguments. And that's not even accounting for trust and all the other issues that affect persuasion between individual sources.
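To make that distinction concrete, here's a toy decision rule of my own devising (the numbers, the tuning-out cutoff and the rest are pure guesswork for illustration, not taken from any of the studies mentioned) : what counts is the fraction of distinct sources saying the same thing, while any single source that never shuts up gets tuned out altogether :

```python
from collections import Counter

def persuaded(messages, agree_fraction=0.3, nag_limit=10):
    """messages : a list of (source, supports_idea) pairs.
    Each source counts once, however often it repeats itself; a source that
    repeats itself too often is ignored entirely (the backfire-by-repetition case)."""
    stance = {}
    nags = Counter()
    for source, supports in messages:
        nags[source] += 1
        stance[source] = supports
    credible = {s: v for s, v in stance.items() if nags[s] <= nag_limit}
    if not credible:
        return False
    agreeing = sum(1 for v in credible.values() if v)
    return agreeing / len(credible) >= agree_fraction

# One source repeating itself fifty times achieves nothing...
print(persuaded([("Bob", True)] * 50))                      # False - Bob gets tuned out
# ...but five different friends each mentioning it once does the trick.
print(persuaded([(f"friend{i}", True) for i in range(5)]))  # True
```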
There's no doubt that echo chambers and epistemic bubbles can lead to polarisation, but there's more to it than that. You can't break such systems just by adding more connections any old how - that's stupid. Information flow rate has to be high enough to cause persuasion but not so high that it backfires, so adding more connections willy-nilly is by no means likely to get the flow rate just right. And remember the threshold-like properties of beliefs, that techniques that strengthen existing beliefs are different from the ones required to change stance. Using methods that reinforce group allegiance isn't necessarily a good way to gain new members (which is where most political activism seems to commit a grievous error).
While slightly more sophisticated variants of "wake up sheeple !" may help break people of their existing group identities, it turns out that the best way to get people on board isn't to avoid group mentality (humans are social whether you like it or not), but to exploit it : make them part of the group you want them to participate in. Again, you can't do this any old how. According to psychologists, you should bring people together who are of roughly equal social status, on an equal and fair footing, make them interdependent, encourage empathy, and most importantly of all, give them a common goal - ideally a common enemy or other threat.
Overcoming strong polarisation is undeniably difficult at the best of times, and one might think that discussing politics on social media is about as hopeless as getting Brian Blessed to teach a class of children who are frightened by loud noises. As one researcher put it, "politics makes us mean and dumb", which is hardly good news for rational debate. And yet, rather impressively, there have been experiments in which social media was used to get people to genuinely, successfully work together on a politically-charged issue.
How ? By simply avoiding any labels of political affiliation. Just as emotion-driven, identity-driven rhetoric can drive us apart and actually make us stupider, so evidence-driven, identity-free discussion can help us cooperate (and perhaps having a common task in that example also played a role). Well, maybe. I for one would like a lot more studies into the conditions that make us more rational, rather than concentrating on the ones that make us behave like faeces-hurling chimps as seems to be the standard.
Regardless of the precise circumstances for rational behaviour, it's safe to say that this is not easy in reality. Worse, the people in most need of reform are those who are also the most annoying to actually talk to and the most difficult to reach. Even negative emotions have an appeal, a physically different effect on the brain. But it does suggest that polarisation can be broken, perhaps through incremental steps, discussing common ground wherever it exists and avoiding the (identity) politics that makes us behave so irrationally. To have more sensible political discussions, it turns out the first step might be to stop talking about politics for a while.
Still, it does seem that building more connections is in principle a good way to prevent the flow of ideas. This has the added benefit of an ideological appeal, that you should fight speech with speech* rather than silencing your critics and presuming that you're right and they're wrong. What the network approaches reveal, though, is that it's not just the counter-ideas or even the people saying them that matter, but the whole structure of the network in which people are embedded. So alas, merely trying to build bridges with people and avoid the difficult topics until you've established trust may not be enough - at least, not if we want to persuade large numbers of people. That doesn't mean we shouldn't do it, just that if we want to have an effect on society we have to take a different approach. And simulations suggest there might be something we can aim for.
* I discuss the ethics of this here under the section "false symmetry".
Too few connections and ideas are stifled. Too many and they become too resilient and thinking becomes rigid. But just the right amount and setup gives you a small world network, optimised for the flow of ideas as complex contagions rather than simple viral propagation.
But hang on, didn't we set out to stop the flow of information, not aid it ? Well, yeah. The small world network is only of use to us if we employ all the other techniques discussed last time. It's just that in that situation we have a much better chance of our ideas actually influencing people : if we don't like something, we have a much fairer chance of stopping it than in the groupthink echo chamber situation. This setup should ensure that groupthink never even happens, so we shouldn't have to deal with rabid levels of hyperpartisan polarisation in the first place. Rearranging existing networks into such a configuration (so long as we remember the stance-changing persuasive techniques) might give us a much better chance of overcoming ideas previously thought to be intractable.
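For the curious, the usual way of generating such networks is the Watts-Strogatz construction : start from a regular ring lattice and randomly rewire a small fraction of the links. Here's a quick sketch (again my own illustration with networkx; the parameter values are just guesses to make the point) showing how a little rewiring gives short paths between everyone while keeping the local clustering that complex contagions rely on :

```python
import networkx as nx

n, k = 100, 6   # 100 people, each initially linked to their 6 nearest neighbours

lattice     = nx.connected_watts_strogatz_graph(n, k, p=0.0)   # no rewiring : rigid, ideas crawl
small_world = nx.connected_watts_strogatz_graph(n, k, p=0.1)   # a few long-range shortcuts
random_net  = nx.connected_watts_strogatz_graph(n, k, p=1.0)   # fully rewired : no local clusters

for name, graph in [("lattice", lattice), ("small world", small_world), ("random", random_net)]:
    print(f"{name:12s} clustering = {nx.average_clustering(graph):.2f}   "
          f"average path length = {nx.average_shortest_path_length(graph):.2f}")
```

Feed these graphs into the spread function from the earlier sketch and you'd expect the complex contagion to travel furthest on the middle one : plenty of tight-knit neighbourhoods to supply the multiple confirmations, plus the shortcuts to carry the idea between them.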
Isn't this all a bit complicated though ? Wouldn't it be much simpler to just sever the connections like in the left panel, preventing information from flowing at all rather than carefully managing it ? Chopping down an entire tree is hard work, but cutting off a branch or two is much easier.
Cut off the seeding branches
If we can't stop simple, obvious ideas or basic facts, what about more complex, less popular ideas ? We know that changing people's interpretive methods and/or ideological stance can work, but that's hard to do successfully. We also know the spread of information in general is highly complex, including the strength and stance of belief, trust, curiosity, flow rate per source, cognitive ease with which the idea is described, type of communication channels, number and type of connections, number and type of independent connected sources, and probably more besides. We also know that feedback between ideas and people will be highly complex, with acceptance of some ideas making it easier or harder for them to accept others, and trust in the source being linked to the ideas they're spreading.
This is not an easy problem. However, it might not be hopelessly complicated either.
In the first post we saw how curiosity and flow rate could work against each other. To an extent, restricting the flow rate of an idea induces greater curiosity because we naturally want things which are rarer. This provides us with a motivation to seek out the forbidden knowledge... but only to a point. Beyond a certain difficulty level, i.e. if the flow rate is sufficiently low, we decide that the reward isn't worth the effort and it becomes undesirable. We've also seen that there's a big difference between knowledge and belief : when we do uncover the hidden information, the fact that it was hidden doesn't necessarily make us more susceptible to believing it.
And even if we resent whoever enacted the restriction, that doesn't mean we're likely to believe whatever they're hiding either. We might not even have the wit to understand it, much less accept it - like digging for buried treasure only to find a document entitled Quantum Electrodynamics : Theoretical Applications in N-Dimensional Feline Space With Extra Difficult Equations : Volume V.
No-platforming efforts appear to be generally successful though. They attract a bit of attention but quickly become non-stories. Far from fanning a Streisand effect, it seems that giving these people too much airtime is a far better way to disseminate and normalise their views. Or, what's more likely to maximise flow : giving them a prime-time slot on a major T.V. channel, or not giving them a prime-time slot on a major T.V. channel ? Obviously it's the first one, because shouting from a soapbox just can't compete with arguing with David Dimbleby in front of millions of people*. Whether or not they'll be successful on T.V. depends on all the factors we've discussed here and in part two. Sometimes putting lunatics on television just exposes them as lunatics and makes them more despised rather than winning them support; exposure can, in the right conditions, actually quench an idea rather than fan the flames.
* Provided, that is, that no other comparable outlet replaces them. If that happens then there probably will be a strong Streisand effect - unity of behaviour is important.
The problem is one that Plato exposed at great length and effort : sophistry, that it's also possible to be a skilled rhetorician but suffer from a serious lack of critical judgement (or perhaps even basic intelligence). And we'd all like to think that we're capable of judging based solely on the evidence, but the reality is this is nonsense. Any casual glance through a history book will see people falling prey to monumentally stupid and destructive ideas. Some ideas are indeed so obviously wrong to the great majority of people that exposure only damages their credibility further, but alas, some dangerous and utterly wrong-headed ideas can be appealing. Give those ideas sufficient prominence and this can help legitimise rather than disgrace them.
The Streisand effect doesn't seem to happen with no-platforming because we already know the sort of thing people are going to say and there are usually multiple ways to access them anyway. But with ideas and pure information, that's different. Curiosity is likely to be inflamed : what is this forbidden fruit I must not eat ? What delights am I missing ? Any attempts at an outright ban had better be pretty thorough if curiosity isn't to overwhelm the low flow rate, and in the modern internet era even small leaks have the prospect of leading to a torrential flood.
As we've already seen, superinjunctions are surely the worst kind of ban. They forbid discussion about even the existence of the ban on a particular issue. In effect, they might as well be a locked treasure chest with the words TOP SECRET INFORMATION plastered on the front in big red letters. They not only incite curiosity, but actively and compellingly encourage belief in whatever's hidden because they only ban a very narrow topic, like saying "don't think about elephants". Or, as from last time, "you're not allowed to talk about Elton John's - sorry, Bob Smith's - trial" is a declaration that Bob Smith is on trial. Technically it doesn't tell you anything about his guilt, but bugger me* it's terribly suggestive. If Bob Smith thought he had a chance of proving his innocence, he'd surely want that to be seen in the light of day, wouldn't he ?
* Please don't.
Not necessarily, of course. But that does seem to be a default, natural reaction for an awful lot of people, and having learned such a titbit, they're not likely to keep it to themselves. This is far different from the case where a source just chooses never to mention something. They could be accused of bias, but it's far more difficult to whip up a scandal for bias than for suppression.
This is all true as far as it goes. But we should remember that hardly any ideas actually do go viral. Curiosity about what an idea is does not equate to promotion of an idea. For example, moral philosophy lectures can essentially grapple with any topic under the sun without anyone worrying that students will be radicalised. Fundamentally it makes no sense at all to prevent all kinds of discussion on any idea - rather, we're only talking about the active promotion of certain ideas, attempts to win converts, indoctrination and the like. Tell me that a group promoting sales of Teletubby merchandise was banned and I'll have a celebratory cup of tea, not be so outraged by the ban that I'll start a protest group campaigning for JUSTICE FOR PO. Banning medical textbooks which have wrong numerical quantities is unquestionably morally good; banning anyone from ever discussing those values is patently stupid.
So even simple factual statements are fraught with difficulty. But they reveal that sating curiosity can reduce spread. And for more complex ideas, this may be possible without giving the game away : you just need to give the gist of an idea. For example if I'm told a banned document contains the lyrics of all the Teletubbies greatest hits, then my curiosity is already way, way, way more than satisfied, because a little knowledge is quite enough, thank you.
But there are also other really important differences between factual ideas and subjective concepts. Facts can be independently verified, sometimes so easily that bans amount to little more than a declaration that this is something important that they don't want you to know : rarity of knowledge makes it more desirable.
Subjective concepts, on the other hand, are much more complicated. Facts are neither fringe nor mainstream, but their interpretations fall on a broader and much more complex spectrum. An idea held by few people may be harder to spread because its low acceptance rate causes us to label it as fringe, so anyone believing it is weird, and we're clearly not weird so we shouldn't believe that. Whereas if lots of people believe it, well, that's mainstream, socially acceptable, so there's less of a psychological barrier to acceptance. So while techniques that strengthen belief - praise and shame - might not be able to change someone's stance, they might be able to maintain it by keeping everyone in agreement. Sustaining hearts and minds, if you like.
The observational figures suggest that if you can keep acceptance down to ~10% of the population, it will be seen as fringe and disbelief becomes somewhat self-sustaining. It isn't necessary to suppress an idea entirely in order to control it. Or, to modify a famous quote :
You can fool some of the people all of the time, and all of the people some of the time. You can't fool all of the people all of the time, but that doesn't matter because you don't need to. You just need to fool most of the people most of the time. And that's enough, because then they go on to fool themselves.

Remember the local flow rate though : it's everyone we know and trust, not statistics of the whole population, that dominates what we subconsciously perceive as mainstream. Groupthink and filter bubbles are still a thing. And, as we've seen, while ideas can persist even in extremely hostile environments, those environments are definitely not good places for them to spread. Walk into a Welsh pub and declare in a loud, confident voice that the English are better at rugby and... well, you'll probably survive. You may even be let back into the pub again. But what you definitely won't do is win anyone over, no matter your prowess as an orator or how much people like you.

The fact that everyone will try and shut you up won't win you any kind of Streisand or backfire effect. You aren't going to win mass converts because most people hate the English rugby team - you'll be, at best, just the local weirdo. Although in this case there will be other effects external to the local network that keep your crazy idea seen as crazy, the point is that fringe ideas don't inevitably spread because of attempts to suppress them - sometimes, those attempts just succeed. Banning people from promoting the Flat Earth wouldn't drive more people to believe in it, because the ban could not by itself induce the idiocy needed to believe something so hilariously stupid.
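For the curious, here's a minimal sketch of how that sort of fringe-stays-fringe threshold can emerge, in Python. The numbers - the 40% conformity rule, the ten-contact friendship circles - are plucked out of thin air purely for illustration, and are emphatically not the observational figures mentioned above. Each imaginary person simply holds the idea only if enough of their own contacts held it at the previous step :

```python
import random

def simulate(seed_frac, n=2000, contacts=10, threshold=0.4, steps=30):
    """Toy conformity model : at each step a person holds the idea only if
    at least `threshold` of their contacts held it at the previous step."""
    random.seed(42)
    # Everyone gets a fixed set of randomly chosen contacts.
    friends = [random.sample(range(n), contacts) for _ in range(n)]
    believes = [random.random() < seed_frac for _ in range(n)]
    for _ in range(steps):
        snapshot = believes[:]
        believes = [sum(snapshot[f] for f in friends[i]) / contacts >= threshold
                    for i in range(n)]
    return sum(believes) / n

# A fringe starting share withers away; a mainstream one sweeps the network.
print("starting at 10% :", simulate(0.10))
print("starting at 40% :", simulate(0.40))
```

With these made-up numbers the fringe idea collapses towards nothing while the mainstream one takes over completely - and the difference comes entirely from the starting share, not from anyone being argued into or out of anything.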
One last thing that should be mentioned : restrictions on complex, esoteric knowledge are routine. Manufacturing processes and essential formulae have been kept secret for decades, despite the curiosity invoked and the money-making potential. Like the other successful cases, these don't totally forbid discussion - they just limit the context, i.e. who can discuss it. The sheer difficulty of obtaining (and in many cases understanding) the secrets of N-Dimensional Feline Space or whatever, coupled with the known punishments enacted for breaking the restrictions, often succeeds in suppressing highly valuable information - even if it would be strongly in the public interest to reveal it. The key here appears to be the flow rate, i.e. few people actually need to know the complete formula for Coca-Cola, whereas conspiracy theories of the more "internet nutjob" variety are ludicrous because of the sheer number of people they require to keep a secret. And often, curiosity is reduced because we have access (or know we'll have access eventually) to the main results, e.g. I can walk down the street and buy a Coke at an affordable price, whereas obtaining the formula would require years of training (both to become a chemist and to train as an awesome ninja spy) and runs the risk of enormous lawsuits.
All this suggests that the absolute ban of an idea is preposterous. But a ban specifically of the promotion of ideas, that's much more plausible. If a subjective idea is promoted by very few sources, and the ideological climate is poor, then it can indeed be stifled. We can effect some limited control over society. Not all that much, mind you, and nor would such control even be desirable. But some. Perhaps enough, the hope is, to cull the worst excesses without stifling free inquiry and criticism. Indeed, far from being the start of a slippery slope, a successful control has to recognise that forbidding promotion may not succeed at all if general discussion is also forbidden.
Chopping off the branches : part 2
Banning the promotion of ideas does seem plausible then, at least in the right circumstances. The only remaining question is therefore whether we should - in terms of effectiveness - also or instead ban specific people and other informational outlets. This approach would be far more of a scalpel than a blunt instrument.
Well, the obvious answer from what we've seen so far is very much yes. If ideas largely originate from a single source, then how successfully they spread is critically dependent on how many people that one source can reach. And certain platforms are vastly more effective at generating large audiences. If you target a specific person or media entity and say, "you're not allowed on Facebook any more", and if Facebook is their only outlet and if no-one else is saying the same thing, then you can reasonably expect success. You might make their existing followers more devoted, but even this may only be true of the real hardcore. Furthermore, if the news outlets are exploiting cognitive ease* through repetition, this will stop. By and large, you will effectively have performed what we discussed earlier : a transplant. You've left their followers with no choice but to seek out other information sources which - by necessity - are saying different things. And that might - might ! - just break their groupthink.
* Or at least something analogous, as we'll see in a moment.
It's not quite that simple, of course. It will only work that well if those rather specific conditions are met. But those conditions might not be all that unusual. It turns out that, on the internet at least, real news tends to spread from multiple, independent, centralised sources, whereas fake news - the genuinely batshit crazy stuff - gets most of its hits via reshares from person to person. Which makes intuitive sense : everyone's got access to the mainstream media, so everyone's viewing those sources independently of each other, whereas fake news inherently relies on going viral because it has far fewer major outlets. As well it should : things which actually occur can be seen by everyone, whereas lies tend to be unique to whoever came up with them.
Assuming that the major media outlets are all basically singing the same truthful tune (in very broad terms), then banning only one of them is a singularly terrible idea. The reported information will still be available from multiple other sources which people will flock to in protest, and likely become more receptive to it. In contrast, banning a fake news channel has a far higher chance to be successful, since they all tell different stories - cut off the head of the snake, and the body doesn't have any good reason to suddenly decide that frogs give gay people cancer or whatever they're blabbing about lately.
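As a cartoon of why that asymmetry matters, here's another little Python sketch - again with entirely invented numbers - comparing a story that several outlets broadcast independently with one that only propagates by person-to-person reshares :

```python
import random

random.seed(7)
POPULATION = 10_000

def broadcast_reach(n_outlets, audience=4_000):
    """Several outlets independently push the same story to big audiences."""
    reached = set()
    for _ in range(n_outlets):
        reached.update(random.sample(range(POPULATION), audience))
    return len(reached)

def reshare_reach(seeds, contacts=30, p_share=0.05):
    """A story that only spreads when individuals pass it on to their contacts."""
    reached, frontier = set(seeds), list(seeds)
    while frontier:
        person = frontier.pop()
        for friend in random.sample(range(POPULATION), contacts):
            if friend not in reached and random.random() < p_share:
                reached.add(friend)
                frontier.append(friend)
    return len(reached)

print("five outlets :", broadcast_reach(5), "  four outlets :", broadcast_reach(4))
print("ten seed accounts :", reshare_reach(range(10)),
      "  no seed accounts :", reshare_reach([]))
```

Silencing one of five mainstream outlets barely dents the first pair of numbers; silencing the handful of accounts that seed the reshare cascade wipes out the second entirely. It's only a sketch - real audiences overlap in far messier ways - but that's the logic of cutting off the head of the snake.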
All of this gave me the weird idea that maybe InfoWars has something in common with observational astronomy. After all, science-class telescopes are hard to access, so the community of users is very small, and some observations can take so long to perform that they're never repeated. So as with fake news, there can indeed be only a single source generating unique information. But then, major political figures are also single-point information sources, and like astronomy press releases, statements by politicians tend to reach a whole variety of major outlets. So again, the spread of basically truthful information (I mean in the sense of reporting what people said and did, not necessarily whether what they said was actually true) is going to be very different from the spread of lies. That doesn't mean that lies can't spread more effectively when they take hold, just that they do so a bit differently.
Creating something that goes viral, i.e. has appeal to psychological universals, isn't easy. But it's undoubtedly easier to construct a lie that does this than discover a truth with the same properties, which is why lies often spread faster than truth on social media. It's nothing to do with "truthiness" per se so much as novelty and appeal. But even while lies can be spread more easily, their unique, individual nature means they may actually be easier to suppress so long as that is done at source. Fighting them after they've spread is a massively harder task precisely because they've been designed to be convincing, in a way that's seldom possible with the truth.
Note that this applies to any sort of complex, rare knowledge. Simple things that anyone can observe (and/or infer) are essentially impossible to suppress except perhaps through education*. But a ban on any complex ideas which are unintuitive and hard to form may succeed - here radio astronomy is genuinely more similar to InfoWars than the activities of major politicians. If someone were to take my academic papers offline, burn all the copies and destroy my notes, it'd take me or my collaborators months if not years to recreate them**. And if they abducted us too, it could be much longer before anyone tried the same thing. Sadly, in some ways both lies and truths are remarkably similar.
* Probably only in a limited way though, since humans do seem to share some hard-wired tendencies.
** Because we'd be embarking on a campaign of bloody vengeance.
But there are important differences too. A common objection is that if you ban fake news, you lose the ability to track who may be falling victim to it - it goes underground. However, whereas if you banned some kinds of research some of it most definitely would do just that (the kind that doesn't require access to major technological facilities at least), that's not how fake news works. It explicitly relies on reshares to propagate : it needs as much exposure as possible. Going underground wouldn't help it in the slightest, and there's a very good reason for that. The thing is, most fake news isn't driven by a belief in "alternative facts" at all; its goal* is not a misguided attempt at enlightenment of the kind made by people who genuinely believe in Bigfoot. Rather, it is an attempt to confuse and sow mistrust - not to convince the viewers that anything in particular is true so much as it is to persuade them that they can't trust particular sources (or worse, that they can't trust particular claims and therefore any source making them). It aims to replace dispassionate facts, those pesky, emotionless bits of data that can't be bent, with emotion-driven ideology, which can. It thrives on the very polarisation that it's designed to promote, as well as the erosion of both critical and analytical thinking that it exacerbates. Removing it is likely, in the long term, to do far more good than harm.
* For a really weird counter-example, one which is so weird I'm skeptical of the claim, see this.
...provided, that is, that it's removed wholesale, not piecemeal. Again, a single platform banning it may only drive traffic flow to the others, strengthen existing belief, and excite the curiosity of the previously uninterested. By contrast if such content is kept very scarce indeed, then its fringe status becomes self-maintaining. Beyond that threshold, the rarer it is, the lower the flow rate and the fewer people it can ensnare.
For the sake of cognitive ease, I repeat : fake news doesn't (just) happen because some nutter is genuinely convinced of the healing power of dandruff or whatnot. It's deliberate propaganda, but persuasion is not its main objective. Only a very small fraction of viewers will actually be convinced by any particular article, but their small numbers don't matter. Those unfortunate few will become vocal, prominent figures, so instead of having to debate whether we should increase taxes we end up discussing whether the Loch Ness Monster was a Nazi. Yay. The main effect this has on actual persuasion is that, by positing absurd, extreme views as legitimate, it may shift the Overton window. This exploits the relative nature of our default comparisons : if people start to apparently believe in Nazi lake monsters, then discussing whether we should make Mexico build a wall looks practically sensible.
But the main point is quite different : if you can't convince, then confuse. If you get people to doubt everything, then they believe nothing. Exposure to a little bit of fake news does no-one any harm, and might even make them more skeptical and cautious (this is the basis of the "inoculation" approach to combating it). Again though, everything has a threshold. Too much misinformation doesn't make people super-duper critical thinkers at all - rather, they end up basing everything on ideology and emotion. That's why such tactics can get away with almost comical levels of farcical claims, because as long as they reach enough people, they're bound to fool some of them. It's much like why spam emails are so badly-worded : they deliberately try and avoid skeptics in order to reach the more gullible.
Note that this is quite unlike the tactics of ordinary politicians we're more familiar with - they want to actually persuade us, and know that telling enormous numbers of unconvincing lies won't work. Consistency is a prerequisite to persuasion. The goal with internet-based fake news, on the other hand, is saturation bombing, to try and overwhelm the flow of credible information at least for whoever might be vulnerable (again, the relevant flow of information is local). It doesn't have to reach everybody to cause wider problems. It doesn't even matter if a fake news source is self-contradictory. The fake news outlet can keep going so long as it has sufficient resources - a politician trying the same thing will, unless things have gone horribly, horribly wrong, be voted out of office, or at least remain a minor figure on the sidelines.
Unless things have gone horribly, horribly wrong, that is.
As for those who aren't so convinced, they don't escape unscathed either. Knowledge of what's being done means they can't be certain if someone is arguing sincerely or deliberately trolling. This explains why anyone would bother, for instance, writing negative comments about the latest Star Wars movie. It's low-hanging fruit, easy to play on emotions, and exposure of what's being done only makes people trust the internet even less.
And there's another important subtlety. If you're not convinced by fake news, you might still believe that other social groups are. What it's doing here is handing people ready-made straw men : arguments which are much more absurd than anything that someone really believes (yes, some people believe in higher taxation; no, no-one believes the rich should be hunted down and eaten*). The fact that they're easy to debunk actually becomes a very powerful asset. Instead of discussing the details of opposing but equally sophisticated (and boring) fiscal policies, we end up seeing the other side as believing in exciting but incredibly dumb things like flying squirrel monsters from Uranus, or whatever. It wrecks the credibility of the other side and makes them appear far, far stupider than most of them really are. The other side become self-evidently pantomime villains and therefore anyone agreeing with them is obviously stupid... again, as we've seen previously, this is a route to crude, absolute thinking, the polarisation the fake news creates also driving its own spread.
* Right... ?
It doesn't always work, of course. It doesn't have to. It just has to do some damage. Fake news isn't about encouraging rational debate, it's about shutting it down.
The Exceptions That Prove The Rule, Or At Least Provide Interesting Evidence For It
Hopefully it's apparent from all this that not all bans will work in all situations - such a degree of control is impossible and undesirable. Cutting connections can, sometimes, actually lead to a more linear, viral flow of information. The network structure of followers as well as instigators is important, as is the strength of their induced belief - dealing with a bunch of still-devout acolytes will be different from dealing with the more casual sort; if we ban a particular outlet for saying certain things but fail to ban any replacement that emerges, well, that clearly won't work. And if we cut the connections between groupthinking collectives, but don't do anything about their internal connections, we'll still be left with bunches of groupthinking morons, floating around in a void of blissful stupidity.
In some situations, especially where multiple sources are saying the same thing, adding more connections - fighting speech with speech - may indeed be a much better option than regulation. Here's a particularly nice counter-example : revenge. Violence, like ideas, can arise due to people making independent observations and concluding that they must act violently for a host of reasons, some much better than others. But it can also happen in response to violence. What research in social work has found is that if someone shows signs of potentially responding violently due to provocation, that violence can often be prevented by "interrupters", who essentially talk them out of it. This isn't easy - it requires trust and credibility in the eyes of the would-be criminals, careful timing, and large numbers of social workers. It's not perfect either. But it does work.
For all that belief is not behaviour, the similarities to the spread of ideas are striking. By adding in more connections - not just any connections, but the right type of connections at the right time - they overwhelm the belief that violence is necessary. Regulation to prevent violence already exists, but this approach of adding more connections, of treating it like an epidemic instead of being solely due to personal faults, is more successful.
We might be able to apply similar tactics to ideas. While fake news dies if it goes underground, other ideas do not. So in some cases, the ability to monitor could indeed be an advantage - provided we don't limit ourselves to monitoring. If we detect someone going over the edge of crazy and successfully intervene, they'll be in a much better position than anyone else to disrupt their former groupthink bubble.
While this case does illustrate that sometimes the answer can be more speech rather than less*, it also reveals something else. Sometimes people claim that things like fake news or certain so-called politicians are symptoms, not causes. We see here that just like with a disease, sometimes a symptom can also itself be a direct cause. After all, the very reason viruses give us symptoms is to spread themselves around. If you had a perfect cure for the common cold but couldn't give it to everyone at once, you'd never eliminate the cold virus because it would continue to re-infect people. Similarly, even if you could devise a perfect way to disinfect victims of propaganda, you'd still have to stop it in order to prevent both its continued spread and the damage it does that makes victims harder to treat. Cures, treatments and vaccinations are different and as such they must be applied differently.
* Interestingly, shouting at people so loudly they give up trying to speak can sometimes be more successful than carefully reasoning with them.
It's a bit like how social prominence can be self-sustaining, how behaviour sometimes causes belief rather than the other way around. Why is the Queen so popular ? Largely because the media chose to make her popular. Once you reach a certain status level, the authority and credibility that bestows becomes very hard to erode. Popularity becomes a symptom and cause of itself. So whether a cause is proximate or ultimate, you still have to remove it.
By this point I hope it's clear that the same regulations can have different effects in different places, depending on the prevailing culture and opinions. Too much oppression causes rebellion, but unrestrained freedom doesn't work either. When we talk about crime in general, no-one questions that punishment can be an effective deterrent - even for war crimes that seem to have been an "accepted" (I use that term very loosely) part of human nature since basically forever (though it doesn't work for psychopaths, and not always for more typical people either). Methods of reform are much more controversial, but the basic premise that rules can change behaviour in the intended way is essentially taken for granted. It's only when you push those rules too far, to try and change behaviour far beyond the accepted norms, that they break down.
Let's take drugs. Wait ! I mean, let's consider drugs, obviously. It's an open secret in the Western world that cannabis is basically harmless. Hence, responding to cannabis users in the same way as heroin addicts seems to be causing a slow but gathering backlash; there's less of a moral leap from an illegal but harmless activity to the seedy criminal underworld. Recently, the U.K. banned substances that were previously legal : this appears to have resulted in a drop in use, but now users are harder to monitor because the business is off the grid. Portugal, on the other hand, experienced a significant reduction in drug-related crime through less restrictive, more progressive policies. More draconian anti-gun policies, meanwhile, are overwhelmingly successful.
Clearly, the fact that over-regulation can indeed backfire does not mean we should have no regulation at all. Thresholds again : the effect of a totalitarian ban on discussion of any kind is likely to be utterly different to lighter restrictions; relative comparisons also mean that we'll always perceive problems even if things have profoundly improved. Anyone expecting a perfect solution is quite dangerously mad, and simply doesn't understand people at all.
We also have to recognise that each system has connections and structures that are unique to itself, so that the kind of behaviour we might (have to) tolerate on the internet might be different from what we might be prepared to accept over dinner in a fancy Parisian restaurant. Social media is a very different beast from traditional communication methods : it allows us to meet new people on an international scale, hold group or individual discussions, keep our thoughts private or public, edit what we said retroactively, select what we listen to and what we filter out (in many cases there are also filtering algorithms at work), communicate with images, text, video and emoticons, and choose if and how what we share gets propagated further. It is not remotely like email or phone calls or television or writing letters or town hall meetings. It's something different. It offers the huge complexity of meeting new people in a completely different way than we're used to, with radically different communication tools combined with the ability to get organised.
Some of the problems of the internet might well stem from its increasing centralisation. Yet the bygone era of geocities and WinMX chatrooms wasn't some Golden Age of Freedom - there were just as many crazies around back then as there are now. This has led to the suggestion that the crazier aspects of human nature are just that, a part of our nature, the ideological/analytical "soil", if you will. You'll always get some radically crazy ideas popping up, no matter what you do. They only seem more prevalent, perhaps, because social media has been a much more effective tool for them to get organised. Which, I suggest, doesn't actually assist the real crazies all that much (because they're too busy wiping the drool off their keyboards) but it does assist those who have the kind of appealing-but-wrong ideas that ultimately prove so destructive. Those ideas may indeed be genuinely fostered by the internet.
Not Just The Thought Police
My final point is that there are different approaches to regulation. Even within the strictest approach of legislated bans, we've seen that it would be pretty mad to try and ban all forms of discussion. It's the successful promotion of ideas we're trying to prevent, and we can't very well do that if we prevent people from explaining why they're bad ideas in the first place (the backfire effect notwithstanding). Restrictions on raw factual information are occasionally possible and necessary, but rarely (if ever !) desirable if we're trying to shape thinking in a positive way that promotes free inquiry and critical discussion. No, in this context it's only the promotion of subjective interpretations where there's any real prospect or desire for success. Granted, "promotion" is itself subjective and that's going to make some people uncomfortable. But really it's the same with enormous areas of legal judgements. We don't have a perfectly objective basis for determining reality, so the question, "who gets to decide ?" isn't actually a sensible issue at all - outside the philosophy class, at any rate.
Yet regulation doesn't have to be as blunt* as that. The UK's laws, which are currently still moderated by the European Convention on Human Rights, are complicated to say the least - they go a step beyond the subjective idea of "promotion" to include the even harder to judge "intent". Perhaps more interesting are the provisions in the broadcasting code (different rules apply - or more usually don't apply - to newspapers, just to make things even more confusing) that cover due impartiality, accuracy and representation. Representation means we get, for example, people of diametrically opposed political views appearing on the same media outlets. Sure, T.V. and radio channels can have biases**, but they aren't hyper-polarised. There's no equivalent of Fox "News" on UK TV (look instead to the tabloids for that).
* I've decided to completely omit the issue of punishments, of which many options are possible.
** And even host diametrically opposed hosts, such as LBC radio which features both James O'Brien (an incredibly angry yet perpetually bored hardline Remainer) and Nigel Farage (a toad that speaks).
An even more subtle approach would be not to regulate what people say, but the business model of media outlets. Ideologies are one thing, but goals and incentives are another. Organising these systems thus need not even raise the moral dilemmas of particular topics, people, or speech regulations in general : for example, discussing whether institutions should be public or private, their oversight, taxation, the number of outlets an individual, company or other agency can own, the amount and type of advertising revenue they're allowed (for why this matters, see "clickbait"). There are, I think, cases where we can rationally say, "the promotion of this material should be prohibited", but in general, regulating specific topics and people is something to discourage if not avoid. After all, as we've seen, the most effective way to prevent an idea from spreading is for no-one ever to raise it at all.
Summary and Conclusions
It's horribly complicated. Please don't go away thinking that I'm fully convinced of any of this. Rather, think of it as an attempt at an investigation - some of it will be wrong, but hopefully some of it will be right as well. As it stands, we've seen that yes, there are some cases where regulation is a terrible idea and others where it merely stems the tide. But in some particular cases, allowing ideas free rein is far more likely to attract widespread support than any amount of restrictions.
In the first post I tried to generalise the conditions for successful persuasion by looking at the extreme cases. Let's see if we can do the same for restricting the flow of information. According to what we think we know, it follows that a ban will be in general effective if :
This is not quite the same as saying an idea will backfire if it contradicts what we already know, which we've already looked at. To put it another way, suppose I give you a multiple choice question, say, "what's the main driver of galaxy evolution ?". To make it easier, I get four different people to suggest answers. The first three are lab-coat wearing scientists while the fourth is wearing a black turtleneck* and smoking a cigarette on one of those long cigarette holders that only very pretentious artists and puppy-murderers use. So is the answer :
* Obviously super-evil, then.
- A) Feedback from stellar winds
- B) Collisions with other galaxies
- C) Supernovae explosions
- D) Pope Gregory IX's cat vomiting everywhere
Then the odd one out is obvious even to someone who knows nothing much about galaxies at all. So here's my guess as to why anecdotally the backfire effect seems easy to induce by merely stating the issue but this doesn't always happen under controlled conditions. In real life, we have multiple sources competing for trust, multiple answers to choose from, and some issues are discussed frequently (cognitive ease again !) while others rarely get a mention. It's this highly complex mix, I think, that explains why the backfire effect sometimes appears very easily and sometimes not at all. So yes, it does help to use the persuasive techniques discussed last time - but complex network effects can override them.
UFOs provide a good example. Unless you happen to be on the scene when someone claims they're seeing a flying saucer, there's not much you can do by direct observation to test if you agree with them. So if lots of people you trust start telling you that they saw a flying saucer, or even if they just keep telling you that they believe in them, you're exclusively reliant on them because you have no direct observations of your own. In the right circumstances, it can then be very easy to fall into the UFO network. But it's even easier to fall into the UFO-doubting network, because that's much larger (see also organised religions*). In both cases, belief is sustained by groupthink : there are so many connections between group members that they all believe the same thing, and any new idea becomes very hard to establish. This is perhaps the case of too many connections both helping and hindering the propagation of ideas par excellence : like-minded individuals have bonded together so strongly that they're barely able to think for themselves any more. The belief might not be caused by groupthink, but it is sustained by it.
* Regular readers know full well that I don't mean this to utterly dismiss the possibility of supernatural deities or even alien visitations. I'm gonna assume y'all intelligent enough to get that.
Of course, it's not impossible to break groupthink, it's just very difficult* (at least if we insist on maintaining the same group dynamic, but more on that later). Individuals still do have those other judgement calls they can make, but it takes a huge amount of effort. This goes some way towards explaining how societies have come to accept radically different things as "normal" over the course of their histories.
* You'll always get some fraction of the population who are prepared to believe anything in any circumstances, regardless of how rational it is. This may be the result of the sheer complexity of the brain, and/or it may even be an evolutionary safety measure designed to prevent an excess of groupthink. And of course individuals will always weight the contributions of ideology/observation/trust/etc. differently to each other.
As far as arguing with people on the internet goes, we should remember that people primarily favour local information. If you tell them that 97% of scientists don't believe in UFOs, but 97% of their friends do, then they'll believe their friends. Which is in some ways very rational. Rather than believing the claims of people they've never interacted with, they're accepting the claims of people they have direct access to and already trust. You, a stranger, could be lying, but they "know" with certainty what their friends think. From their perspective, so many trusted people supporting the same conclusion is extremely powerful evidence that it can be taken as fact, and so dissenting views become (as we've seen) merely evidence that someone is biased or stupid or ignorant or whatever.
The problem is that direct experience has a fundamentally stronger effect on the brain than stating intellectual facts : people might well accept that most others don't share their views, but they know full well, on a very deep level, that most of their trusted friends do. So while they may consciously know how unusual they are, they may not, unconsciously, believe it. Their implicit bias is that they are normal and everyone else is a weirdo. For them to become truly aware of how strange they are, they may have to actually experience the outside world, not just be told about it. Otherwise their limited interactions will have the same result as if most of humanity didn't exist. Once again, the differences between knowledge, belief and behaviour become very important.
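If that sounds a bit abstract, a toy calculation shows just how easily "everyone I know agrees with me" coexists with being a tiny minority. Assume, purely for illustration, a population in which 10% believe in flying saucers and everyone draws most of their friends from their own camp :

```python
import random

random.seed(11)
BELIEVERS, SCEPTICS = range(0, 100), range(100, 1000)   # believers are 10% overall

def friends(person, n_friends=10, homophily=0.9):
    """Each friend is drawn from the person's own camp with high probability."""
    own = BELIEVERS if person in BELIEVERS else SCEPTICS
    other = SCEPTICS if person in BELIEVERS else BELIEVERS
    return [random.choice(own if random.random() < homophily else other)
            for _ in range(n_friends)]

local_share = []
for p in BELIEVERS:
    fs = friends(p)
    local_share.append(sum(f in BELIEVERS for f in fs) / len(fs))

print("believers in the whole population :", len(BELIEVERS) / 1000)
print("believers a typical believer sees among their friends :",
      sum(local_share) / len(local_share))
```

Globally they're at 10%; locally they see something close to 90%. Tell such a person that 97% of scientists disagree with them and, from where they're sitting, you're the odd one out.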
Hence, establishing more connections between people has incredibly complex consequences : it can both encourage and suppress ideas, make some people more open but force others into shells. I don't think this is fully understood, but my guess is that for some, knowing that they're in a minority has the appeal of making them feel special, whilst feeling that everyone else around them thinks as they do emboldens their belief. Learning contradictory information can sometimes cause delight, but not usually if it goes against something we have a strong emotional stake in - even for scientists.
And moreover, if you tell someone in such a situation not to believe in UFOs you're not merely asking them to give up a single cherished belief, which would be bad enough. You're also asking them to admit that their trust in a large number of people has been mistaken, that their whole basis of evaluating information is flawed, that nothing they thought they could trust is correct. You're inevitably not just talking about the issue, but their faith in and friendships with other people and their own capacity for rationality. If you tell them that all of that is wrong, well, no wonder it rarely works. And maybe it also explains why we get so passionate about issues with little or no real evidence, e.g. religion : we don't have any facts to argue over, so instead what we're doing is shadow boxing against less tangible things like trust and ideologies. Those are far more potent ways to summon up the blood than arguing about statistical significance or reliability of evidence.
Of course, it doesn't necessarily follow that a strong consensus is only ever due to groupthink (though it often is). After all, UFOs aren't flying saucers and the Earth is round, not flat. But it's clearly possible for correct conclusions to be sustained in part by this mechanism. And while it can be very strong, it is not unbreakable. It's just that the mechanism needed to break it up is probably not mere persuasion, as we'll get back to shortly.
The final point I want to make is that it's often said (quite erroneously) that millions of people can't be wrong, while such people also tend to ask in all sincerity the rhetorical question, "well would you jump off a cliff if everyone else was doing it ?". The simulations described thus far are partly an attempt to illustrate that crowds can be both wiser and stupider than the sum of their parts : it depends strongly on both the people in the network but also the structure of the network itself. Which is why, perhaps, the competitive collaborations of modern science that I'm always banging on about should give us tremendous confidence in science as a whole, even though any individual institution can still fall victim to groupthink. The idea that established "dogma" always falls to plucky outsiders is pure nonsense.
Group decision making in action.
At the most fundamental level, there's no difference between information people tell us and our direct observations; there is no foolproof guide to objectivity, only assumptions we can make which are relatively good or bad. So to the brain, in the right conditions, opinions can really be as good as facts. If all your friends jumped off a cliff, then chances are there'd be a very good reason to jump.
All of the possible ways to control information discussed so far appear to have major problems. Changing people's methodological reasoning is possible, but in practice only at a young age so would have no effect on the older generations (the ones who actually run things). Persuading people that ideas are wrong is possible, but very difficult to do on a large scale because the networks that lead to and sustain their ideas still exist. Major social reform is inherently difficult. But are there at least options which could be deployed in accepted theatres of debate that would be effective ? I believe there are.
Managing the flow
Unfortunately trees don't change species depending on their environment, but they do undergo dramatic changes.
If the basic assumptions discussed so far are correct, then transplanting someone into an entirely new environment should have a profound effect. In a new environment, people are forced to make entirely new connections and are praised and (socially) punished for entirely different actions. If that environment favours different views to what they were used to, we can expect at least some of their ideas to change (this is in part how rehabilitation works, after all, though obviously there are some major caveats to that). Not all though, because strongly contrasting viewpoints can persist in even extremely hostile environments. Indeed we could expect some individuals to hold even more strongly to some of their beliefs. But most, the theory suggests, ought to change many of their ideas as they become integrated into the new setting. It's those who don't integrate (which could be the fault of themselves or others) who we might expect to go the opposite way, who perhaps hold those ideas as such a core part of their identity that to relinquish them would be an act of genuine psychological self-harm.
What this absolutely definitely should not be interpreted as is the cruder notion that everyone hates each other because no-one's really listening to each other, and if we could only start listening then everything would be fine. It wouldn't. Hatred can arise precisely because we've been listening, because we've heard the other side's moral views and find them disgusting. Just throwing a bunch of diehard vegans together with a bunch of fox hunting enthusiasts is hardly likely to result in anything other than a bloodbath. So we shouldn't be the least bit surprised to learn that getting people of different political allegiances to communicate more directly on social media leads to increased polarisation, not less.
Flow rate is another related factor. We know that repetition has the advantage of cognitive ease. But as with the other parameters, flow rate may backfire above a certain threshold - if someone never shuts up about the same boring issue, we stop listening, and if it doesn't actually affect our own belief directly, we may well see them as biased. Too much confirmation can also be a sign of groupthink. So the ~30% threshold mentioned earlier might be true for the number of different sources telling us the same thing, rather than the amount of information on one topic coming from a single source. Yes, we can succeed in changing people's minds if we make more connections and give them enough different information that contradicts their view, but no, we definitely can't do this just by whacking 'em together or bombarding them with different arguments. And that's not even accounting for trust and all the other issues that affect persuasion between individual sources.
There's no doubt that echo chambers and epistemic bubbles can lead to polarisation, but there's more to it than that. You can't break such systems just by adding more connections any old how - that's stupid. Information flow rate has to be high enough to cause persuasion but not so high that it backfires, so adding more connections willy-nilly is by no means likely to get the flow rate just right. And remember the threshold-like properties of beliefs, that techniques that strengthen existing beliefs are different from the ones required to change stance. Using methods that reinforce group allegiance isn't necessarily a good way to gain new members (which is where most political activism seems to commit a grievous error).
While slightly more sophisticated variants of "wake up sheeple !" may help break people of their existing group identities, it turns out that the best way to get people on board isn't to avoid group mentality (humans are social whether you like it or not), but to exploit it : make them part of the group you want them to participate in. Again, you can't do this any old how. According to psychologists, you should bring people together who are of roughly equal social status, on an equal and fair footing, make them interdependent, encourage empathy and, most importantly of all, give them a common goal - ideally a common enemy or other threat.
Overcoming strong polarisation is undeniably difficult at the best of times, and one might think that discussing politics on social media is about as hopeless as getting Brian Blessed to teach a class of children who are frightened by loud noises. As one researcher put it, "politics makes us mean and dumb", which is hardly good news for rational debate. And yet, rather impressively, there have been experiments in which social media was used to get people to genuinely, successfully work together on a politically-charged issue.
How ? By simply avoiding any labels of political affiliation. Just as emotion-driven, identity-driven rhetoric can drive us apart and actually make us stupider, so evidence-driven, identity-free discussion can help us cooperate (and perhaps having a common task in that example also played a role). Well, maybe. I for one would like a lot more studies into the conditions that make us more rational, rather than concentrating on the ones that make us behave like faeces-hurling chimps as seems to be the standard.
Regardless of the precise circumstances for rational behaviour, it's safe to say that this is not easy in reality. Worse, the people in most need of reform are those who are also the most annoying to actually talk to and the most difficult to reach. Even negative emotions have an appeal, a physically different effect on the brain. But it does suggest that polarisation can be broken, perhaps through incremental steps, discussing common ground wherever it exists and avoiding the (identity) politics that makes us behave so irrationally. To have more sensible political discussions, it turns out the first step might be to stop talking about politics for a while.
Still, it does seem that building more connections is in principle a good way to prevent the flow of ideas. This has the added benefit of an ideological appeal, that you should fight speech with speech* rather than silencing your critics and presuming that you're right and they're wrong. What the network approaches reveal, though, is that it's not just the counter-ideas or even the people saying them that matter, but the whole structure of the network in which people are embedded. So alas, merely trying to build bridges with people and avoid the difficult topics until you've established trust may not be enough - at least, not if we want to persuade large numbers of people. That doesn't mean we shouldn't do it, just that if we want to have an effect on society we have to take a different approach. And simulations suggest there might be something we can aim for.
* I discuss the ethics of this here under the section "false symmetry".
Too few connections and ideas are stifled. Too many and they become too resilient and thinking becomes rigid. But just the right amount and setup gives you a small world network, optimised for the flow of ideas as complex contagions rather than simple viral propagation.
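For the curious, here's roughly what that distinction looks like as a toy model, using the networkx library. The numbers and the "needs two adopting neighbours" rule are invented purely for illustration - this is a sketch of the idea, not the simulations themselves. We seed a tight little cluster on a small world network and compare an idea that spreads from any single contact with one that needs social reinforcement :

```python
import networkx as nx

def spread(need, n=300, k=6, rewire_p=0.1, seed_size=6):
    """Contagion on a small-world network : a node adopts once at least `need`
    of its neighbours have adopted. need=1 is simple viral spread; need=2
    means the idea has to be socially reinforced before it's taken up."""
    G = nx.watts_strogatz_graph(n, k, rewire_p, seed=3)
    adopted = set(range(seed_size))          # a small, tightly-knit seed cluster
    steps = 0
    while True:
        new = {v for v in G.nodes if v not in adopted
               and sum(u in adopted for u in G.neighbors(v)) >= need}
        if not new:
            break
        adopted |= new
        steps += 1
    return len(adopted) / n, steps

for need in (1, 2):
    reach, steps = spread(need)
    print(f"neighbours needed = {need} : reached {reach:.0%} in {steps} steps")
```

With these particular numbers the simple contagion should saturate the network in a handful of steps by hopping through the shortcut links, while the reinforced version has to grind its way around the clustered local ties and takes an order of magnitude longer - same network, very different flow. That's the flavour of difference between simple and complex contagion that the "right amount of connections" is being tuned against.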
But hang on, didn't we set out to stop the flow of information, not aid it ? Well, yeah. The small world network is only of use to us if we employ all the other techniques discussed last time. It's just that in that situation we have a much better chance of our ideas actually influencing people : if we don't like something, we have a much fairer chance of stopping it than in the groupthink echo chamber situation. This setup should ensure that groupthink never even happens, so we shouldn't have to deal with rabid levels of hyperpartisan polarisation in the first place. Rearranging existing networks into such a configuration (so long as we remember the stance-changing persuasive techniques) might give us a much better chance of overcoming ideas previously thought to be intractable.
Isn't this all a bit complicated though ? Wouldn't it be much simpler to just sever the connections like in the left panel, preventing information from flowing at all rather than carefully managing it ? Chopping down an entire tree is hard work, but cutting off a branch or two is much easier.
Cut off the seeding branches
If we can't stop simple, obvious ideas or basic facts, what about more complex, less popular ideas ? We know that changing people's interpretive methods and/or ideological stance can work, but that's hard to do successfully. We also know the spread of information in general is highly complex, including the strength and stance of belief, trust, curiosity, flow rate per source, cognitive ease with which the idea is described, type of communication channels, number and type of connections, number and type of independent connected sources, and probably more besides. We also know that feedback between ideas and people will be highly complex, with acceptance of some ideas making it easier or harder for them to accept others, and trust in the source being linked to the ideas they're spreading.
This is not an easy problem. However, it might not be hopelessly complicated either.
In the first post we saw how curiosity and flow rate could work against each other. To an extent, restricting the flow rate of an idea induces greater curiosity because we naturally want things which are rarer. This provides us with a motivation to seek out the forbidden knowledge... but only to a point. Beyond a certain difficulty level, i.e. if the flow rate is sufficiently low, we decide that the reward isn't worth the effort and it becomes undesirable. We've also seen that there's a big difference between knowledge and belief : when we do uncover the hidden information, the fact that it was hidden doesn't necessarily make us more susceptible to believing it.
And even if we resent whoever enacted the restriction, that doesn't mean we're likely to believe whatever they're hiding either. We might not even have the wit to understand it, much less accept it - like digging for buried treasure only to find a document entitled Quantum Electrodynamics : Theoretical Applications in N-Dimensional Feline Space With Extra Difficult Equations : Volume V.
No-platforming efforts appear to be generally successful though. They attract a bit of attention but quickly become non-stories. Far from fanning a Streisand effect, it seems that giving these people too much airtime is a far better way to disseminate and normalise their views. Or, what's more likely to maximise flow : giving them a prime-time slot on a major T.V. channel, or not giving them a prime-time slot on a major T.V. channel ? Obviously it's the first one, because shouting from a soapbox just can't compete with arguing with David Dimbleby in front of millions of people*. Whether or not they'll be successful on T.V. depends on all the factors we've discussed here and in part two. Sometimes putting lunatics on television just exposes them as lunatics and makes them more despised rather than winning them support; exposure can, in the right conditions, actually quench an idea rather than fan the flames.
* Provided, that is, that no other comparable outlet replaces them. If that happens then there probably will be a strong Streisand effect - unity of behaviour is important.
It may be proverbial, but shouting from the rooftops isn't all that effective compared to ranting on Twitter.
The Streisand effect doesn't seem to happen with no-platforming because we already know the sort of thing people are going to say and there are usually multiple ways to access them anyway. But with ideas and pure information, that's different. Curiosity is likely to be inflamed : what is this forbidden fruit I must not eat ? What delights am I missing ? Any attempts at an outright ban had better be pretty thorough if curiosity isn't to overwhelm the low flow rate, and in the modern internet era even small leaks have the prospect of leading to a torrential flood.
Spread of real (left) and fake (right) news on twitter. What's not shown - for simplicity - is that in the case of mainstream media there are several different central sources all saying roughly the same thing, though possibly to different audiences. Of course, the spread pattern itself isn't solid proof of authenticity - I bet blog posts and other social media posts behave in a similar way to the one on the right, and I'll here remind the reader that some blogs are really very good indeed. Obviously. The few blog posts I have with substantial numbers of hits have been ones picked up by larger distributors. |
Which gave me the weird idea that maybe InfoWars has something in common with observational astronomy. After all, science-class telescopes are hard to access, so the community of users is very small, and some observations can take so long to perform that they're never repeated. So as with fake news, there can indeed be only a single source generating unique information. But then, major political figures are also single-point information sources, and like astronomy press releases, statements by politicians tend to reach a whole variety of major outlets. So again, the spread of basically truthful information (I mean in the sense of reporting what people said and did, not necessarily if what they said was actually true) is going to be very different from the spread of lies. That doesn't mean that lies can't spread more effectively when they take hold, just that they do so a bit differently.
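To make the difference between the two spread patterns a little more concrete, here's a toy simulation - a minimal sketch with entirely invented numbers, not a reconstruction of the actual Twitter study. It contrasts a handful of central outlets each broadcasting directly to a big audience with a single source that relies on person-to-person reshares :

```python
import random

random.seed(42)
POPULATION = 10_000

def broadcast_spread(n_sources=5, reach=0.4):
    """'Mainstream' pattern : several independent central outlets each reach a
    large, overlapping slice of the population directly, so every reader is
    only one hop away from an original source."""
    hops = {}
    for _ in range(n_sources):
        audience = random.sample(range(POPULATION), int(reach * POPULATION))
        for person in audience:
            hops.setdefault(person, 1)
    return hops

def viral_spread(followers=4, reshare_prob=0.3, max_depth=50):
    """'Fake news' pattern : one origin, then person-to-person reshares. Each
    reader is recorded with the number of reshare hops back to the source."""
    hops = {0: 0}
    frontier = [0]
    depth = 0
    while frontier and depth < max_depth:
        depth += 1
        next_frontier = []
        for _ in frontier:
            for _ in range(followers):
                person = random.randrange(POPULATION)
                if person not in hops and random.random() < reshare_prob:
                    hops[person] = depth
                    next_frontier.append(person)
        frontier = next_frontier
    return hops

for name, result in (("broadcast", broadcast_spread()), ("viral", viral_spread())):
    depths = list(result.values())
    print(f"{name:9s}: reached {len(depths):5d} people, "
          f"mean hops from source = {sum(depths) / len(depths):.1f}")
```

The broadcast pattern reaches everyone in a single hop; the viral one only keeps going so long as each reader triggers, on average, at least one further reshare - drop reshare_prob below that and the cascade fizzles out, which is essentially the flow-rate threshold discussed below.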
Creating something that goes viral, i.e. has appeal to psychological universals, isn't easy. But it's undoubtedly easier to construct a lie that does this than to discover a truth with the same properties, which is why lies often spread faster than truth on social media. It's nothing to do with "truthiness" per se so much as with novelty and appeal. But even while lies can be spread more easily, their unique, individual nature means they may actually be easier to suppress, so long as that is done at source. Fighting them after they've spread is a massively harder task precisely because they've been designed to be convincing, in a way that's seldom possible with the truth.
A truth that spread virally, with universal appeal and easily understood. |
* Probably only in a limited way though, since humans do seem to share some hard-wired tendencies.
** Because we'd be embarking on a campaign of bloody vengeance.
But there are important differences too. A common objection is that if you ban fake news, you lose the ability to track who may be falling victim to it - it goes underground. However, whereas if you banned some kinds of research, some of it (at least the kind that doesn't require access to major technological facilities) most definitely would do just that, that's not how fake news works. It explicitly relies on reshares to propagate; it needs as much exposure as possible. Going underground wouldn't help it in the slightest, and there's a very good reason for that. The thing is, most fake news isn't driven by a belief in "alternative facts" at all; its goal* is not a misguided attempt at enlightenment of the kind that genuine Bigfoot believers engage in. Rather, it is an attempt to confuse and sow mistrust - not to convince the viewers that anything in particular is true so much as to persuade them that they can't trust particular sources (or worse, that they can't trust particular claims and therefore any source making them). It aims to replace dispassionate facts, those pesky, emotionless bits of data that can't be bent, with emotion-driven ideology, which can. It thrives on the very polarisation that it's designed to promote, as well as the erosion of both critical and analytical thinking that it exacerbates. Removing it is likely, in the long term, to do far more good than harm.
* For a really weird counter-example, one which is so weird I'm skeptical of the claim, see this.
...provided, that is, that it's removed wholesale, not piecemeal. Again, a single platform banning it may only drive traffic flow to the others, strengthen existing belief, and excite the curiosity of the previously uninterested. By contrast if such content is kept very scarce indeed, then its fringe status becomes self-maintaining. Beyond that threshold, the rarer it is, the lower the flow rate and the fewer people it can ensnare.
For the sake of cognitive ease, I repeat : fake news doesn't (just) happen because some nutter is genuinely convinced of the healing power of dandruff or whatnot. It's deliberate propaganda, but persuasion is not its main objective. Only a very small fraction of viewers will actually be convinced by any particular article, but their small numbers don't matter. Those unfortunate few will become vocal, prominent figures, so instead of having to debate whether we should increase taxes we end up discussing whether the Loch Ness Monster was a Nazi. Yay. The biggest effect this has on actual persuasion is that by positing absurd, extreme views as legitimate, it may shift the Overton window. This exploits the relative nature of our default comparisons : if people start to apparently believe in Nazi lake monsters, then discussing whether we should make Mexico build a wall looks practically sensible.
But the main point is quite different : if you can't convince, then confuse. If you get people to doubt everything, then they believe nothing. Exposure to a little bit of fake news does no-one any harm, and might even make them more skeptical and cautious (this is the basis of the "inoculation" approach to combating it). Again though, everything has a threshold. Too much misinformation doesn't make people super-duper critical thinkers at all - rather, they end up basing everything on ideology and emotion. That's why such tactics can get away with almost comical levels of farcical claims, because as long as they reach enough people, they're bound to fool some of them. It's much like why spam emails are so badly-worded : they deliberately try and avoid skeptics in order to reach the more gullible.
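To put some numbers on "as long as they reach enough people, they're bound to fool some of them" - a back-of-the-envelope sketch only, with every figure invented purely for illustration :

```python
# All of these figures are invented purely to illustrate the scale of the effect.
reach = 5_000_000        # people who see a given piece of fake news
convinced_rate = 0.0005  # tiny fraction gullible enough to take it at face value
vocal_rate = 0.2         # fraction of those who then loudly promote it

convinced = reach * convinced_rate   # 2,500 people convinced outright
vocal = convinced * vocal_rate       # 500 become vocal, visible advocates

print(f"{convinced:,.0f} convinced, {vocal:,.0f} of them vocal about it")
```

A conversion rate that any honest publisher would regard as a total failure still hands the propagandist hundreds of loud, visible advocates - which, as above, is all the tactic needs.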
Note that this is quite unlike the tactics of the ordinary politicians we're more familiar with - they want to actually persuade us, and know that telling an enormous number of unconvincing lies won't work. Consistency is a prerequisite to persuasion. The goal with internet-based fake news, on the other hand, is saturation bombing, to try and overwhelm the flow of credible information at least for whoever might be vulnerable (again, the relevant flow of information is local). It doesn't have to reach everybody to cause wider problems. It doesn't even matter if a fake news source is self-contradictory. The fake news outlet can keep going so long as it has sufficient resources - a politician trying the same thing will, unless things have gone horribly, horribly wrong, be voted out of office, or at least remain a minor figure on the sidelines.
Unless things have gone horribly, horribly wrong, that is.
As for those who aren't so convinced, they don't escape unscathed either. Knowledge of what's being done means they can't be certain if someone is arguing sincerely or deliberately trolling. This explains why anyone would bother, for instance, writing negative comments about the latest Star Wars movie. It's low-hanging fruit, easy to play on emotions, and exposure of what's being done only makes people trust the internet even less.
And there's another important subtlety. If you're not convinced by fake news, you might still believe that other social groups are. What it's doing here is handing people ready-made straw men : arguments which are much more absurd than anything that someone really believes (yes, some people believe in higher taxation; no, no-one believes the rich should be hunted down and eaten*). The fact that they're easy to debunk actually becomes a very powerful asset. Instead of discussing the details of opposing but equally sophisticated (and boring) fiscal policies, we end up seeing the other side as believing in exciting but incredibly dumb things like flying squirrel monsters from Uranus, or whatever. It wrecks the credibility of the other side and makes them appear far, far stupider than most of them really are. The other side become self-evidently pantomime villains and therefore anyone agreeing with them is obviously stupid... again, as we've seen previously, this is a route to crude, absolute thinking, the polarisation the fake news creates also driving its own spread.
* Right... ?
It doesn't always work, of course. It doesn't have to. It just has to do some damage. Fake news isn't about encouraging rational debate, it's about shutting it down.
The Exceptions That Prove The Rule, Or At Least Provide Interesting Evidence For It
Hopefully it's apparent from all this that not all bans will work in all situations - such a degree of control is impossible and undesirable. Cutting connections can, sometimes, actually lead to a more linear, viral flow of information. The network structure of followers as well as instigators is important, as is the strength of their induced belief - dealing with a bunch of still-devout acolytes will be different from dealing with the more casual sort; if we ban a particular outlet for saying certain things but fail to ban any replacement that emerges, well, that clearly won't work. And if we cut the connections between groupthinking collectives, but don't do anything about their internal connections, we'll still be left with bunches of groupthinking morons, floating around in a void of blissful stupidity.
Woo bloody hoo. |
In some situations, especially where multiple sources are saying the same thing, adding more connections - fighting speech with speech - may indeed be a much better option than regulation. Here's a particularly nice counter-example : revenge. Violence, like ideas, can arise due to people making independent observations and concluding that they must act violently for a host of reasons, some much better than others. But it can also happen in response to violence. What research in social work has found is that if someone shows signs of potentially responding violently due to provocation, they can be prevented by "interrupters", who essentially talk them out of it. This isn't easy - it requires trust and credibility in the eyes of the would-be criminals, careful timing, and large numbers of social workers. It's not perfect either. But it does work.
For all that belief is not behaviour, the similarities to the spread of ideas are striking. By adding in more connections - not just any connections, but the right type of connections at the right time - they overwhelm the belief that violence is necessary. Regulation to prevent violence already exists, but this approach of adding more connections, of treating it like an epidemic instead of being solely due to personal faults, is more successful.
We might be able to apply similar tactics to ideas. While fake news dies if it goes underground, other ideas do not. So in some cases, the ability to monitor could indeed be an advantage - provided we don't limit ourselves to monitoring. If we detect someone going over the edge of crazy and successfully intervene, they'll be in a much better position than anyone else to disrupt their former groupthink bubble.
While this case does illustrate that sometimes the answer can be more speech rather than less*, it also reveals something else. Sometimes people claim that things like fake news or certain so-called politicians are symptoms, not causes. We see here that just like with a disease, sometimes a symptom can also itself be a direct cause. After all, the very reason viruses give us symptoms is to spread themselves around. If you had a perfect cure for the common cold but couldn't give it to everyone at once, you'd never eliminate the cold virus because it would continue to re-infect people. Similarly, even if you could devise a perfect way to disinfect victims of propaganda, you'd still have to stop it in order both to prevent its continued spread and to limit the damage it does that makes victims harder to treat. Cures, treatments and vaccinations are different and as such they must be applied differently.
* Interestingly, shouting at people so loudly they give up trying to speak can sometimes be more successful than carefully reasoning with them.
It's a bit like how social prominence can be self-sustaining, how behaviour sometimes causes belief rather than the other way around. Why is the Queen so popular ? Largely because the media chose to make her popular. Once you reach a certain status level, the authority and credibility that bestows becomes very hard to erode. Popularity becomes a symptom and cause of itself. So whether a cause is proximate or ultimate, you still have to remove it.
Otherwise known as "famous for being famous" syndrome. |
Let's take drugs. Wait ! I mean, let's consider drugs, obviously. It's an open secret in the Western world that cannabis is basically harmless. Hence, responding to cannabis users in the same way as heroin addicts seems to be causing a slow but gathering backlash; there's less of a moral leap from an illegal but harmless activity to the seedy criminal underworld. Recently, the U.K. banned substances that were previously legal : this appears to have resulted in a drop in use, but now users are harder to monitor because the business is off the grid. Portugal, on the other hand, experienced a significant reduction in drug-related crime through less restrictive, more progressive policies. More draconian anti-gun policies, by contrast, are overwhelmingly successful.
Clearly, the fact that over-regulation can indeed backfire does not mean we should have no regulation at all. Thresholds again : the effect of a totalitarian ban on discussion of any kind is likely to be utterly different to lighter restrictions; relative comparisons also mean that we'll always perceive problems even if things have profoundly improved. Anyone expecting a perfect solution is quite dangerously mad, and simply doesn't understand people at all.
We also have to recognise that each system has connections and structures that are unique to itself, so that the kind of behaviour we might (have to) tolerate on the internet might be different from what we might be prepared to accept over dinner in a fancy Parisian restaurant. Social media is a very different beast from traditional communication methods : it allows us to meet new people on an international scale, hold group or individual discussions, keep our thoughts private or public, edit what we said retroactively, select what we listen to and what we filter out (in many cases there are also filtering algorithms at work), communicate with images, text, video and emoticons, and choose if and how what we share gets propagated further. It is not remotely like email or phone calls or television or writing letters or town hall meetings. It's something different. It offers the huge complexity of meeting new people in a completely different way than we're used to, with radically different communication tools combined with the ability to get organised.
Some of the problems of the internet might well stem from its increasing centralisation. Yet the bygone era of geocities and WinMX chatrooms wasn't some Golden Age of Freedom - there were just as many crazies around back then as there are now. This has led to the suggestion that the crazier aspects of human nature are just that, a part of our nature, the ideological/analytical "soil", if you will. You'll always get some radically crazy ideas popping up, no matter what you do. They only seem more prevalent, perhaps, because social media has been a much more effective tool for them to get organised. Which, I suggest, doesn't actually assist the real crazies all that much (because they're too busy wiping the drool off their keyboards), but it does assist those who have the kind of appealing-but-wrong ideas that ultimately prove so destructive. Those ideas may indeed be genuinely fostered by the internet.
Not Just The Thought Police
I need scarcely remind readers that criticism of an idea - "I hate this, here's why" - is not even remotely similar to saying, "ban all discussion of this". |
My final point is that there are different approaches to regulation. Even within the strictest approach of legislated bans, we've seen that it would be pretty mad to try and ban all forms of discussion. It's the successful promotion of ideas we're trying to prevent, and we can't very well do that if we prevent people from explaining why they're bad ideas in the first place (the backfire effect notwithstanding). Restrictions on raw factual information are occasionally possible and necessary, but rarely (if ever !) desirable if we're trying to shape thinking in a positive way that promotes free inquiry and critical discussion. No, in this context it's only the promotion of subjective interpretations where there's any real prospect or desire for success. Granted, "promotion" is itself subjective and that's going to make some people uncomfortable. But really it's the same with enormous areas of legal judgements. We don't have a perfectly objective basis for determining reality, so the question, "who gets to decide ?" isn't actually a sensible issue at all - outside the philosophy class, at any rate.
Yet regulation doesn't have to be as blunt* as that. The UK's laws, which are currently still moderated by the European Convention on Human Rights, are complicated to say the least - they go a step beyond the subjective idea of "promotion" to include the even harder to judge "intent". Perhaps more interesting are the provisions in the broadcasting code (different rules apply - or more usually don't apply - to newspapers, just to make things even more confusing) that cover due impartiality, accuracy and representation. Representation means we get, for example, people of diametrically opposed political views appearing on the same media outlets. Sure, T.V. and radio channels can have biases**, but they aren't hyper-polarised. There's no equivalent of Fox "News" on UK TV (look instead to the tabloids for that).
* I've decided to completely omit the issue of punishments, of which many options are possible.
** And even host diametrically opposed hosts, such as LBC radio which features both James O'Brien (an incredibly angry yet perpetually bored hardline Remainer) and Nigel Farage (a toad that speaks).
An even more subtle approach would be not to regulate what people say, but the business model of media outlets. Ideologies are one thing, but goals and incentives are another. Organising these systems need not even raise the moral dilemmas of particular topics, people, or speech regulations in general : for example, discussing whether institutions should be public or private, their oversight, taxation, the number of outlets an individual, company or other agency can own, and the amount and type of advertising revenue they're allowed (for why this matters, see "clickbait"). There are, I think, cases where we can rationally say, "the promotion of this material should be prohibited", but in general, regulating specific topics and people is something to discourage if not avoid. After all, as we've seen, the most effective way to prevent an idea from spreading is to ensure that no-one ever raises it at all.
Summary and Conclusions
It's horribly complicated. Please don't go away thinking that I'm fully convinced of any of this. Rather, think of it as an attempt at an investigation - some of it will be wrong, but hopefully some of it will be right as well. As it stands, we've seen that yes, there are some cases where regulation is a terrible idea and others where it merely stems the tide. But in some particular cases, allowing ideas free rein is far more likely to let them attract widespread support than any amount of restriction would.
In the previous posts I tried to generalise the conditions for successful persuasion by looking at the extreme cases. Let's see if we can do the same for restricting the flow of information. According to what we think we know, it follows that a ban will in general be effective if :
- The information is subjective, overtly promotional, already disliked by a large majority of people, hard to understand, difficult to arrive at independently, appears to contradict other knowledge, and doesn't excite curiosity (especially if the gist of it is well-known and only specific details are lacking).
- The ban is applied uniformly to (or better yet by) all media outlets and not just one or two, the resulting inaccessibility of the information is sufficient to be discouraging rather than challenging, the lack of reporting itself goes unreported, the restrictions are not so harsh that they cause a dislike of whoever instigated the ban (e.g. tolerating minor infractions and only enforcing more flagrant violations), corrective measures which account for their personal situations are used to persuade those who still believe in the idea, and any attempts at replacing the banned information or source are dealt with in the same way.
Or we can look at it the other way around, and have a bash at finding the conditions under which restrictions will fail and even make things worse :
- The information is factual, neutral, popular, easy to understand and formulate independently, fits well with other established knowledge, and excites curiosity.
- The ban is applied haphazardly to different people and media outlets, doesn't make the information significantly harder to obtain, is actively reported, has harsh (and/or inconsistent) punishments for offenders, no effort is made to deal with people already convinced, and any attempts to say the same idea in a different way are simply ignored.
We've seen that there is only one way to deal with supposedly-obvious ideas that a large fraction of people can form of their own accord : education can at least partly shape our default interpretations. But this we know is very hard to do and, schooling being limited to the young, won't affect everybody. We need to remember a lesson from last time - it matters who we are. We need to recognise that this is a network problem, that our actions as private individuals can have very different effects to those of the network managers. For example, even if we account for everything in part two and make the best, kindest effort at persuasion that we can, it may still backfire because our lone voice simply cannot match the many other trusted information sources assailing our interlocutor.
This doesn't mean we should throw up our hands in despair. It doesn't have to be like this...
... or like this :
... instead, it can be like this :
As individuals, we can and should learn about persuasive techniques - after all, most debates we have in everyday life don't feature crazy lunatics rambling on about killer cheese or gay frogs. But alas, the days when total bollocks was safely locked away in obscure internet chatrooms are over. The internet crazies have been well and truly unleashed, so we have to know how to deal with them... but also, perhaps more importantly, the much more numerous kind of people who agree with their sentiments rather than their statements. There's little enough point in trying to reason with the inherently malevolent, darker side of humanity, but we have to work with the more casual and much more numerous followers.
We can also see how the success of those techniques is strongly context-dependent. Ideas can be kept fringe or mainstream through a combination of praise and shame just as much as they can through reasoned argument, and then the level of support for an idea is somewhat self-sustaining. But when someone does change their mind, those same techniques can fail : arguments which win hearts don't always change minds. It's not that we should never use careful arguments or really stupid memes to convince people, it's that their success depends on both the individual and the network. It's easy to whip up emotive rhetoric*, but overall, it's probably much better to be seen as fair, impartial, and well-intentioned. That way, when you do need to strongly condemn something, your established trust plays a huge role in winning support from those you would otherwise alienate. Even (or especially) genuine truth-seeking debates are about much more than the raw ideas themselves.
* See all of twitter and its feckin' stupid outbursts whenever any celebrity does something even very mildly provocative.
This doesn't mean we have to give group hugs to Nazis or start reading The Communist Manifesto in shady bars, nor should we avoid criticising ideas we find abhorrent or even avoid venting anger through rants and mockery; again, criticism is not regulation. Indeed, if we try and completely suppress our feelings the long-term results will only be much worse for everyone. What we should try and do is avoid only ever using mockery and derision in place of thoughtful, considered arguments, using nothing but memes to deal with ideas that are actually rather complex. Going on the occasional expletive-laden rant does one the world of good (and we should recognise that some people doing this have actually thought about the issues very deeply; even the coolest heads need to blow off steam from time to time); in contrast, never exploring things in a more measured way is self-defeating.
You think that liberals/conservatives/firemen/ostrich farmers/fly fishing enthusiasts [delete as appropriate] are so awful that there's no hope trying to reason with them ? Maybe you're right; as we saw last time, their beliefs may have nothing much to do with evidence or logic, so your counter-arguments shouldn't be expressed using evidence or logic either. If you can't bring yourself to do this, or just can't figure out how to do it effectively, there's something else you can try instead of shouting at them : don't argue with them at all. Instead, try and examine their crazy ideas in as cool and calm a way as you can. Think of the most rational reasons you can why they believe what they do, the most irrational reasons why you believe what you do, and ways of undermining your own and their beliefs. Don't stop venting or posting memes, but for God's sake occasionally do a more thoughtful analysis as well.
And write them down. Especially if you're an expert and you see some wrong-headed idea has taken root : don't keep it to yourself, post it somewhere people can read it. Right or wrong, change is often a slow, messy, painful process. But despite the risks of backfiring, if there's no choice presented then no choice will be made.
But that's all by-the-by. The most important lesson is to recognise that we're part of a network. We are none of us entirely responsible for our own actions. How could we be, given that we require information to form a conclusion, and that information is usually brought to us by other people ? I do not believe this requires us to give up the notion of free will or personal responsibility in the slightest. It only suggests that instead of blaming everyone for being inherently bad or stupid, we shift a little bit of that blame elsewhere. Altering our own behaviour in personal interactions is a good start, but that's all. If we want to effect real change, we have to direct our attention where it can do the most good. Indeed, if we accept the basic conclusions herein, our sense of personal responsibility should compel us to act, not to give up and say, "meh, not my monkey, not my circus".
But how ? We as individuals don't have full control over the networks we find ourselves in... but we can damn well talk to those who do. We can and should give them advice, to tell them what isn't working. We can also demand that ethics training be taken seriously and be mandatory for company executives, not just a totally uninteresting lecture that the grunts have to endure. Moral philosophy can be explored intelligently and engagingly; it cannot be something that executives of companies that thrive on information are allowed to brush under the carpet.
Accusations that humanity's technical prowess exceeds its wisdom are common. Perhaps they're right, but equally, perhaps it doesn't have to be that way. So we can take control of our actions and say to those controlling the information flow :
"Hey, Bob, look at this research from social philosophy. Did you know that developing trust requires prolonged relationships and interdependency ? Did you know that clickbait is driving polarisation ? See how hard it is for people to break up groupthinking collectives ? Now, Bob, I know you're a good guy, and I don't wanna hurt your business. But you started off as a bloke in a toolshed, and now you're running a company worth eight hundred billion dollars which has millions of subscribers. So ya gotta change, Bob. You gotta keep innovating. That's what made you rich in the first place, after all.
There's a number of things you can try, Bob. You might wanna think about restricting some of that more extreme stuff, and maybe a few of the worst ought to be reported to the police. I know your ideals and business model might say otherwise, but we've got some guidelines as to how to do this without killing discussions or making things worse. We're not saying you should try and regulate everything. We don't want that at all. In fact, we think there are some pretty good reasons why that wouldn't even work.
And you can do something much better than simply taking down propaganda, Bob. Have a read of this stuff about small world networks. People do like a good rant, Bob, but they also like interesting and meaningful discussions without feeling like they're being attacked for the sake of it. How 'bout you try designing a system that cultivates that instead of exploiting anger and addictive tendencies, eh ? You could try a system based around common interests rather than ideologies, stuff that's got a better shot at bringing people together than politics and identity. Could be worth a shot, eh Bob ?"
It won't work for everything, but it certainly seems worth a go to me.
Moreover, recently the big social media players have cracked down heavily on fake news sources. Who has mourned their passing ? No-one. Where, after its removal, was the cry to restore it under the name of freedom of speech ? Nowhere. No-one really thinks that fake news is so ambiguous that it can't be identified with sufficient accuracy to remove it, or that its removal would somehow cause massive harm, no matter the hoo-hah they make before it actually happens.
Suppose though that we succeeded in completely removing the abject lies of fake news on the internet. We would, contrary to popular belief, still be an enormously long way from making the world a better place. Fake news has only ever been a small part of our information flow and would never succeed at all if the soil for it wasn't fertile; our messages to the still-dominant broadcasters and newspapers would have to be quite different to those to the social media moguls. The worst case of all to deal with is a subjective concept that's become absolutely mainstream and is accepted by a large fraction of the populace, when the hyperpartisanship that genuinely makes all sides mean and dumb is standard practice. You'll forgive me, of course, when I throw up my hands and say, "I don't know how to deal with that".
So yeah, it's complicated. Sometimes banning can work, sometimes it can make things worse. We will never satisfy everyone : what someone thinks is an essential freedom to have can be something other people think it's essential to be free from; one person's progressive, corrective policy is another's harsh injustice.
Society is shaped by the flows of information and money. There is no known way of controlling either in a way that satisfies everyone, and perhaps there never will be. But the basic model that a lot of people cling to - "I can say something offensive but you can say something offensive back to me" - hardly looks like a workable solution, or at the very least it's one riddled with problems and exceptions. I'm not trying to claim anything as grandiose as a better model for society. All I'm saying is that as long as we treat information as something special and set apart from the other aspects of society, granting it unique levels of freedom (whilst, especially in the American case, not recognising that freedom from is also a kind of freedom), then we'll be stuck dealing with the same old rabble-rousing demagogic populists we've always struggled with.
The dangers of over-regulation (tyranny, stifling social progress, maintaining injustice, etc.) are so clear that I need not go into them. But everyone forgets or seeks to excuse the equal dangers of under-regulation, seemingly unaware that the kinds of "freedom to" granted to speech are truly extraordinary in comparison with other legal sectors. I'm quite happy to be proven wrong on this. Present me with an alternative workable solution, something more sophisticated than saying "fight speech with speech" - because we already know that doesn't always work, and sometimes fails very badly - and I'll listen. No-one should think there's some perfect method we can apply here. But dammit, I think we can find a much better solution than letting crazy people say whatever the hell they want and pretending we'd be worse off if we tried to stop them. In the end, that simply isn't true.