The Best Cure for Fake News is Fake News

24/2/2021

We’re living through a pandemic of misinformation, and as a consequence many minds have fallen sick with conspiracy theories. Recently we’ve seen wildfires blamed on space lasers, COVID blamed on Bill Gates, and Donald Trump’s election loss blamed on Satanic paedophiles.

The ugly real-world consequences of such beliefs, from the vandalism of 5G network towers in the UK to the storming of the Capitol in the US, have put tech companies under renewed pressure to crack down on the spread of conspiracy theories. They’ve sought to do this mostly by restricting suspect content – hiding it behind warning labels, de-indexing it from search engines, or outright deleting it from the web. The tech giants are now looking to open up new fronts in their war against misinformation; a few days ago some of them, including Facebook, Twitter, and Google, released an industry code pledging to more rigorously police content on their platforms in Australia, and it’s likely they’ll soon adopt similar codes elsewhere.

Unfortunately, such pledges are of little worth, because the tech giants’ efforts are doomed to fail. Not only does the policing of conspiracy theories do nothing to stop their spread, it actually spreads them further. In fact, the most effective way to fight fake news is to do the very opposite of what is being done, and to simply let conspiracy theories run rampant.

The most effective way to fight fake news is to do the very opposite of what is being done, and to simply let conspiracy theories run rampant.

The record of censoring harmful ideas speaks for itself. Here in the UK, police and tech companies have been working for 20 years to suppress the online spread of two problematic worldviews: jihadism and neo-Nazism. The result has been that jihadism remains the largest terrorist threat and far-right extremism is now the fastest-growing threat in the country.

The reason bad ideas cannot be effectively policed is that they are hard to identify and even harder to silence.

It’s hard to identify bad ideas because, firstly, the web is too vast to moderate; you can’t police online discourse, only a tiny segment of it. Secondly, moderators, whether human or machine, are fallible and gameable; YouTube’s algorithms, for instance, recently banned a piece of chess commentary for hate speech due to the presence of words like “white,” “black,” and “attack.” Thirdly, by the time moderators recognize a piece of misinformation as a threat, it has already infected countless minds. Manifestations of real-world violence are lagging indicators of what’s happening in the online world, so you cannot moderate the web’s present, only its past.
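
To see how easily automated moderation can misfire, here is a minimal sketch of a naive keyword-based filter wrongly flagging chess commentary. The keyword list, threshold, and function name are invented for illustration; this is not how YouTube’s actual system works.

```python
import re

# Illustrative sketch only: the flagged terms and threshold are assumptions
# made up for this example, not any real platform's moderation rules.
FLAGGED_TERMS = {"white", "black", "attack", "kill", "destroy"}

def looks_like_hate_speech(text: str, threshold: int = 3) -> bool:
    """Flag text that contains several 'suspicious' keywords, ignoring context."""
    words = re.findall(r"[a-z]+", text.lower())
    hits = sum(1 for word in words if word in FLAGGED_TERMS)
    return hits >= threshold

commentary = (
    "White attacks on the kingside while Black defends; "
    "the attack on the black king looks decisive."
)
print(looks_like_hate_speech(commentary))  # True: harmless chess talk gets flagged
```

Real moderation systems are far more sophisticated than this, but the failure mode is the same: a classifier sees tokens, not context.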

A bigger problem than the difficulty of identifying misinformation is that even when it’s successfully identified, censoring it doesn’t work. Shock-jock and conspiracy-theorist-in-chief Alex Jones was purged from all the major tech platforms in 2018, but this didn’t stop him going viral on Twitter last month and on YouTube last week.

Alex Jones: Louder than ever (Picture Credit: Sean P. Anderson)

Conspiracy theory super-spreaders like Jones typically have such large fanbases that they are uncancellable. But censorship is just as useless against those with far smaller followings, and in fact, it only radicalizes such people further.

Following the Capitol riot, which online purges of QAnon content did nothing to prevent, the tech giants responded with…yet another purge of QAnon accounts. The result was not a drop in QAnon activity but a surge in downloads of alternative apps. In other words, the mass ejection of conspiracy theorists from online polite society didn’t silence them, it concentrated them into fringe communities on encrypted apps, where they’ll now form echo-chambers beyond the reach of law or logic and radicalize each other even faster.

But censorship doesn’t just push conspiracy theorists deeper down their rabbit holes. It can also help radicalize those without specific interest in such beliefs – so-called “normies” – via the Streisand effect, which makes what’s forbidden more popular by publicizing it and sparking curiosity.

The Streisand effect makes what’s forbidden more popular by publicizing it and sparking curiosity.

This problem is compounded by the fact that censorship is bad optics; it lends credence to the idea that people are being controlled by shadowy forces. Censorship of the theory is used as proof of the conspiracy. To quote Avi Yemini, a correspondent for the right-wing website Rebel News:

Twitter has just CENSORED Trump’s call for peace. That tells us everything we need to know about who is REALLY behind the escalating violence.

Few things invite suspicion like megacorporations controlling the flow of the world’s information by policing beliefs without transparency or accountability. And such suspicions are not entirely unfounded. Who, in the end, decides what is true? Who fact-checks the fact-checkers? Is a world in which truth is enforced by faceless bureaucrats and secret algorithms really less dystopian than the wildest conspiracy theories?

Now let’s assume that everything I’ve just said is false, and censoring bad ideas does in fact slow their spread. Even then, censorship would still be the wrong way to fight misinformation.

To understand why, consider a common analogy. Conspiracy theories are often understood as viruses that use the web to spread. In such a scenario, the ideal of an internet fully insulated from conspiracy theories would be akin to a sterile environment. And what are the long-term consequences of living in such an environment? Experimenters tried it with mice, which ended up more prone to disease than control groups. The mice’s immune systems, never tested by microbes, failed to develop effective responses to them. And the same is true of our psychological immune systems; a mind unaccustomed to deceit is the easiest to deceive. You don’t stop people believing lies by making them dependent on others to decide for them what is true, and that is exactly what an internet quarantined against misinformation would do.

A mind unaccustomed to deceit is the easiest to deceive. You don’t stop people believing lies by making them dependent on others to decide for them what is true.

Fortunately, there may be a real solution. Just as sheltering people from a harm can make them more vulnerable to it, exposing them to that harm can strengthen them against it. This is how vaccines work: by exposing a subject to a controlled dose of a pathogen so the body can deconstruct it and learn how to beat it.

Several studies have been carried out to determine whether people can be vaccinated against fake news. In a 2019 experiment at the University of Cambridge, researchers devised a game called Bad News in which players tried to propagate a conspiracy theory online. To succeed, they had to deconstruct the conspiracy theory to work out what made it persuasive and how it exploited emotion. After playing the game, people became more aware of the methods used to push conspiracy theories, and thereby became more resistant to them.

The study had a robust sample size of 15,000, and its results were replicated with another, similar game, Go Viral. The authors of the studies concluded that exposing a group of people to weak, deconstructed conspiracy theories could fortify their psychological immune systems against stronger ones, just like a vaccine.

The vaccination analogy has its shortcomings, though. A vaccine requires only a small number of doses, because the immunity it offers is usually substantial. The immunity offered by playing the misinformation games, on the other hand, was relatively modest. The games didn’t make people invulnerable, just slightly less vulnerable.

There is, unfortunately, no real vaccine for conspiracy theories, but the same principle of controlled exposure can be used to help people guard against them. The process is known as hormesis, and, unlike vaccination, it tends to involve not a few doses but constant exposure, so that the body gradually builds up lasting immunity.

Ancient Indian texts describe the Visha Kanya, young women raised to be assassins in high society. They were reportedly reared on a diet of low-dose poisons to make them immune, so they could kill their victims with poisoned kisses. A few centuries later, King Mithridates of Pontus was said to be so afraid of being poisoned that he had every known poison cultivated in his gardens, from which he brewed a cocktail called Mithridate, containing low doses of each, which he’d regularly imbibe to make himself invulnerable. It’s said he was so successful that when he tried to commit suicide by poison after his defeat by Pompey, he failed, and had to ask his bodyguard to do the job by sword.

King Mithridates, depicted in Mithridates Falls in Love with Stratonice, by Louis-Jean-François Lagrenée

The process of Mithridatism, as it has come to be called, may be what is needed to increase our resistance to conspiracy theories: a dietary regimen of low-strength Kool-Aid to gradually confer immunity to higher doses.

But how exactly do we use Mithridatism to immunize entire populations against misinformation? It’s not feasible to expect everyone to download the Bad News game and regularly play it. The answer is to think bigger: to turn the entire internet into the Bad News game, so that people can’t browse the web without playing. This isn’t as difficult as it sounds.

In the 1950s, a group of avant-garde artists known as the Situationists thought the masses had been brainwashed by big corporations into becoming mindless consumers. Seeking to rouse them from this state, the group employed a tactic called “detournement”: using the methods of consumerist media against itself. This involved disseminating “subvertisements,” replicas of popular ads with subtle, ironic differences that made it hard for audiences to distinguish between marketing and parody. The purpose was to foster suspicion and to familiarize people with the methods used to manipulate them by making those methods overt. If people were always unsure whether what they were seeing was art or advertisement, satire or sincerity, they would become more vigilant.

Modern subvertisement

The tactic of detournement was refined in the 1970s by the writer Robert Anton Wilson, who, with his friends in media, tried to seed into society the idea of Operation Mindfuck, a conspiracy theory intended to immunize people against conspiracy theories. It alleged that shadowy agents were orchestrating an elaborate plot to deceive the public for unknown reasons. The most important feature of the plot was that it could be happening anywhere and anytime. Was the ad you just watched genuine, or part of Operation Mindfuck? Was that captivating political speech sincere, or part of Operation Mindfuck? The idea was that if everyone was watchful for the Operation, they would become suspicious of everything they saw, and therefore less vulnerable to other conspiracy theories.

Neal Stephenson’s sci-fi novel Fall; or, Dodge in Hell illustrates how this idea could work in the digital age. In the story a woman named Mauve becomes an unwilling celebrity after being doxxed online. Eager to fight the accusations against her, she turns to a man named Pluto, who has an unusual plan: instead of trying to silence the accusations, he’ll flood the web with accusations, each different, some believable, others ridiculous, making it impossible to separate truth from lies and leaving the public suspicious of all accusations.

This idea has even been used in the real world. Following accusations of Russian meddling in the US election, Putin’s supporters didn’t respond by denying the allegations, but by multiplying them. They’d post pictures of, say, a man falling off his bicycle, with a caption reading: “Russians did it!” Eventually, “Russians did it!” became a meme, at which point accusing the Russians of being behind any plot made you look worse than wrong; it made you look predictable and clichéd.

This approach can be used not just to propagate misinformation, but also to fight it. One of Stephenson’s inspirations for his novel was the idea of the Encyclopedia Disinformatica, a hypothetical information source proposed by computer scientist Matt Blaze in the 1990s. The idea was that the encyclopedia would be considered the go-to information source, like Wikipedia is now, but would contain as many lies as facts, so that anyone who trusted it would soon find their trust was misplaced, and thereby learn an invaluable lesson about not believing what they read online. The web itself could offer similar lessons; if people see enough fake news online, they’ll be more inclined to cross-check everything else they read, and in so doing gain a measure of immunity against not one conspiracy theory but against all of them.

The best part is that creating this environment, this digital Garden of Mithridates, is the easiest thing in the world. All we have to do is nothing. The web is already an Encyclopedia Disinformatica that can teach us not to trust what we’re told – provided we’re able to see enough of it. The tech giants need only allow the natural ecology of the web to run its course by refraining from trying to curate our realities; this means not just refraining from censorship, but also refraining from railroading people into simplistic worldviews through recommendation algorithms that feed them more of what they’ve already seen. Conspiracy theories are not beaten by curating reality – they spring from it.
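
To make the “more of what they’ve already seen” dynamic concrete, here is a minimal sketch of an engagement-style recommender; the catalogue, topic labels, and scoring are invented for illustration and don’t describe any real platform’s algorithm.

```python
from collections import Counter

# Illustrative sketch only: the catalogue and topic labels are invented.
CATALOGUE = {
    "space lasers caused the wildfires": "conspiracy",
    "how wildfires actually spread": "science",
    "the election was stolen": "conspiracy",
    "how votes are counted and audited": "civics",
}

def recommend(history: list[str], n: int = 2) -> list[str]:
    """Rank unseen items by how closely their topic matches the user's watch history."""
    topic_counts = Counter(CATALOGUE[item] for item in history)
    unseen = (item for item in CATALOGUE if item not in history)
    return sorted(unseen, key=lambda item: topic_counts[CATALOGUE[item]], reverse=True)[:n]

# After one conspiracy video, the top recommendation is more of the same.
print(recommend(["space lasers caused the wildfires"]))
```

Each recommendation the user accepts strengthens the topic count that produced it, which is the feedback loop described above.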

The web, like the world, will always be full of misinformation. Instead of trying in vain to hide the lies, we should seek to reveal them, so people can understand the kind of world they’re living in, and adapt accordingly. The alternative is to continue along the current path of ever-tightening quarantine, which, far from weakening conspiracy theories, will weaken only our resistance to them.