The last few years have exposed, more than ever, the fragility of our information ecosystem. Elections, Brexit, social unrest, COVID-19 conspiracies, and the war in Ukraine have all been exploited by disinformation operations seeking to undermine faith in governments, stoke fear and anger, and confuse and manipulate us.
Disinformation is false information meant to deceive (trolls posting fake news). In contrast, misinformation is false information spread without malicious intent (friends and family who genuinely believe disinformative content and willingly share it).
Why is Disinformation Effective?
Most disinformation efforts work through so-called ‘censorship through noise’. Disinformation actors push a story and repeat it across different news outlets, which fake accounts run by troll farms then share massively on social media. Cheap, near-universal internet access allows anyone to post information (be it an article, video, comment, or tweet). Consequently, social media users are overwhelmed by the many channels parroting the same narrative, and some come to believe these stories because multiple sources of information seem more credible than one.
These stories don’t need to be accurate or fact-checked. Propagandists aim to be among the first to report a story (which gives them more chances to warp our beliefs, since early accounts prime our attention through the mere-exposure effect), to drown out competing reports, or to sow doubt and scepticism.
Often, propaganda is then reiterated by more credible and legitimate news sources, reinforcing falsehoods.
Doubt is our product since it is the best means of competing with the ‘body of fact’ that exists in the minds of the general public. It is also the means of establishing a controversy.
A tobacco industry executive in 1969 about how cigarette companies should deal with the growing ‘body of fact’ linking smoking and diseases.
This quote about planting doubt is even more relevant to modern propaganda, where we watch acquaintances, friends, or family slip away from us as they fall prey to disorienting news.
An essential resource for detecting propaganda is the propwatch website, which catalogues dozens of propaganda techniques. Each technique links to video examples of propaganda in practice.
One of the most insidious propaganda techniques is the combination of the Gish gallop with Brandolini’s law. The Gish gallop is named after the creationist Duane Gish, who would overwhelm his opponents with a rapid barrage of claims, with no regard for their relevance or accuracy. For somebody using the Gish gallop (through misinterpretation, cherry-picked evidence, whataboutism, or hyperbole), the most important thing is to give a casual observer the impression that they are on the right side and that their opponents are flooded with arguments to which they have no answer. The Gish gallop works because of Brandolini’s law (the “bullshit asymmetry principle”). Alberto Brandolini coined this law after watching a political talk show featuring former Italian Prime Minister Silvio Berlusconi and investigative journalist Marco Travaglio. The law states
The amount of energy needed to refute bullshit is an order of magnitude bigger than that needed to produce it.
contrasting the relative ease of creating disinformation with the colossal effort of debunking it. It is crucial to understand how disinformation and propaganda spread through the written word so that we can build better countermeasures.
Ben Nimmo, former Director of Investigations at the network analysis firm Graphika, presents his 4D model of disinformation tactics.
- Dismiss: if you don’t like what your critics say, insult them.
- Distort: if you don’t like the facts, twist them.
- Distract: if you’re accused of something, accuse someone else of the same thing (whataboutism).
- Dismay: if you don’t like what someone else is planning, try to scare them off (as we can see currently with nuclear threats).
Interestingly, Nimmo is now Global Threat Intel Lead at Meta, the company behind Facebook, Instagram, and WhatsApp. Nimmo’s Twitter feed (or the non-Twitter version) gives detailed information about current influence operations targeting Europe or the United States.
How to Counteract Disinformation
Nimmo also has a few rules for defending against online disinformation campaigns:
Rule #1: Think about the emotional targeting that is going on before clicking on a story because disinformation actors do not care about truth.
“the goal of a lot of these [disinformation] operations is to make people so angry or so afraid that they stop thinking. And once somebody stops thinking, they’re really easy to manipulate.”
Rule #2: Question the motive behind manipulative headlines: “where is this story spreading?” and “who is picking it up?” These questions can give us an idea of the impact of a specific campaign.
Rule #3: Keep calm and carry on.
Rule #4: Don’t let exposure lead to chaos.
Ben Nimmo on defending against online disinformation campaigns
We are creatures of habit, and our information diet is no different:
- When reading news, do we read a story proposed by an algorithm or open our favourite news agency?
- Do the news outlets we usually read provide fact-checking or retract articles when proved wrong?
- Did the headline that grabbed our attention trigger a strong negative emotional reaction? Did that emotion push us to read the article?
- Did the body of the article have the same tone as the headline?
- Did the article feel balanced? Or was it confirming our previous (biased) knowledge?
- Do we dismiss or want to hear more when we hear contradictory information?
- Were all voices represented in the article? Who was missing? Why?
- Did we do our own research to check the statistics or other evidence in the article?
- After reading a news story, do we go and check other sources?
- How compelled do we feel to share this article with others, especially those we know who share our values?
One of the most reputable forums enforcing healthy rules of news consumption is the subreddit r/CredibleDefense, where users discuss national security issues. Some of its guidelines are:
1.1. Strive to be informative, professional, gracious, and encouraging in your communications with other members here. Imagine writing to a superior in the Armed Forces, or a colleague in a think tank or major investigative journal.
1.2. This is a subreddit dedicated to collating articles, opinion pieces by distinguished authors, historical research, and the research of warfare relating to national security issues.
1.3. The purpose of this subreddit is to learn for ourselves, and to bring better public understanding of related topics.
On their daily megathreads, a set of rules is posted, which, among others, state:
Please do:
* Be curious not judgmental,
* Be polite and civil,
* Link to the article or source of information that you are referring to,
* Make it clear what is your opinion and what the source actually says,
* Read the articles before you comment, and comment on the content of the articles,
* Leave a submission statement that justifies the legitimacy or importance of what you are submitting,
* Submit articles that will be relevant 5-10 years from now, and not ephemeral news stories
Among the “do not” rules, they recommend not using memes, emojis, excessive swearing, or foul imagery, and not starting fights with other commenters, making it personal, or outing other commenters.
How many of us adhere to these rules before posting something on social media, be it an article or a response to somebody who doesn’t share our point of view? How many times do we stop and try to be curious and polite rather than judgmental? Yes, we shouldn’t feed trolls, but not everyone who disagrees with us is a troll.
Two other subreddits are related in name to r/CredibleDefense.
One is r/LessCredibleDefence, which has the greeting message: “Welcome to LessCredibleDefence – the home of links which have failed to pass the quality requirement of r/CredibleDefense”.
The other one is r/NonCredibleDefense, home of defense-themed memes, which, unsurprisingly, has a golden rule for their users: “Don’t get us banned.”
Then there are a few citizen activism movements that are successfully disrupting disinformation.
Extremely relevant is this article about the Elves (cyber activists) in the Baltic States and Central Europe.
NAFO (North Atlantic Fella Organization) is a decentralized online army of dog avatars that wields humour as an effective instrument against propaganda. r/NonCredibleDefense shares plenty of NAFO memes.
We all need to be aware of the limitations of our news diet and our critical thinking (the ability to follow the news objectively and change our minds when we come across relevant information). Consider Morgan Housel‘s quote,
Tell people what they want to hear, and you can be wrong indefinitely without penalty.
we must ask ourselves: does this quote also apply to us? Do we allow ourselves to become actors of disinformation by sharing malicious content, thus enabling misinformation?