How Diplomats Can Combat Digital Propaganda

James Pamment has written that for most of the 20th century the term public diplomacy was associated with the term propaganda. According to the Oxford Dictionary, propaganda refers to information, especially of a biased or misleading nature, used to promote a political cause or point of view. During the 21st century, the field of public diplomacy underwent a conceptual shift known as the “new” public diplomacy. This shift saw the goals of public diplomacy change from influence and opinion formation to creating long-lasting relationships with foreign populations. These relationships, built on dialogue and two-way interactions, could be used to create a receptive environment for a country’s foreign policy.

Digitalization promised to facilitate the transition towards the “new” public diplomacy and away from propaganda, as digital platforms could be used to converse and build relationships with foreign publics. Social media were seen as especially beneficial to the “new” public diplomacy as they centre on two-way interactions and dialogue, the building blocks of relationships. By 2012, ministries of foreign affairs (MFAs) throughout the world had migrated to Facebook and Twitter in hopes of realizing the benefits of the “new” public diplomacy.

Yet by 2016 the goals of many diplomatic institutions had reverted to influence. One reason for this reversal of fortune was the growing use of propaganda during the Crimean Crisis of 2014. According to a report by NATO StratCom, the Crimean Crisis saw a dramatic rise in the amount of questionable information published online by various Russian agents, including trolls, state-run media agencies and bots. These created a toxic information environment in which truth was the first and last victim.

Importantly, digital platforms have proven to be incredibly susceptible to propaganda. This blog post offers three explanations for this susceptibility. Yet before doing so, it is important to identify the goal of digital propaganda. Unlike the dictionary definition, and unlike earlier forms of propaganda, digital propaganda does not focus merely on influencing opinions. Rather, its goal is to alter one’s perception of reality or to challenge the very notion that an objective reality exists. Digital publics enter a vortex of lies, half-truths and rumours from which they are meant to emerge baffled, dazed and confused. The more confused they are, the better, as confusion breeds insecurity.

One reason why digital platforms are susceptible to propaganda is their visual nature. Images have always played an important role in modern societies. According to Susan Sontag, images serve an evidentiary purpose. They are used in a court of law, or the court of public opinion, to prove that something did in fact occur or that someone was in fact somewhere. For this reason, seeing is believing. When we are confronted with an image or a video depicting certain events, we are quick to believe that these events really took place. We have been primed to do so through courts of law, news programs and history books that use images as historic proof of events. But in the digital world images are easily manipulated. Any teenager with Photoshop can doctor or fabricate images. These are then carried far and wide by digital platforms, which thrive on visual content. Indeed, studies have found that visual content is the most likely to be shared online.

This is also true of videos. A recent video, developed by researchers in the US, showed former President Obama delivering a statement. The video was completely doctored thanks to an artificial intelligence program that studied Obama’s facial cues, intonation and speech patterns. At the moment, the researchers can create any video they wish, putting whatever words they desire in the mouth of the former President.

The ease with which images and videos can be doctored and spread on digital platforms is one reason why propaganda is flourishing in the digital age. Digital platforms are also susceptible to propaganda because one cannot distinguish between man and machine. Recent years have seen rapid growth in the use of bots by governments. Bots are automated computer programs meant to mimic human behaviour. Their power stems from their ability to flood digital platforms with comments and emotions that shape our perception of the national mood and pulse. For instance, it is estimated that prior to the Brexit referendum thousands of bots flooded social media and news sites with pro-Brexit comments and sentiments. These could have had a real impact on voters, as people by nature wish to belong to the majority. If voters believe that the majority of their network will vote for Brexit, they may also do so. Alternatively, people who thought Brexit had no chance of winning may have been motivated to vote for it after assuming, wrongly, that the public mood had changed.

In the offline world we are still able to distinguish between man and robot. The same cannot be said of the online world.
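To see how easily automated accounts can distort the apparent public mood, consider the rough sketch below. It is purely illustrative: the account numbers, posting rates and sentiment labels are assumptions invented for the example, not data from the Brexit campaign. The sketch simulates a feed in which genuine users post once each while a much smaller group of bots posts repeatedly, and then measures the share of pro-Brexit messages a casual reader would encounter.

```python
import random

# Illustrative assumptions (not real data): 1,000 genuine users, split roughly
# evenly on the question and posting once each, alongside 50 bot accounts that
# each post the same pro-Brexit message 40 times.
random.seed(42)

human_posts = [random.choice(["pro-Brexit", "anti-Brexit"]) for _ in range(1000)]
bot_posts = ["pro-Brexit"] * (50 * 40)  # 50 bots x 40 posts each

feed = human_posts + bot_posts
human_share = human_posts.count("pro-Brexit") / len(human_posts)
feed_share = feed.count("pro-Brexit") / len(feed)

print(f"Share of genuine users who are pro-Brexit: {human_share:.0%}")
print(f"Pro-Brexit share of the visible feed:      {feed_share:.0%}")
```

Even though genuine opinion in this sketch is split roughly down the middle, a reader scrolling the feed would conclude that pro-Brexit sentiment enjoys an overwhelming majority, which is precisely the perception such bots are designed to manufacture.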

Finally, digital platforms are susceptible to propaganda as algorithms can be used to target specific audiences and present them with information that confirms their biases. For instance, during the Crimean Crisis Russian trolls disseminated a barrage of fake news stories on digital platforms. One story alleged that pro-Ukrainian extremists had prevented a doctor from saving people from a burning building. Another alleged that Ukrainian soldiers had built concentration camps in Eastern Ukraine, while yet another claimed that they had crucified and burnt a young boy. These news stories were then disseminated to various networks and were seen by individuals who may have already been susceptible to such viewpoints, including Russian minorities in the Baltic States and Central European states. Similar stories, alleging that white policemen were brutalizing African Americans, and that African Americans were murdering white policemen, were circulated via Facebook ads during the recent US elections. Notably, these ads targeted both African Americans and white conservatives through Facebook’s algorithms. Digital propaganda can thus be tailored to the world view and opinions of specific audiences and delivered to their accounts with a high degree of accuracy.
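The targeting logic itself is mundane. The toy sketch below is a hypothetical illustration, not a description of Facebook’s actual advertising system: the user profiles, topics and matching rule are invented for the example. It shows the basic idea of matching a story’s themes against the interests a platform has already inferred about each user, so that the story reaches only those predisposed to believe it.

```python
from dataclasses import dataclass

@dataclass
class User:
    name: str
    inferred_interests: set  # topics the platform believes the user engages with

# Hypothetical profiles invented for this example.
users = [
    User("user_a", {"policing", "law and order"}),
    User("user_b", {"civil rights", "police reform"}),
    User("user_c", {"gardening", "cooking"}),
]

def target_audience(users, story_topics):
    """Return only the users whose inferred interests overlap the story's topics,
    i.e. those already predisposed to engage with it."""
    return [u for u in users if u.inferred_interests & story_topics]

# A divisive story framed around policing reaches only the two users already
# primed to react to it; the third user never sees it at all.
audience = target_audience(users, {"policing", "police reform"})
print([u.name for u in audience])  # ['user_a', 'user_b']
```

Because everyone else never sees the tailored story, this kind of propaganda is both effective and difficult to observe from the outside.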

What is common to bots, doctored images and fake news stories is that all three impact digital publics’ perception of reality. When diplomats try to counter propaganda by circulating true information, they can inadvertently trigger a competition over reality, the result of which is the belief that no objective reality or truth exists.

Thus everything becomes true, everything becomes false, and all is a matter of opinion.

The susceptibility of digital platforms to propaganda has seen some MFAs and diplomats prioritize information dissemination and information dominance over relationship building. Yet this might be the wrong remedy. Competitions over the truth might actually benefit the aggressor. What diplomats should focus on instead is increasing their digital credibility. Online publics may be more willing to accept an actor’s statements and depictions of events if they view that actor as credible and sincere. Obtaining credibility calls for engaging with digital publics and creating long-lasting relationships with them.

The way to defeat propaganda is actually the way of the “new” public diplomacy.
