Last week, the Financial Times published an AI-generated image of Presidents Putin and Trump kissing. The headline read “Fakes in the Post-Truth Era”. The term post-truth rose to prominence in 2016, when The Economist used it to describe the impact of social media on politics in general, and American politics in particular. The article argued that a new breed of politicians, who owed their popularity to social media, had taken the art of lying to new heights. These politicians no longer feared being caught telling a lie. In fact, they were proud of their lies and defiantly argued that all politicians lie, but only great leaders admit to lying. In some cases, such as that of Donald Trump, the lie became an integral part of the leader’s political brand. Like a trickster, Trump transformed lying into a three-part spectacle made up of 1) the lie, 2) the reveal, and 3) the defiant response. Politicians’ defiant lies won them the adoration of many social media users who felt they had finally found honest politicians: honest, at least, about lying at every turn.
Since then, the term post-truth has taken on added meaning and has been used to describe how social media generates an endless number of truths as content is tailored to each user’s preferences. Locked within their algorithmic filter bubbles, different users encounter different truths. According to the social media feed of one user, the truth was that Russia had not invaded Crimea. According to the feed of another user, the truth was that unmarked mercenaries had invaded Crimea, while according to the feed of yet another, the truth was that Russia had mounted a stealth invasion of Crimea. The phenomenon of endless truths was soon associated with political fragmentation and political extremism. In a world of many truths there was no longer any truth. What soon followed was anxiety and a desire for radical leaders who could re-create the analog past and re-establish a world with one truth: a world of black and white, true and false, ally and foe.
Diplomacy in the post-truth era meant negating the truths spread by some actors while creating appealing narratives to support one’s own truth. Combating disinformation rested on debunking the assertions of some states and discrediting some spokespersons while enhancing the credibility of others. Diplomats suddenly became the arbiters of truth. Tweets often read “Daesh lies exposed” or “10 facts about NATO’s border with Russia”. Concepts such as debunking and pre-bunking became popular among digital diplomacy departments as diplomats sought to contend with the growing number of truths spread by state and non-state actors.
In many instances, visuals were used to “prove” the truth. NATO satellite images were used to “prove” that Russian troops had crossed over into Ukraine. Images of a bombed Aleppo were used to “prove” that the Assad regime was murdering its citizens. Black and white images were used to “prove” that Daesh was making a fortune from selling alcohol and drugs.
The rise of Generative AI, and visual AI in particular, ushers in another era, marked not by multiple truths but by multiple realities. The difference is not mere semantics. Generative AI and visual AI can be used to create highly believable alternate realities. The difference between post-truth and post-reality is that the tools once used to “prove” facts are now used to “prove” falsities. Such is the case with images, videos, and official documents, all of which can be easily doctored. Post-reality is far more encompassing than post-truth. Truth is derived from reality, while reality exists independently. Put differently, the reality in 2014 was that armed individuals had invaded Crimea. Several truths were derived from this reality. In one truth, the armed individuals were Russian. In another truth, the armed individuals were not Russian. Yet in both truths the reality was one and the same.
In post-reality we enter an age of endless realities. In one reality, armed individuals have invaded Crimea. In another reality, Crimea is free and daily life goes on as normal. This reality is well documented: images of Crimeans going about daily life are shared across multiple media; videos of Ukraine’s President Zelenskyy assuring the world that no one invaded Ukraine can easily be found; and CIA satellite images, shared online, “prove” that no Russian forces have entered Ukraine. Each of these realities can then serve as the basis of many truths. Post-reality is thus a force multiplier. If there are a hundred realities, there can be a thousand truths, as truths are derived from reality. If there are a million realities, there can be ten million truths. Post-reality scales up the phenomenon of post-truth and creates a world where nothing can be agreed upon. Indeed, even at the height of the Crimean Crisis, Russian and American diplomats agreed, behind closed doors, that there were armed individuals in Crimea. The question was who sent them there. But in the world of post-reality, people, including diplomats, will not even be able to agree that there are armed men in Crimea, that armed men exist at all, or that Crimea exists.
Some maintain that this era of post-reality will mostly be driven by deepfakes: highly believable images and videos in which anything can be depicted visually or put in the mouths of individuals. A video of President Biden resigning from office, or admitting to being a Russian sleeper agent, can be created and shared within hours across the globe. Yet textual AI will also play a key role in this era. ChatGPT and the like can be used to create false memos, false emails and even false battle plans documenting Ukrainian plans to attack Russia with chemical weapons. ChatGPT could also be used to create readouts of conversations between leaders, or diplomatic notes that were never written. In this era no image, video or document may be trusted. All may be fake. All may be true. And there will be no arbiter, as no arbiter can “disprove” a reality or its subsequent truths. For every image of armed individuals in Crimea there will be thousands of images of an average day in Crimea, and for every video of Daesh’s cruelty there will be a thousand videos of its benevolence and caring.
The question is how diplomacy will be able to function in such a world. In a world devoid of reality there is nothing that diplomats can agree on, and if diplomats cannot agree, they cannot act in unison. Moreover, even if diplomats do come to agree on a shared reality, their publics will not necessarily accept that reality, leading to diminished trust in diplomats and heavy opposition to their actions. In this sense, the age of post-reality is also the age of post-diplomacy as we know it.
Since the end of WW2, the international diplomatic arena has served as a forum in which diplomats can reach a shared definition of reality and act to shape that reality. For instance, the UN Security Council may be used to create a shared definition of reality in which nuclear proliferation is a global threat. Diplomats can then formulate accords and sign agreements to manage this risk. Yet such forums may be paralyzed in the age of post-reality.
In this sense, Generative AI and visual AI may spell the end of diplomacy.
Mitigating the advent and risks of post-reality may be possible through regulation and international accords on the use and development of AI. Specifically, states will need to create a normative framework in which they themselves refrain from using Generative AI and visual AI to disseminate false yet believable realities. While such a normative framework seems far-fetched, as it would have to include Russia and the US, China and the UK, South Korea and North Korea, it may yet be possible given that post-reality threatens to undermine the diplomacy of all states and all state actors.
For more on this issue see a recent academic chapter here.