Opinion – ChatGPT and the Threat to Diplomacy

Historically, British Prime Ministers have taken a dim view of diplomats. Lloyd George stated that diplomats were invented simply to waste time. Edward Heath defined a diplomat as a man who thinks twice before saying nothing. The digital revolution threatened to make diplomats not only mute but obsolete. Scholars writing in the early 2000s echoed the sentiment of British Prime Ministers, arguing that foreign ministries were change-resistant institutions burdened by rigid working routines and centuries-old protocol. Diplomats, they argued, lacked the ability to adapt to new digital surroundings. Yet time has proven these critics wrong, and today diplomats can best be described as digital innovators.

Over the past decade, diplomats have adopted a host of digital technologies: launching embassies in virtual worlds, creating social media empires, designing consular smartphone applications, and even embracing the digital ethos of transparency by live-tweeting debates in diplomatic forums. Some foreign ministries have proven especially tech-savvy, such as the British Foreign Office, which created a big data unit, or the Israeli foreign ministry, which authored its own algorithm to combat hate speech online.

Through trial and error, digital experimentation and the occasional faux pas, diplomats have migrated safely to the 21st century. However, diplomacy now faces an unfamiliar digital challenge: that of ChatGPT. Launched in November 2022, this generative AI chatbot has been the subject of intense media coverage and debate. As ChatGPT passed entrance exams for prestigious law and medical schools, educators warned of its use to write university essays or academic papers. Lawyers have warned of legal petitions generated within minutes, while legislators have expressed concern over laws written by AI systems. Few news reports, however, have focused on ChatGPT's impact on diplomacy, an important omission given that ChatGPT is assuming the role of an information gatekeeper, much like Google.

ChatGPT may facilitate the work of diplomats. Ambassadors could use it to quickly generate press releases or UN addresses. Digital diplomacy departments may save time by automating the generation of tweets. Foreign ministries may even use chatbots to automate the provision of consular aid. More importantly, diplomats could leverage ChatGPT's ability to analyze vast amounts of data to prepare for negotiations. Before meeting with Russian counterparts, a Western diplomat could use ChatGPT to summarize all Russian statements regarding the future of the Donbas. An EU diplomat could use ChatGPT to synthesize all British press statements relating to Brexit in preparation for a new round of talks.

Yet ChatGPT also presents challenges for diplomacy. The first is the use of ChatGPT to learn about world events. Users can ask ChatGPT questions about nations, actors and events. Yet biases in ChatGPT's answers, and its generation of misleading or even false information, could create alternate realities for these users. One example is a ChatGPT-generated reality in which Russia did not bomb Syria, or a reality in which the EU is on the brink of dissolution, or a reality in which Covid-19 was engineered by George Soros.

Although the challenge of disinformation has existed for some time, ChatGPT compounds this issue because of its positive depiction in the media, because it is "smart" enough to pass medical exams, and because of the growing mystification of AI. In recent months, Gordon Gekko's famous phrase "Greed is good" has been replaced with the phrase "AI is good". But of course, AI can be wrong, biased, and even misleading.

The greater the gap between reality and ChatGPT-generated realities, the more people will struggle to make sense of the world, leading to feelings of estrangement and resentment, the very feelings that populist leaders thrive on. These are the same leaders who, once elected, abandon diplomatic forums and label diplomats as ineffective, unwanted, and immoral. ChatGPT thus threatens the legitimacy of diplomats and diplomatic institutions.

Second, ChatGPT can be used to create false historical documents that serve as the basis for viral conspiracy theories. ChatGPT won't generate the transcript of a phone conversation between Churchill and Roosevelt discussing the possible bombing of Nazi concentration camps. But it will generate a fictional account of such a conversation. ChatGPT won't write a speech by Hitler, but it is more than willing to generate a 1942 radio address by Goebbels praising the glory of the Third Reich, or a 1945 address decrying National Socialism's untimely demise. With its sophisticated vocabulary and feel for historical nuance, ChatGPT generates the kind of content that is eagerly shared online and that further drives political extremism. Political extremism is the undoing of diplomacy, as publics increasingly reject any form of compromise.

The third challenge lies in ChatGPT's depiction of different countries. Ask ChatGPT to list ten negative things about visiting France and it will mention long lines at tourist attractions, language barriers and high prices. Ask it to list ten negative things about visiting Nigeria and it will mention violence, pollution, political instability and corruption. As such, ChatGPT may perpetuate stereotypes and sustain inequalities between the Global North and the Global South. Reducing such inequalities has long been a diplomatic priority for both the EU and nations in the Global South.

ChatGPT can also impact a nation's reputation or threaten to undo years of diplomatic work. Poland, for instance, has dedicated digital resources to distancing itself from the atrocities of WW2. But according to ChatGPT, Poland has a legal and moral obligation to pay reparations to Jews who lost property in WW2. ChatGPT is also quite adamant that repeatedly criticizing Israel is not a form of anti-Semitism. This may be a valid opinion, yet it is one that the Israeli MFA has consistently fought against in elaborate social media campaigns.

As generative AI chatbots become fixtures of daily life, diplomats must experiment with these tools, identify potential risks, and then work with AI companies to mitigate such risks. Failure to do so will merely replicate the huge challenge diplomats still face when trying to regulate or reform social media platforms. Time is of the essence.
