26 April 2023, New York, USA – The United Nations Institute for Training and Research (UNITAR), New York Office hosted a course on Artificial Intelligence and Diplomacy on Wednesday, 26 April 2023.

Mr. Pelayo Alvarez, programme manager of the UNITAR New York Office, representing H.E. Ambassador Marco Suazo, opened the event by welcoming the experts and participants who joined the course and introducing its topic: the intersection of two critical areas, AI and diplomacy. As technology advances, it is crucial to understand how it will affect the geopolitical landscape and transform our world, and this course is a first step towards being prepared through international cooperation, multilateralism and ethical AI governance.

After the welcoming remarks, Mr. Alvarez thanked Ms. Larisa Schelkin, UNITAR Global Diplomacy teaching faculty, who made the briefing possible and moderated the course. Ms. Schelkin then introduced the speakers: Eugenio Garcia, Alonso Vera, Matthew Johnson, Jan Marco Müller, Holli Kohl and Peder Nelson.

Dr. Eugenio Garcia started by giving an introduction to AI and diplomacy, covering an overview of basic concepts, the key issues to address in AI diplomacy, the international governance of AI, and the role of the United Nations and multilateralism. He first introduced the concept of diplomacy and its ancient origins, which may be traced back to the first encounters between distinct bands of nomadic Homo sapiens during the Paleolithic period. However, while our ancestors sent envoys to talk to opposing parties, we could soon be confronted with an entirely new kind of entity possessing cognitive skills and machine intelligence. This could cause an anthropological disruption in the decades to come, a change that will also be felt in the diplomatic arena.

Dr. Garcia then tackled the subject of automation versus autonomy, defining the latter as a spectrum. According to him, Artificial Intelligence is at the core of the Fourth Industrial Revolution, also called the “Deep Learning revolution”. He explained that neural networks are at the center of AI and are used in various fields, such as image recognition. However, this technology relies on mathematical calculations and can sometimes make trivial mistakes, which is why it is extremely important to exploit AI’s potential while preventing and mitigating its risks. These risks range from long-term challenges, such as machines developing human-level cognitive abilities, Artificial General Intelligence (AGI) and superintelligence, to near-term concerns, including but not limited to ethical and societal implications, algorithmic bias and the impact on the economy. While experts have developed AI principles to promote the responsible and safe use of AI, AI diplomacy will have to deal with the implementation of those principles, as it will play a central role in negotiating disputes and strategic arrangements.

Dr. Garcia presented some challenges in the global arena, such as growing technological competition and skepticism and mistrust toward the multilateral system. These challenges are among the causes of the lack of normative instruments and policies for AI governance at the international level. The present fragmented landscape calls for coordination among States in order to avoid unintended consequences. Sound AI policymaking would develop responsible AI strategies and a regulatory approach to mitigate risks and avoid strategic uncertainty. He mentioned ongoing initiatives by key actors such as the European Union, the Council of Europe and the OECD, which have the potential to create an environment for international cooperation among different stakeholders.

Finally, Dr. Garcia raised the question of the United Nations’ role in AI governance at the global level. One initiative within the UN system is the “AI for Good” Global Summit, where best practices on beneficial AI and its relation to the SDGs are exchanged. Other initiatives include UNICRI’s Centre for AI and Robotics in The Hague, the High-Level Panel on Digital Cooperation and the current negotiations on the Global Digital Compact. Additionally, in 2021 UNESCO adopted a landmark Recommendation, the first global standard-setting instrument on the ethics of AI, and has been developing a global “Ethical Impact Assessment” for AI technologies. According to Dr. Garcia, the UN could indeed promote international cooperation and AI policymaking, and can offer a multilateral platform for discussions on AI.

Dr. Matthew Johnson and Dr. Alonso Vera then went on to define human intelligence as well as artificial intelligence and how it is used, because, according to Dr. Johnson, AI is heavily misrepresented in the media. Dr. Johnson started by describing humans as adaptive problem-solving engines with general intelligence, and it is this general intelligence that remains the main challenge for experts. Since the 1950s, AI has been divided into two categories: weak AI, associated with pattern recognition, and strong AI, associated with knowledge and rule-based systems. While the latter is progressing very slowly, advances in pattern analysis can trick us into thinking that more cognition is involved in AI than actually is. Dr. Vera went deeper into how AI works, emphasizing that the term “Artificial Intelligence” is too general to be useful in a diplomatic conversation. Machine learning relies on data to teach computers to recognize patterns, and contrary to popular belief, this mechanism is heavily dependent on human beings; according to Dr. Vera, there are therefore no truly autonomous systems. Another misconception concerns replacement: AI will change the nature of people’s work rather than replace them. The goal of building intelligent systems is to create interdependence, so the core issues in human-machine teaming relate to mutual observability, predictability, directability and explainability.

Finally, Dr. Johnson summed up this part by stating that AI is not currently progressing towards general intelligence, which is why collaboration is crucial. The opportunity of AI lies in informing human decision-making, as it helps us understand dynamically changing environments. However, its current danger is that it may make up facts through pattern analysis, since it does not understand the data.

Afterwards, Dr. Jan Marco Müller shared his observations on AI as someone who has worked at the interface of science, technology and diplomacy. He noted that the image of science and technology diplomacy has changed as the world moves towards a multipolar and fragmented geopolitical scene. As policy issues become more systemic and interconnected, AI can help us understand and make sense of these phenomena. Dr. Müller also noted that there is increased pressure on the global commons, which are becoming politicized, commercialized or militarized, alongside technology itself, and that all of this has contributed to a loss of trust between countries.

AI has great potential for diplomacy, for example in rapid damage assessments and predictive tools. However, we should not underestimate the negative aspects, such as the growing difficulty of discerning what is true, which raises the risk of fabricated evidence. Other issues are foreign interference and deep fakes, which could potentially trigger wars. Finally, the biases and mentalities of the people programming AI systems may convey non-democratic values. Dr. Müller raised the question of whether AI will help us overcome inequalities or deepen them. It will also be interesting to observe how AI changes the way we do diplomacy, especially in the social media era. Diplomats need to be trained to understand these issues, and more specifically in how to use AI to support their work and how not to be misled by it.

For the last part of the seminar, Holli Kohl and Peder Nelson talked about the use of AI in the GLOBE Program, an international science and education program sponsored by NASA and supported by other parts of the United States Federal Government. GLOBE collects environmental data and is a powerful tool for diplomacy, science and education. The GLOBE Observer application brought AI into the GLOBE Program in order to allow people to collect environmental data without extensive training. Through the GLOBE Observer protocols, anyone can observe clouds, land cover, tree height and mosquito habitat. Users can upload photos, which go into the database. At first, these photos were screened manually by staff, which took a lot of time and delayed the process; AI was therefore introduced as a more cost-effective solution for safety and privacy screening. It also labels the photos to support quality assurance. The one unintended consequence was that the AI blurred all text in the photos, even when the text was useful.
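The speakers did not go into the technical details of this screening step, but a minimal sketch of such an automated privacy check might look like the following Python example. The use of OpenCV, its bundled face detector and the label dictionary returned at the end are illustrative assumptions for this sketch, not a description of GLOBE’s actual pipeline.

```python
# Illustrative sketch only: an automated privacy screen that blurs detected
# faces in citizen-submitted photos before they enter a database. OpenCV's
# bundled Haar cascade is used as a stand-in detector; GLOBE's real models
# and screening rules were not described in the presentation.
import cv2


def screen_photo(path_in: str, path_out: str) -> dict:
    """Blur privacy-sensitive regions and return labels for quality assurance."""
    image = cv2.imread(path_in)
    if image is None:
        raise ValueError(f"Could not read image: {path_in}")

    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    for (x, y, w, h) in faces:
        # Replace each detected region with a heavy Gaussian blur.
        region = image[y:y + h, x:x + w]
        image[y:y + h, x:x + w] = cv2.GaussianBlur(region, (51, 51), 0)

    cv2.imwrite(path_out, image)
    # Hypothetical labels that could feed the human quality-assurance step
    # the speakers mentioned.
    return {"faces_blurred": len(faces), "needs_review": len(faces) > 0}


if __name__ == "__main__":
    print(screen_photo("upload.jpg", "upload_screened.jpg"))
```

The same idea extends to detecting and blurring text regions, which is what produced the over-blurring of useful text mentioned above.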

Mr. Nelson then talked about what more can be done with AI. According to him, it is important to think about ethical issues such as our intentions and goals in using these machines, and about why we do not delegate everything to AI. Humans bring their own lenses to the data they look at, especially regarding long-term implications, and he argued that we should be mindful of what we do today, as it will influence future generations.

After the presenters answered questions from the participants and delivered closing remarks, Ms. Schelkin and Mr. Alvarez closed the course.

 
