AI Missteps Could Unravel Global Peace and Security
This is a guest post. The views expressed here are solely those of the authors and do not represent positions of IEEE Spectrum, The Institute, or IEEE.
Many in the civilian artificial intelligence community don't seem to realize that today's AI innovations could have serious consequences for international peace and security. Yet AI practitioners, whether researchers, engineers, product developers, or industry managers, can play critical roles in mitigating risks through the decisions they make throughout the life cycle of AI technologies.
There are many ways in which civilian advances in AI could threaten peace and security. Some are direct, such as the use of AI-powered chatbots to create disinformation for political-influence operations. Large language models can also be used to create code for cyberattacks and to facilitate the development and production of biological weapons.
Other ways are more indirect. AI companies' decisions about whether to make their software open source, and under which conditions, for example, have geopolitical implications. Such decisions determine how states or nonstate actors access critical technology, which they might use to develop military AI applications, potentially including autonomous weapons systems.
AI companies and researchers must become more aware of the challenges, and of their capacity to do something about them.
Change needs to start with AI practitioners' education and career development. Technically, there are many options in the responsible innovation toolbox that AI researchers could use to identify and mitigate the risks their work presents. They must be given opportunities to learn about such options, including IEEE 7010: Recommended Practice for Assessing the Impact of Autonomous and Intelligent Systems on Human Well-being, IEEE 7007-2021: Ontological Standard for Ethically Driven Robotics and Automation Systems, and the National Institute of Standards and Technology's AI Risk Management Framework.
What Needs to Change in AI Education
Responsible AI requires a spectrum of capabilities that are typically not covered in AI education. AI should no longer be treated as a pure STEM discipline but rather as a transdisciplinary one that requires technical knowledge, yes, but also insights from the social sciences and humanities. There should be mandatory courses on the societal impact of technology and responsible innovation, as well as specific training on AI ethics and governance.
Those subjects should be part of the core curriculum at both the undergraduate and graduate levels at all universities that offer AI degrees.
If education programs provide foundational knowledge about the societal impact of technology and the way technology governance works, AI practitioners will be empowered to innovate responsibly and be meaningful designers and implementers of AI regulations.
Changing the AI education curriculum is no small task. In some countries, modifications to university curricula require approval at the ministry level. Proposed changes can be met with internal resistance due to cultural, bureaucratic, or financial reasons. Meanwhile, the existing instructors' expertise in the new topics might be limited.
A growing number of universities, however, now offer the topics as electives, including Harvard, New York University, Sorbonne University, Umeå University, and the University of Helsinki.
There's no need for a one-size-fits-all teaching model, but there's certainly a need for funding to hire dedicated staff members and train them.
Adding Responsible AI to Lifelong Learning
The AI community must develop continuing education courses on the societal impact of AI research so that practitioners can keep learning about such topics throughout their careers.
AI is bound to evolve in unexpected ways. Identifying and mitigating its risks will require ongoing discussions involving not only researchers and developers but also people who might directly or indirectly be impacted by its use. A well-rounded continuing education program would draw insights from all stakeholders.
Some universities and private companies already have ethical review boards and policy teams that assess the impact of AI tools. Although the teams' mandate usually does not include training, their duties could be expanded to make courses available to everyone within the organization. Training on responsible AI research shouldn't be a matter of individual interest; it should be encouraged.
Organizations such as IEEE and the Association for Computing Machinery could play important roles in establishing continuing education courses because they're well placed to pool information and facilitate dialogue, which could result in the establishment of ethical norms.
Engaging With the Wider World
We also need AI practitioners to share knowledge and ignite discussions about potential risks beyond the bounds of the AI research community.
Fortunately, there are already numerous groups on social media that actively debate AI risks, including the misuse of civilian technology by state and nonstate actors. There are also niche organizations focused on responsible AI that look at the geopolitical and security implications of AI research and innovation. They include the AI Now Institute, the Centre for the Governance of AI, Data & Society, the Distributed AI Research Institute, the Montreal AI Ethics Institute, and the Partnership on AI.
Those communities, however, are currently too small and not sufficiently diverse, as their most prominent members typically share similar backgrounds. Their lack of diversity could lead the groups to ignore risks that affect underrepresented populations.
What's more, AI practitioners might need help and tutelage in how to engage with people outside the AI research community, especially with policymakers. Articulating problems or recommendations in ways that nontechnical individuals can understand is a necessary skill.
We must find ways to grow the existing communities, make them more diverse and inclusive, and make them better at engaging with the rest of society. Large professional organizations such as IEEE and ACM could help, perhaps by creating dedicated working groups of experts or setting up tracks at AI conferences.
Universities and the private sector also can help by creating or expanding positions and departments focused on AI's societal impact and AI governance. Umeå University recently created an AI Policy Lab to address the issues. Companies including Anthropic, Google, Meta, and OpenAI have established divisions or units dedicated to such topics.
There are growing movements around the world to regulate AI. Recent developments include the creation of the U.N. High-Level Advisory Body on Artificial Intelligence and the Global Commission on Responsible Artificial Intelligence in the Military Domain. The G7 leaders issued a statement on the Hiroshima AI process, and the British government hosted the first AI Safety Summit last year.
The central question before regulators is whether AI researchers and companies can be trusted to develop the technology responsibly.
In our view, one of the most effective and sustainable ways to ensure that AI developers take responsibility for the risks is to invest in education. Practitioners of today and tomorrow must have the basic knowledge and means to address the risks stemming from their work if they are to be effective designers and implementers of future AI regulations.
Authors' note: The authors are listed by level of contribution. The authors were brought together by an initiative of the U.N. Office for Disarmament Affairs and the Stockholm International Peace Research Institute launched with the support of a European Union initiative on Responsible Innovation in AI for International Peace and Security.