The Department of Defense is issuing AI ethics guidelines for tech contractors
In 2018, when Google employees found out about their company's involvement in Project Maven, a controversial US military effort to develop AI to analyze surveillance video, they weren't happy. Thousands protested. "We believe that Google should not be in the business of war," they wrote in a letter to the company's leadership. Around a dozen employees resigned. Google did not renew the contract in 2019.
Project Maven still exists, and other tech companies, including Amazon and Microsoft, have since taken Google's place. Yet the US Department of Defense knows it has a trust problem. That's something it must tackle to maintain access to the latest technology, especially AI, which will require partnering with Big Tech and other nonmilitary organizations.
In a bid to promote transparency, the Defense Innovation Unit, which awards DoD contracts to companies, has released what it calls "responsible artificial intelligence" guidelines that it will require third-party developers to use when building AI for the military, whether that AI is for an HR system or target recognition.
The guidelines provide a step-by-step process for companies to follow during planning, development, and deployment. They include procedures for identifying who might use the technology, who might be harmed by it, what those harms might be, and how they might be avoided, both before the system is built and once it is up and running.
"There are no other guidelines that exist, either within the DoD or, frankly, the United States government, that go into this level of detail," says Bryce Goodman at the Defense Innovation Unit, who coauthored the guidelines.
The work could change how the US government develops AI if other departments adopt or adapt the DoD's guidelines. Goodman says he and his colleagues have given them to NOAA and the Department of Transportation and are talking to ethics groups within the Department of Justice, the General Services Administration, and the IRS.
The purpose of the guidelines is to make sure that tech contractors stick to the DoD's existing ethical principles for AI, says Goodman. The DoD announced these principles last year, following a two-year study commissioned by the Defense Innovation Board, an advisory panel of leading technology researchers and businesspeople set up in 2016 to bring the spark of Silicon Valley to the US military. The board was chaired by former Google CEO Eric Schmidt until September 2020, and its current members include Daniela Rus, the director of MIT's Computer Science and Artificial Intelligence Lab.
Yet some critics question whether the work promises any meaningful reform.
During the study, the board consulted a range of experts, including vocal critics of the military's use of AI, such as members of the Campaign to Stop Killer Robots and Meredith Whittaker, a former Google researcher who helped organize the Project Maven protests.
Whittaker, who is now faculty director at New York University's AI Now Institute, was not available for comment. But according to Courtney Holsworth, a spokesperson for the institute, she attended one meeting, where she argued with senior members of the board, including Schmidt, about the direction it was taking. "She was never meaningfully consulted," says Holsworth. "Claiming that she was could be read as a form of ethics-washing, in which the presence of dissenting voices during a small part of a long process is used to claim that a given outcome has broad buy-in from relevant stakeholders."
If the DoD does not have broad buy-in, can its guidelines still help to build trust? "There are going to be people who will never be satisfied by any set of ethics guidelines that the DoD produces because they find the idea paradoxical," says Goodman. "It's important to be realistic about what guidelines can and can't do."
For example, the guidelines say nothing about the use of lethal autonomous weapons, a technology that some campaigners argue should be banned. But Goodman points out that regulations governing such tech are decided higher up the chain. The aim of the guidelines is to make it easier to build AI that meets those regulations. And part of that process is to make explicit any concerns that third-party developers have. "A valid application of these guidelines is to decide not to pursue a particular system," says Jared Dunnmon at the DIU, who coauthored them. "You can decide it's not a good idea."
Margaret Mitchell, an AI researcher at Hugging Face, who co-led Google's Ethical AI team with Timnit Gebru before both were forced out of the company, agrees that ethics guidelines can help make a project more transparent for those working on it, at least in theory. Mitchell had a front-row seat during the protests at Google. One of the main criticisms employees had was that the company was handing over powerful tech to the military with no guardrails, she says: "People ended up leaving specifically because of the lack of any sort of clear guidelines or transparency."
For Mitchell, the issues are not clear-cut. "I think some people in Google definitely felt that all work with the military is bad," she says. "I'm not one of those people." She has been talking to the DoD about how it can partner with companies in a way that upholds their ethical principles.
She thinks there's some way to go before the DoD gets the trust it needs. One problem is that some of the wording in the guidelines is open to interpretation. For example, they state: "The department will take deliberate steps to minimize unintended bias in AI capabilities." What about intended bias? That might seem like nitpicking, but differences in interpretation depend on this kind of detail.
Monitoring the use of military technology is hard because it typically requires security clearance. To address this, Mitchell would like to see DoD contracts provide for independent auditors with the necessary clearance, who can reassure companies that the guidelines really are being followed. "Employees need some guarantee that guidelines are being interpreted as they expect," she says.