A new public database lists all the ways AI could go wrong
Adopting AI can be fraught with danger. Systems could be biased, or parrot falsehoods, or even become addictive. And that's before you consider the possibility that AI could be used to create new biological or chemical weapons, or might one day somehow spin out of our control.
To manage these potential risks, we first need to know what they are. A new database compiled by the FutureTech group at MIT's CSAIL with a team of collaborators and published online today could help. The AI Risk Repository documents over 700 potential risks advanced AI systems could pose. It's the most comprehensive source yet of information about previously identified issues that could arise from the creation and deployment of these models.
The team combed through peer-reviewed journal articles and preprint databases that detail AI risks. The most common risks centered on AI system safety and robustness (76%), unfair bias and discrimination (63%), and compromised privacy (61%). Less common risks tended to be more esoteric, such as the risk of creating AI with the ability to feel pain or to experience something akin to "death."
The database also shows that the majority of risks from AI are identified only after a model becomes accessible to the public. Just 10% of the risks studied were spotted before deployment.
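The repository itself is published online as a downloadable spreadsheet, so readers can run their own tallies on it. Below is a minimal sketch of that kind of analysis, assuming a CSV export; the file name and the "domain" and "timing" columns are hypothetical stand-ins for illustration, not the repository's actual schema.

```python
# A minimal sketch (not the repository's real schema): tally risks by
# domain and by when they were identified, from a hypothetical CSV
# export with assumed "domain" and "timing" columns.
import csv
from collections import Counter

domains, timings = Counter(), Counter()

with open("ai_risk_repository.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        domains[row["domain"]] += 1   # e.g. "Privacy & security"
        timings[row["timing"]] += 1   # e.g. "pre-deployment"

for domain, count in domains.most_common():
    print(f"{domain}: {count}")

total = sum(timings.values())
if total:
    pre = timings.get("pre-deployment", 0)
    print(f"Identified before deployment: {pre / total:.0%}")
```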
These findings may have implications for how we evaluate AI, as we currently tend to focus on ensuring a model is safe before it is launched. "What our database is saying is, the range of risks is substantial, not all of which can be checked ahead of time," says Neil Thompson, director of MIT FutureTech and one of the creators of the database. Auditors, policymakers, and scientists at labs may therefore want to monitor models after launch, regularly reviewing the risks they present.
There have been many attempts to put together a list like this before, but earlier efforts were concerned primarily with a narrow set of potential harms arising from AI, says Thompson, and that piecemeal approach made it hard to get a comprehensive view of the risks.
Even with this new database, it's hard to know which AI risks to worry about the most, a task made even more complicated because we don't fully understand how cutting-edge AI systems even work.
The database's creators sidestepped that question, choosing not to rank risks by the level of danger they pose.
"What we really wanted to do was to have a neutral and comprehensive database, and by neutral, I mean to take everything as presented and be very transparent about that," says the database's lead author, Peter Slattery, a postdoctoral associate at MIT FutureTech.
But that tactic could limit the database's usefulness, says Anka Reuel, a PhD student in computer science at Stanford University and a member of its Center for AI Safety, who was not involved in the project. She says merely compiling risks associated with AI will soon be insufficient. "They've been very thorough, which is a good starting point for future research efforts, but I think we are reaching a point where making people aware of all the risks is not the main problem anymore," she says. "To me, it's translating those risks. What do we actually need to do to combat [them]?"
This database opens the door for future research. Its creators made the list in part to dig into their own questions, like which risks are under-researched or not being tackled. "What we're most worried about is, are there gaps?" says Thompson.
"We intend this to be a living database, the start of something. We're very keen to get feedback on this," Slattery says. "We haven't put this out saying, 'We've really figured it out, and everything we've done is going to be perfect.'"