
Open Source Is Throwing AI Policymakers For A Loop

by Ned Potter, IEEE Spectrum

Depending on whom you ask, artificial intelligence may someday rank with fire and the printing press as technology that shaped human history. The jobs AI does today (carrying out our spoken commands, curing disease, approving loans, recommending who gets a long prison sentence, and so on) are nothing compared to what it might do in the future.

But who is drawing the roadmap? Who's making sure AI technologies are used ethically and for the greater good? Big tech companies? Governments? Academic researchers? Young upstart developers? Governing AI has gotten more and more complicated, in part because hidden in the AI revolution is a second one: the rise of open-source AI software, code that any computer programmer with fairly basic knowledge can freely access, use, share, and change without restriction. With more programmers in the mix, the open-source revolution has sped AI development substantially. According to one study, in fact, 50 to 70 percent of academic papers on machine learning rely on open source.

And according to that study, from The Brookings Institution, policymakers have barely noticed.

"Open-source software quietly affects nearly every issue in AI policy, but it is largely absent from discussions around AI policy," writes Alex Engler, a fellow in governance studies at Brookings and the author of the report.

A few major examples: The European Union's newly proposed Artificial Intelligence Act makes no mention of open source. In the United States, the Obama and Trump administrations gave it only passing attention in their AI strategies. (The Biden administration is just getting started.)

"In many of the meetings I've been in, the role of open-source code functionally never comes up," says Engler in an interview. "It deserves more routine consideration as part of the broader issues that we all care about."

At its heart, open-source software should be a good thing: the common belief is that with more developers involved, they will improve on one another's work. AI development is dominated by a small number of technology giants (Google, Facebook, Amazon, Apple, Baidu, Microsoft, and so forth), but machine-learning libraries such as Google's TensorFlow and Facebook's PyTorch are there for anyone to use.

"There are web development libraries that are competing against each other," says Engler, "and so that often means that the code is much, much, much, much better than any individual person could write."

The problem, Engler says, is that while developers are familiar with these libraries, most non-engineers, including many policymakers whose job it is to protect the public interest, are not. And people are being affected in ways that they may not even recognize.


Engler cites the problem of hiring discrimination by machines. AI bias has been widely documented (recall Google Photos famously labeling black people as "gorillas" in 2015, or OpenAI persistently linking Muslims with violence), but for all the transparency promised by open source, most people may have no idea when they're victims.

"You might send in a resume, and it might go through an AI system that's discriminatory, it might reject you-and you'll never know that happened," says Engler. "If you don't know you were discriminated against, if you don't know you were evaluated by an algorithm, you can't even tell the EEOC [the U.S. Equal Employment Opportunity Commission]. And in fact, the EEOC keeps saying we're not getting complaints."

Remember also that since open source is based on a faith in the wisdom of crowds, any one member of the crowd, any developer, can change a piece of code without appreciating the possible consequences. The ideal of open source is that many contributors will catch each other's mistakes and biases, but they may also introduce new biases to a piece of software.

That's a worry expressed by Melanie Moses, a professor of computer science at the University of New Mexico and the Santa Fe Institute, who has done considerable work on the growing role of AI in the criminal justice system. Algorithms have been used to decide whether a suspect in a crime can be trusted not to jump bail, or whether a convicted criminal is at risk of reoffending if sentenced only to probation.

"If software is solidifying, let's say, racial bias in sentencing," she says, "and every time it operates it puts more young black men in jail, and then having been in jail before makes them more likely to be put in jail again-that's a dangerous positive feedback."

Which brings us back to the policymakers who, in Engler's view, need to pay more attention to the ways in which open source is shaping the future of AI.

"One of the scary parts of open-source AI is how intensely easy it is to use," he says. "The barrier is so low... that almost anyone who has a programming background can figure out how to do it, even if they don't understand, really, what they're doing."

Perhaps, says Moses in New Mexico, AI doesn't simply need heavy-handed lawmakers or regulators; standards organizations could recommend better practices. But there needs to be something in place as the pace of AI development increases. If an open-source algorithm is flawed, it is harder to undo the damage than if the software came from one proprietary (and accountable) company.

"The software is out there, it's been copied, it's in multiple places, and there's no mechanism to stop using something that's known to be biased," she says. "You can't put the genie back in the bottle."
