
6 Reactions to the White House’s AI Bill of Rights

by
Eliza Strickland
from IEEE Spectrum

Last week, the White House put forth its Blueprint for an AI Bill of Rights. It's not what you might think: it doesn't give artificial-intelligence systems the right to free speech (thank goodness) or to carry arms (double thank goodness), nor does it bestow any other rights upon AI entities.

Instead, it's a nonbinding framework for the rights that we old-fashioned human beings should have in relationship to AI systems. The White House's move is part of a global push to establish regulations to govern AI. Automated decision-making systems are playing increasingly large roles in such fraught areas as screening job applicants, approving people for government benefits, and determining medical treatments, and harmful biases in these systems can lead to unfair and discriminatory outcomes.

The United States is not the first mover in this space. The European Union has been very active in proposing and honing regulations, with its massive AI Act grinding slowly through the necessary committees. And just a few weeks ago, the European Commission adopted a separate proposal on AI liability that would make it easier for victims of AI-related damage to get compensation. China also has several initiatives relating to AI governance, though the rules issued apply only to industry, not to government entities.

"Although this blueprint does not have the force of law, the choice of language and framing clearly positions it as a framework for understanding AI governance broadly as a civil-rights issue, one that deserves new and expanded protections under American law."
-Janet Haven, Data & Society Research Institute

But back to the Blueprint. The White House Office of Science and Technology Policy (OSTP) first proposed such a bill of rights a year ago, and has been taking comments and refining the idea ever since. Its five pillars are:

  1. The right to protection from unsafe or ineffective systems, which discusses predeployment testing for risks and the mitigation of any harms, including "the possibility of not deploying the system or removing a system from use";
  2. The right to protection from algorithmic discrimination;
  3. The right to data privacy, which says that people should have control over how data about them is used, and adds that "surveillance technologies should be subject to heightened oversight";
  4. The right to notice and explanation, which stresses the need for transparency about how AI systems reach their decisions; and
  5. The right to human alternatives, consideration, and fallback, which would give people the ability to opt out and/or seek help from a human to redress problems.

For more context on this big move from the White House, IEEE Spectrum rounded up six reactions to the AI Bill of Rights from experts on AI policy.

The Center for Security and Emerging Technology, at Georgetown University, notes in its AI policy newsletter that the blueprint is accompanied by a "technical companion" that offers specific steps that industry, communities, and governments can take to put these principles into action. Which is nice, as far as it goes:

But, as the document acknowledges, the blueprint is a non-binding white paper and does not affect any existing policies, their interpretation, or their implementation. When OSTP officials announced plans to develop a "bill of rights for an AI-powered world" last year, they said enforcement options could include "restrictions on federal and contractor use of noncompliant technologies and other laws and regulations to fill gaps." Whether the White House plans to pursue those options is unclear, but affixing "Blueprint" to the "AI Bill of Rights" seems to indicate a narrowing of ambition from the original proposal.

"Americans do not need a new set of laws, regulations, or guidelines focused exclusively on protecting their civil liberties from algorithms.... Existing laws that protect Americans from discrimination and unlawful surveillance apply equally to digital and non-digital risks."
-Daniel Castro, Center for Data Innovation

Janet Haven, executive director of the Data & Society Research Institute, stresses in a Medium post that the blueprint breaks ground by framing AI regulations as a civil-rights issue:

The Blueprint for an AI Bill of Rights is as advertised: it's an outline, articulating a set of principles and their potential applications for approaching the challenge of governing AI through a rights-based framework. This differs from many other approaches to AI governance that use a lens of trust, safety, ethics, responsibility, or other more interpretive frameworks. A rights-based approach is rooted in deeply held American values (equity, opportunity, and self-determination) and longstanding law....

While American law and policy have historically focused on protections for individuals, largely ignoring group harms, the blueprint's authors note that "the magnitude of the impacts of data-driven automated systems may be most readily visible at the community level." The blueprint asserts that communities (defined in broad and inclusive terms, from neighborhoods to social networks to Indigenous groups) have the right to protection and redress against harms to the same extent that individuals do.

The blueprint breaks further ground by making that claim through the lens of algorithmic discrimination, and a call, in the language of American civil-rights law, for "freedom from" this new type of attack on fundamental American rights. Although this blueprint does not have the force of law, the choice of language and framing clearly positions it as a framework for understanding AI governance broadly as a civil-rights issue, one that deserves new and expanded protections under American law.

At the Center for Data Innovation, director Daniel Castro issued a press release with a very different take. He worries about the impact that potential new regulations would have on industry:

The AI Bill of Rights is an insult to both AI and the Bill of Rights. Americans do not need a new set of laws, regulations, or guidelines focused exclusively on protecting their civil liberties from algorithms. Using AI does not give businesses a "get out of jail free" card. Existing laws that protect Americans from discrimination and unlawful surveillance apply equally to digital and non-digital risks. Indeed, the Fourth Amendment serves as an enduring guarantee of Americans' constitutional protection from unreasonable intrusion by the government.

Unfortunately, the AI Bill of Rights vilifies digital technologies like AI as "among the great challenges posed to democracy." Not only do these claims vastly overstate the potential risks, but they also make it harder for the United States to compete against China in the global race for AI advantage. What recent college graduates would want to pursue a career building technology that the highest officials in the nation have labeled dangerous, biased, and ineffective?

"What I would like to see in addition to the Bill of Rights are executive actions and more congressional hearings and legislation to address the rapidly escalating challenges of AI as identified in the Bill of Rights."
-Russell Wald, Stanford Institute for Human-Centered Artificial Intelligence

The executive director of the Surveillance Technology Oversight Project (S.T.O.P.), Albert Fox Cahn, doesn't like the blueprint either, but for opposite reasons. S.T.O.P.'s press release says the organization wants new regulations and wants them right now:

Developed by the White House Office of Science and Technology Policy (OSTP), the blueprint proposes that all AI will be built with consideration for the preservation of civil rights and democratic values, but endorses use of artificial intelligence for law-enforcement surveillance. The civil-rights group expressed concern that the blueprint normalizes biased surveillance and will accelerate algorithmic discrimination.

"We don't need a blueprint, we need bans," said Surveillance Technology Oversight Project executive director Albert Fox Cahn. "When police and companies are rolling out new and destructive forms of AI every day, we need to push pause across the board on the most invasive technologies. While the White House does take aim at some of the worst offenders, they do far too little to address the everyday threats of AI, particularly in police hands."

Another very active AI oversight organization, the Algorithmic Justice League, takes a more positive view in a Twitter thread:

Today's #WhiteHouse announcement of the Blueprint for an AI Bill of Rights from the @WHOSTP is an encouraging step in the right direction in the fight toward algorithmic justice.... As we saw in the Emmy-nominated documentary "@CodedBias," algorithmic discrimination further exacerbates consequences for the excoded, those who experience #AlgorithmicHarms. No one is immune from being excoded. All people need to be clear of their rights against such technology. This announcement is a step that many community members and civil-society organizations have been pushing for over the past several years. Although this Blueprint does not give us everything we have been advocating for, it is a road map that should be leveraged for greater consent and equity. Crucially, it also provides a directive and obligation to reverse course when necessary in order to prevent AI harms.

Finally, Spectrum reached out to Russell Wald, director of policy for the Stanford Institute for Human-Centered Artificial Intelligence, for his perspective. Turns out, he's a little frustrated:

While the Blueprint for an AI Bill of Rights is helpful in highlighting real-world harms automated systems can cause, and how specific communities are disproportionately affected, it lacks teeth or any details on enforcement. The document specifically states it is "non-binding and does not constitute U.S. government policy." If the U.S. government has identified legitimate problems, what are they doing to correct it? From what I can tell, not enough.

One unique challenge when it comes to AI policy is when the aspiration doesn't fall in line with the practical. For example, the Bill of Rights states, "You should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter." When the Department of Veterans Affairs can take up to three to five years to adjudicate a claim for veteran benefits, are you really giving people an opportunity to opt out if a robust and responsible automated system can give them an answer in a couple of months?

What I would like to see in addition to the Bill of Rights are executive actions and more congressional hearings and legislation to address the rapidly escalating challenges of AI as identified in the Bill of Rights.

It's worth noting that there have been legislative efforts on the federal level: most notably, the 2022 Algorithmic Accountability Act, which was introduced in Congress last February. It proceeded to go nowhere.
