
Why everyone is mad about New York’s AI hiring law

by Tate Ryan-Mosley, MIT Technology Review

This article is from The Technocrat, MIT Technology Review's weekly tech policy newsletter about power, politics, and Silicon Valley. To receive it in your inbox every Friday, sign up here.

Last week, a law about AI and hiring went into effect in New York City, and everyone is up in arms about it. It's one of the first AI laws in the country, so the way it plays out will offer clues about how AI policy and debate might take shape in other cities. AI hiring regulation is part of the AI Act in Europe, and other US states are considering bills similar to New York's.

The use of AI in hiring has been criticized for the way it automates and entrenches existing racial and gender biases. AI systems that evaluate candidates' facial expressions and language have been shown to prioritize white, male, and able-bodied candidates. The problem is massive: many companies use AI at least once during the hiring process, and US Equal Employment Opportunity Commission chair Charlotte Burrows said in a meeting in January that as many as four out of five companies use automation to make employment decisions.

NYC's Automated Employment Decision Tool law, which came into force on Wednesday, says that employers who use AI in hiring have to tell candidates they are doing so. They will also have to submit to annual independent audits to prove that their systems are not racist or sexist. Candidates will be able to request information from potential employers about what data is collected and analyzed by the technology. Violations will result in fines of up to $1,500.

Proponents of the law say that it's a good start toward regulating AI and mitigating some of the harms and risks around its use, even if it's not perfect. It requires that companies better understand the algorithms they use and whether the technology unfairly discriminates against women or people of color. It's also a fairly rare regulatory success when it comes to AI policy in the US, and we're likely to see more of these specific, local regulations. Sounds sort of promising, right?

But the law has been met with significant controversy. Public interest groups and civil rights advocates say it isn't extensive enough and won't be enforceable, while businesses that will have to comply with it argue that it's impractical and burdensome.

Groups like the Center for Democracy & Technology, the Surveillance Technology Oversight Project (S.T.O.P.), the NAACP Legal Defense and Educational Fund, and the New York Civil Liberties Union argue that the law is "underinclusive" and risks leaving out many uses of automated systems in hiring, including systems in which AI is used to screen thousands of candidates.

What's more, it's not clear exactly what independent auditing will achieve, as the auditing industry is currently so immature. BSA, an influential tech trade group whose members include Adobe, Microsoft, and IBM, filed comments with the city in January criticizing the law, arguing that third-party audits are "not feasible."

"There's a lot of questions about what type of access an auditor would get to a company's information, and how much they would really be able to interrogate about the way it operates," says Albert Fox Cahn, executive director of S.T.O.P. "It would be like if we had financial auditors, but we didn't have generally accepted accounting principles, let alone a tax code and auditing rules."

Cahn argues that the law could produce a false sense of security and safety about AI and hiring. "This is a fig leaf held up as proof of protection from these systems when in practice, I don't think a single company is going to be held accountable because this was put into law," he says.

Importantly, the mandated audits will have to evaluate whether the output of an AI system is biased against a group of people, using a metric called an "impact ratio" that determines whether the tech's "selection rate" varies across different groups. The audits won't have to seek to ascertain how an algorithm makes a decision, and the law skirts around the "explainability" challenges of complex forms of machine learning, like deep learning. As you might expect, that omission is also a hot topic for debate among AI experts.
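
For a sense of what that audit arithmetic involves, here is a minimal sketch in Python with made-up numbers: each group's selection rate is the share of its candidates the tool advances, and the impact ratio compares that rate against the group with the highest rate. This is an illustration of the general idea, not the city's official audit methodology; the group names and figures are hypothetical.

```python
# Illustrative sketch of the "impact ratio" arithmetic.
# Hypothetical counts of candidates screened and advanced by a hiring tool,
# broken out by demographic group (not real data).
results = {
    "group_a": {"screened": 400, "selected": 120},
    "group_b": {"screened": 350, "selected": 70},
    "group_c": {"screened": 250, "selected": 40},
}

# Selection rate: the share of a group's candidates the tool advances.
selection_rates = {
    group: counts["selected"] / counts["screened"]
    for group, counts in results.items()
}

# Impact ratio: a group's selection rate divided by the highest selection rate.
best_rate = max(selection_rates.values())
impact_ratios = {group: rate / best_rate for group, rate in selection_rates.items()}

for group in results:
    print(f"{group}: selection rate {selection_rates[group]:.2f}, "
          f"impact ratio {impact_ratios[group]:.2f}")
```

In this sketch, the closer a group's impact ratio is to 1.0, the more evenly the tool selects that group relative to the highest-rated one; a low ratio is the kind of disparity an audit is meant to surface.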

In the US we're likely to see much more AI regulation of this sort, local laws that take on one particular application of the technology, as we wait for federal legislation. And it's in these local fights that we can understand how AI tools, safety mechanisms, and enforcement are going to be defined in the decades ahead. Already, New Jersey and California are considering similar laws.

(Read more of our coverage on AI and hiring here.)

What I am reading this week
  • Celebrities riding the crypto wave are now dealing with the crash of reality, and some, like Tom Brady, are in hotter water than others. I loved this New York Times feature about the "humiliating reckoning facing the actors, athletes and other celebrities who rushed to embrace the easy money and online hype of cryptocurrencies," as authors Erin Griffith and David Yaffe-Bellany write.
  • Russia is vying for more geopolitical control over the internet, writes David Ignatius in his latest Washington Post opinion column. Russia submitted a resolution ahead of the United Nations meeting in Geneva that would address how the internet is governed globally. This wonky space is actually quite interesting to watch, as China and Russia have both tried to rewrite global digital rules.
  • A US federal judge has blocked Biden administration officials from contacting social media sites, saying that their attempts to remove and report online content (such as misinformation) would violate the First Amendment. The ruling, which will most certainly be challenged, feels more like a political play than a meaningful change to content moderation policy.
What I learned this week

Syracuse, New York, is poised for an economic turnaround thanks to a new semiconductor manufacturing facility from chipmaker Micron and a $100 billion investment. The funding is part of President Biden's plan to revitalize domestic industrial policy with the help of tech jobs.

David Rotman, our editor at large, writes, "Now Syracuse is about to become an economic test of whether, over the next several decades, the aggressive government policies (and the massive corporate investments they spur) can both boost the country's manufacturing prowess and revitalize regions like upstate New York."

It's a phenomenal story, and I'd highly recommend you take the time to read it this weekend!
