Facebook says it will look for racial bias in its algorithms
The news: Facebook says it is setting up new internal teams to look for racial bias in the algorithms that drive its main social network and Instagram, according to the Wall Street Journal. In particular, the investigations will address the adverse effects on Black, Hispanic, and other minority groups of machine-learning systems, which can absorb implicit racism from their training data.
Why it matters: In the last few years, increasing numbers of researchers and activists have highlighted the problem of bias in AI and the disproportionate impact it has on minorities. Facebook, which uses machine learning to curate the daily experience of its 2.5 billion users, is well overdue for an internal assessment of this kind. There is already evidence that Facebook's ad-serving algorithms discriminate by race and allow advertisers to stop specific racial groups from seeing their ads, for example.
Under pressure: Facebook has a history of dodging accusations of bias in its systems. It has taken several years of bad press and pressure from civil rights groups to get to this point. Facebook has set up these teams after a month-long advertising boycott, organized by civil rights groups including the Anti-Defamation League, Color of Change, and the NAACP, that led big spenders like Coca-Cola, Disney, McDonald's, and Starbucks to suspend their campaigns.
No easy fix: The move is welcome. But launching an investigation is a far cry from actually fixing the problem of racial bias, especially when nobody really knows how to fix it. In most cases, the bias exists in the training data, and there are no agreed-on ways to remove it. And adjusting that data, a form of algorithmic affirmative action, is controversial. Machine-learning bias is also just one of social media's problems around race. If Facebook is going to look at its algorithms, it should be part of a wider overhaul that also grapples with policies that give platforms to racist politicians, white-supremacist groups, and Holocaust deniers.
"We will continue to work closely with Facebook's Responsible AI team to ensure we are looking at potential biases across our respective platforms," says Stephanie Otway, a spokesperson for Instagram. "It's early days and we plan to share more details on this work in the coming months."