
Rise of the racist robots – how AI is learning all our worst impulses

by Stephen Buranyi

There is a saying in computer science: garbage in, garbage out. When we feed machines data that reflects our prejudices, they mimic them - from antisemitic chatbots to racially biased software. Does a horrifying future await people forced to live at the mercy of algorithms?

In May last year, a stunning report claimed that a computer program used by a US court for risk assessment was biased against black prisoners. The program, Correctional Offender Management Profiling for Alternative Sanctions (Compas), was much more prone to mistakenly label black defendants as likely to reoffend - wrongly flagging them at almost twice the rate of white people (45% to 24%), according to the investigative journalism organisation ProPublica.
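To make the disparity concrete: the figures above are group-wise false positive rates - among defendants who did *not* go on to reoffend, the share that the software had nonetheless flagged as high risk. Here is a minimal sketch of how such a rate is computed, using a small hypothetical dataset (not ProPublica's actual data):

```python
# Illustrative sketch with hypothetical records, showing how a group-wise
# false positive rate is measured. "Wrongly flagged" means a defendant the
# tool labelled high-risk who did not in fact reoffend.

# Each record: (group, predicted_high_risk, actually_reoffended)
records = [
    ("black", True, False), ("black", True, True), ("black", False, False),
    ("black", True, False), ("white", False, False), ("white", True, True),
    ("white", False, False), ("white", True, False),
]

def false_positive_rate(records, group):
    """FPR = wrongly flagged / all non-reoffenders in the group."""
    non_reoffenders = [r for r in records if r[0] == group and not r[2]]
    wrongly_flagged = [r for r in non_reoffenders if r[1]]
    return len(wrongly_flagged) / len(non_reoffenders)

for g in ("black", "white"):
    print(g, round(false_positive_rate(g=group, records=records) if False else false_positive_rate(records, g), 2))
```

ProPublica's claim, in these terms, was that the false positive rate for black defendants (45%) was nearly double the rate for white defendants (24%) - the same accuracy on paper, but errors distributed very unevenly.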

Compas and programs similar to it were in use in hundreds of courts across the US, potentially informing the decisions of judges and other officials. The message seemed clear: the US justice system, reviled for its racial bias, had turned to technology for help, only to find that the algorithms had a racial bias too.

Continue reading...