Yes, “algorithms” can be biased. Here’s why

by Ars Staff, Ars Technica

Seriously, it's enough to make researchers cry. (credit: Getty | Peter M Fisher)

Dr. Steve Bellovin is professor of computer science at Columbia University, where he researches "networks, security, and why the two don't get along." He is the author of Thinking Security and the co-author of Firewalls and Internet Security: Repelling the Wily Hacker. The opinions expressed in this piece do not necessarily represent those of Ars Technica.

Newly elected Rep. Alexandria Ocasio-Cortez (D-NY) recently stated that facial recognition "algorithms" (and by extension all "algorithms") "always have these racial inequities that get translated" and that "those algorithms are still pegged to basic human assumptions. They're just automated assumptions. And if you don't fix the bias, then you are just automating the bias."

She was mocked for this claim on the grounds that "algorithms" are "driven by math" and thus can't be biased, but she's basically right. Let's take a look at why.
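To see the core of the point in miniature, consider a hedged toy sketch (not from the article; the data and the "majority rule" learner are invented for illustration): a perfectly neutral, math-driven learning rule, trained on historical records that reflect a skewed human decision, will faithfully encode that skew as an automated rule.

```python
# Toy illustration: a neutral learning rule applied to biased historical
# data reproduces the bias. All numbers here are hypothetical.
from collections import Counter

# Hypothetical historical hiring records as (group, hired) pairs.
# Applicants are equally qualified, but group "B" was hired far less often.
history = [("A", 1)] * 80 + [("A", 0)] * 20 + [("B", 1)] * 30 + [("B", 0)] * 70

def train_majority_rule(records):
    """Learn the majority outcome per group -- a stand-in for any model
    that simply minimizes error on its training data."""
    by_group = {}
    for group, label in records:
        by_group.setdefault(group, []).append(label)
    # The learned "policy": predict each group's most common historical outcome.
    return {g: Counter(labels).most_common(1)[0][0] for g, labels in by_group.items()}

model = train_majority_rule(history)
print(model)  # {'A': 1, 'B': 0} -- the historical skew is now an automated rule
```

Nothing in the arithmetic is prejudiced; the bias lives entirely in the training data, which is exactly the "automating the bias" failure mode described above.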

