The AI problem we can’t ignore
In August 2020, as the pandemic confined people to their homes, the U.K. canceled A-level exams and turned to an algorithm to calculate grades, key for university admissions. Based on historical data that reflected the resource advantages of private schools, the algorithm disproportionately downgraded students from state schools. Those who attended private schools, meanwhile, received inflated grades. News of the results set off widespread backlash. The system reinforced social inequities, critics said.
This isn't just a one-off mistake - it's a sign of AI bias creeping into our lives, according to Gemma Galdon-Clavell, a tech policy expert and one of Mozilla's 2025 Rise25 honorees. Whether it's deciding who gets into college or a job, who qualifies for a loan, or how health care is distributed, bias in AI can set back efforts toward a more equitable society.
In an opinion piece for Context by the Thomson Reuters Foundation, Gemma asks us to consider the consequences of not addressing this issue. She argues that bias and fairness are the biggest yet often overlooked threats of AI. You can read her essay here.
We chatted with Gemma about her piece below.
Can you give examples of how AI is already affecting us?

AI is involved in nearly everything - whether you're applying for a job, seeing a doctor, or seeking housing or benefits. Your resume might be screened by an AI, your wait time at the hospital could be determined by an AI triage system, and decisions about loans or mortgages are often assisted by AI. It's woven into so many aspects of decision-making, but we don't always see it.
Why is bias in AI so problematic?

AI systems look for patterns and then replicate them. These patterns are based on majority data, which means that minorities - people who don't fit the majority patterns - are often disadvantaged. Without specific measures built into AI systems to address this, they will inevitably reinforce existing biases. Bias is probably the most dangerous technical challenge in AI, and it's not being tackled head-on.
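To make that mechanism concrete, here is a minimal, hypothetical sketch (not drawn from the interview, and all names and numbers are illustrative): a classifier trained on data dominated by one group learns that group's pattern and systematically misclassifies a minority group whose pattern differs.

```python
# Illustrative sketch: a model trained mostly on majority data learns the
# majority's pattern, even when it does not hold for a minority group.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Majority group (900 people): a high feature value predicts a positive outcome.
X_major = rng.normal(loc=1.0, scale=1.0, size=(900, 1))
y_major = (X_major[:, 0] > 0.5).astype(int)

# Minority group (100 people): the relationship is reversed,
# so the majority pattern is misleading for them.
X_minor = rng.normal(loc=1.0, scale=1.0, size=(100, 1))
y_minor = (X_minor[:, 0] <= 0.5).astype(int)

X = np.vstack([X_major, X_minor])
y = np.concatenate([y_major, y_minor])

model = LogisticRegression().fit(X, y)

# The model scores well on the majority and poorly on the minority:
# it has replicated the dominant pattern at the minority's expense.
print("majority accuracy:", model.score(X_major, y_major))
print("minority accuracy:", model.score(X_minor, y_minor))
```

Because the minority contributes only a tenth of the training data, the model's cheapest strategy is to fit the majority rule and accept errors on everyone else - which is exactly the failure mode described above.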
How can we address these issues?

At Eticas, we build software to identify outliers - people who don't fit into majority patterns. We assess whether these outliers are relevant and make sure they aren't excluded from positive outcomes. We also run a nonprofit that helps communities affected by biased AI systems. If a community feels they've been negatively impacted by an AI system, we work with them to reverse-engineer it, helping them understand how it works and giving them the tools to advocate for fairer systems.
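As an illustration of the outlier-auditing idea - a sketch under assumed synthetic data, not Eticas' actual software - one could flag records that don't fit majority patterns and then compare their outcomes against everyone else's:

```python
# Illustrative sketch: flag atypical records, then check whether the
# flagged group receives positive outcomes at a noticeably lower rate.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

# 950 "typical" applicant profiles plus 50 whose profiles look very different.
typical = rng.normal(loc=0.0, scale=1.0, size=(950, 3))
atypical = rng.normal(loc=4.0, scale=1.0, size=(50, 3))
X = np.vstack([typical, atypical])

# Hypothetical decisions from some upstream system (1 = approved); here the
# atypical group is approved far less often, by construction.
decisions = rng.binomial(1, p=np.where(np.arange(1000) < 950, 0.7, 0.2))

# IsolationForest labels easy-to-isolate points as outliers (-1).
flags = IsolationForest(contamination=0.05, random_state=1).fit_predict(X)
is_outlier = flags == -1

# A large gap in approval rates is a signal worth auditing.
print("approval rate, typical: ", decisions[~is_outlier].mean())
print("approval rate, outliers:", decisions[is_outlier].mean())
```

The point of such a check is not that outliers are errors, but that people who don't fit majority patterns deserve a second look before a system quietly excludes them.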
What can someone do if an AI system affects them, but they don't fully understand how it works?

Unfortunately, not much right now. Often, people don't even know an AI system made a decision about their lives. And there aren't many mechanisms in place for contesting those decisions. It's different from buying a faulty product, where you have recourse. If AI makes a decision you don't agree with, there's very little you can do. That's one of the biggest challenges we need to address - creating systems of accountability for when AI makes mistakes.
You've highlighted the challenges. What gives you hope about the future of AI?

The progress of our work on AI auditing! For years now we've been showing how there is an alternative AI future, one where AI products are built with trust and safety at heart, where AI audits are seen as proof of responsibility and accountability - and ultimately, safety. I often mention how my work is to build the seatbelts of AI, the pieces that make innovation safer and better. A world where we find non-audited AI as unthinkable as cars without seatbelts or brakes - that's an AI future worth fighting for.