How AI is unfairly targeting and discriminating against Black people
The rise of Artificial Intelligence (AI) is here, ushering in a new era of technology that is already reshaping the world. It was the story of 2023, and it isn't going anywhere anytime soon.
While AI's rapid creative growth is a fascinating development for our society, it comes with harms that cannot be ignored, especially racial bias and discrimination against African Americans.
In recent years, research has revealed that AI technologies struggle to identify the images and speech patterns of nonwhite people. And Black AI researchers at the tech giants building these systems have raised concerns about their harms to the Black community.
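As a rough illustration of how researchers surface those disparities, an audit typically computes a model's error rate separately for each group of people and compares them. The sketch below shows the idea for speech recognition using word error rate; the sample data and group labels are hypothetical, and this is not the methodology of any particular study:

```python
# Hedged sketch: comparing a speech recognizer's word error rate (WER)
# across speaker groups. The sample data and group labels are
# hypothetical; real audits use large, carefully labeled corpora.
from collections import defaultdict

def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level edit distance, normalized by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits needed to turn ref[:i] into hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

def wer_by_group(samples):
    """samples: (group, reference_transcript, model_transcript) triples."""
    scores = defaultdict(list)
    for group, reference, hypothesis in samples:
        scores[group].append(word_error_rate(reference, hypothesis))
    return {group: sum(s) / len(s) for group, s in scores.items()}

# Hypothetical example: a gap like this is the disparity researchers report.
print(wer_by_group([
    ("group_a", "turn left at the light", "turn left at the light"),
    ("group_b", "turn left at the light", "turn lest at the line"),
]))  # {'group_a': 0.0, 'group_b': 0.4}
```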
The concerns surrounding AI's racial biases and harms against Black people are serious and should be a big focus as 2024 gets underway. We invited University of Michigan professor, Harvard Faculty Associate and former Mozilla Foundation Senior Fellow in Trustworthy AI, Apryl Williams, to dive into this topic further. Williams studies experiences of gender and race at the intersection of digital spaces and algorithmic technocultures, and her most recent book, "Not My Type: Automating Sexual Racism in Online Dating," exposes how race-based discrimination is a fundamental part of the most popular and influential dating algorithms.
To start, as a professor, I'm curious to know: How aware do you think students are of the dangers of the technology they're using? Not simple things like the screen time notifications they might get, but more the AI problems, misinformation, etc.?
They don't know. I show two key documentaries in my classes every semester. I teach a class called "Critical Perspectives on the Internet," and I have another class called "Critical AI." In both of those classes, the students are always shook. They always tell me, "You ruin everything for me, I can never look at the world the same," which is great. That's my goal. I hope that they don't look at the world the same when they leave my classes, of course. But I show them "Coded Bias" by Shalini Kantayya, and when they watched it just this past semester, they were like, "I can't believe this is legal. How are they using facial recognition everywhere? How are they able to do these things on our phones? How do they do this? How do they do that? I can't believe this is legal. And why don't people talk about it?" And I'm like, "Well, people do talk about it. You guys just aren't necessarily keyed into the places where people are talking about it." And I think that's one of the failings of these movements that we're trying to build: we're not necessarily tapped into the kinds of places young people go to get information.
We often assume that AI systems are racially neutral, but research has shown that some of them are not and can carry biases against Black people. When we think about where this problem stems from, is it fair to say it begins with the tech industry's lack of representation of people who understand and can work to address the potential harms of these technologies?
I would say, yes, that is a huge part of it. But the actual starting point is the norms of the tech industry. We know that the tech industry was created by and large by the military-industrial complex - the internet began as a military device. And because of that, a lot of the inequity, the inequality, the social injustice of the time when the internet was created was baked into the structure of the internet - and then, of course, into the industries that spring up from the internet, right? We know that the military was using the internet for surveillance. And look now, in 2024, we have widespread surveillance of Black communities, of marginalized communities, of undocumented communities, right? So really, the infrastructure of the internet that was built to support white supremacy, I would say, is the starting point. And because the infrastructure of the internet and of the tech industry was born from white supremacy, then, yes, we have these hiring practices - and not just the hiring practices, but hiring practices where, largely, they are just hiring the same kinds of people: cisgender, hetero white men. Increasingly white women, but still we're not seeing the kinds of diversity that we should be seeing if we're going to reach demographic parity. So we have the hiring. But then also, we have the norms of the tech industry itself, which are really built to service the status quo, I would say. They're not built to disrupt. They're built to continue the norm. And if people don't stop and think about that, then, yeah, we're going to see the replication of all this bias, because U.S. society was built on bias, right? It is a stratified society inherently. And because of that, we're always going to see that stratification in the tech industry as well.
Issues of bias in AI tend to impact the people who are rarely in positions to develop the technology. How do you think we can enable affected communities to engage in the development and governance of AI, to get it to a place where it's working toward systems that embrace the full spectrum of inclusion?
Yes, we should enable that. But the tech industry - the people in these companies - also needs to take the onus on itself to reach out to the communities in which it is going to deploy its technology, right? So if your target audience, let's say on TikTok, is Black content creators, you need to be reaching out to Black content creators and Black communities before you launch an algorithm that targets those people. You should be having them at your headquarters. You should be doing listening sessions. You should be elevating Black voices. You should be listening to people, right? Listening to the concerns, having support teams in place, before you launch the technology. So instead of retroactively trying to Band-Aid it when you have an oops or a bad PR moment, you should be looking to marginalized communities as experts on what they need and how they see technology being implemented in their lives.
Many of the issues with these technologies in relation to Black people stem from the fact that they are not designed for Black people - and even the people they are designed for run into problems. It feels like this is a difficult spot for everyone involved?
Yeah, that's an interesting question. I feel like it's really hard for good people on the inside of tech companies to actually say, "Hey, this thing that we're building might be generating money, but it's not generating long-term longevity, right? Or health for our users." And I get that - not every tech company is health oriented. They may act like they are, but they're not; for a lot of them, money is their bottom line. I really think it's up to movement builders and tech industry shakers to be able to create buy-in for programs, algorithms and ideas that foster equity. But we have to be able to create buy-in for that. So that might look like, "Hey, maybe we might lose some users on the front end when we implement this new idea, but we're going to gain a whole lot more users - folks of color, marginalized users, queer users, trans users - if they feel like they can trust us, and that's worth the investment," right? So it's really about valuing the whole person, rather than just valuing the money at face value, and looking to see the potential of what would happen if people felt like their technology was actually trustworthy.
AI is rapidly growing. What are things we can add to it as it evolves, and what are things we should work to eliminate?
I would say we need to expand our definition of safety. I think that safety should fundamentally include your mental health and well-being. If the platform you're using to find intimacy or to connect with friends is not actually keeping you safe as a person of color, as a trans person, as a queer person, then you can't really have full mental wellness: you're constantly on high alert, you're constantly in this anxious position, you're having to worry that your technology is exploiting you, right? So if we're going to have all of this buzz that I'm seeing about trust and safety, it can't just stop at the current discourse that we're having on trust and safety. It can't just be about protecting privacy, protecting data - protecting white people's privacy. It has to include reporting mechanisms for users of color when they encounter abuse, whether that is racism or homophobia, right? It needs to be more inclusive. The way that we think about trust and safety in automated or algorithmic systems needs to be more inclusive. We really need to widen the definition of safety. And probably the definition of trust also.
In terms of subtracting, there are just a lot of things that we shouldn't be doing that we're currently doing. Honestly, the thing that we need to subtract the most is this idea in tech culture that we move fast and break things - that we are just moving for the sake of innovation. We might really need to dial back on this idea of moving for the sake of innovation, and actually think about moving toward a safer humanity for everybody, and designing with that goal in mind. We can innovate in a safe way. We might have to sacrifice speed, and I think we need to say it's okay to sacrifice speed in some cases.
When I started to think about the dangers of AI, I immediately remembered the situation with Robert Williams a few years ago, when he was wrongfully arrested by police who used AI facial recognition. There is more to it than just the strange memes and voice videos people create. What are the serious real-world harms that you think of when it comes to Black people and AI that people are overlooking?
I don't know that it's overlooked, but I don't think that Black people are aware of the amount of surveillance in everyday technologies. When you go to the airport, even if you're not using Clear or another facial recognition service for expedited security, they're still using facial recognition technology. When you're crossing borders, even when you're flying domestically, they're still using that tech to look at your face. You look into the camera, they take your picture, they compare it to your ID. That is facial recognition technology. I understand that it is for our national safety, but it also means that they're collecting a lot of data on us. We don't know what happens with that data. We don't know if they keep it for 24 hours or if they keep it for 24 years. Are they keeping logs of what your face looks like every time you go? In 50 years, are we going to see a system that's like, "We've got these TSA files, and we're able to track your aging from the time that you were 18 to the time that you're 50, just based on your TSA data," right? We really don't know what's happening with the data. And that's just one example.
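Under the hood, that ID check is one-to-one face verification: the system converts both photos into numerical embeddings and accepts the match if their similarity clears a threshold. The sketch below is a hedged illustration of that idea, not any agency's actual system; the threshold value and function names are hypothetical:

```python
import numpy as np

# Hedged sketch of 1:1 face verification, the kind of check an airport
# ID kiosk performs. In practice the embeddings come from a trained
# face-recognition model; here we just assume two fixed-length vectors.

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity in [-1, 1] between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(live_embedding: np.ndarray, id_embedding: np.ndarray,
           threshold: float = 0.6) -> bool:
    """Accept the traveler if the live photo matches the ID photo.

    The threshold (0.6 here is an arbitrary illustration) trades off
    false matches against false non-matches. If the underlying model's
    embeddings are less reliable for darker-skinned faces, error rates
    at a fixed threshold will differ across groups.
    """
    return cosine_similarity(live_embedding, id_embedding) >= threshold

# Example with made-up vectors:
rng = np.random.default_rng(0)
id_photo = rng.normal(size=128)
live_photo = id_photo + rng.normal(scale=0.1, size=128)  # near-duplicate capture
print(verify(live_photo, id_photo))  # True for these similar vectors
```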
We have constant surveillance, especially in our cars. The smarter our cars get, the more they're surveilling us. We are seeing increasing use of those in-car systems in police cases, to see if you were paying attention: Were you talking on your phone? Were you texting and driving? Things like that. There is automation in cars that's designed to identify people and to stop, right, to avoid hitting you. And as we know, a lot of those systems misidentify Black people as trash cans, and will instead hit them. There are so many instances where AI is part of our life, and I don't think people realize the depth to which it really does drive our lives. And I think that's the thing that scares me the most for people of color: we don't understand just how much AI is part of our everyday life. And I wish people would stop and think about, yes, I get easy access to this thing, but what am I trading off to get that easy access? What does that mean for me? And what does that mean for my community? We have places like Project Blue Light and Project Green Light, where communities are heavily surveilled in order to "protect" them. But are those systems created to protect white communities at the expense of Black and brown communities, right? That's what we have to think about when we say that these technologies, especially surveillance technologies, are being used to protect people: Who are they protecting? And who are they protecting people from? And is the idea that they're protecting people from a certain group of people realistic? Or is it grounded in some cultural bias that we have?
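To make the pedestrian-detection point concrete: researchers probe this kind of failure by scoring a detector's miss rate separately per skin-tone group on a labeled test set. This is a minimal, hypothetical sketch of that audit idea, not the code of any published study:

```python
from collections import defaultdict

# Hedged sketch: auditing a pedestrian detector's miss rate per group.
# The IDs, group labels, and detections below are hypothetical; real
# audits use labeled driving datasets with skin-tone annotations.

def miss_rates(ground_truth, detected_ids):
    """ground_truth: (pedestrian_id, group) pairs for every real pedestrian.
    detected_ids: set of pedestrian_ids the detector actually found."""
    missed, total = defaultdict(int), defaultdict(int)
    for pedestrian_id, group in ground_truth:
        total[group] += 1
        if pedestrian_id not in detected_ids:
            missed[group] += 1
    return {group: missed[group] / total[group] for group in total}

# A gap between groups at the same detector settings is exactly the
# kind of failure described above.
print(miss_rates(
    [(1, "lighter"), (2, "lighter"), (3, "darker"), (4, "darker")],
    detected_ids={1, 2, 3},
))  # {'lighter': 0.0, 'darker': 0.5}
```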
Looking at the bigger picture this year: It's an election year, and AI will certainly be a large talking point for candidates. Regardless of who wins this fall, in what ways do you think the administration can ensure that policies and enforcement are put in place to address AI, so that racial and other inequities don't continue and evolve?
They need to enforce, or encourage, putting the onus of transparency on tech companies. There needs to be some kind of legislative prompting; there has to be some kind of responsibility where tech companies actually suffer consequences - legal consequences, economic consequences - when they violate trust with the public, when they extract data without telling people. There also need to be more two-way conversations. Often tech companies will just tell you, "These are the terms of service, you have to agree with them," and if you don't, you opt out, which means you can't use the tech. There needs to be some kind of system where tech companies can say, "Okay, we're thinking about rolling this out or updating our terms of service in this way - how does the community feel about that?" - a way that they can really be accountable to their users. I think we just need some legislation that puts tech companies' feet to the fire in terms of actually having responsibility to their users.
When it comes to fighting against racial biases and struggles, sometimes the most important people who can help create change and bring awareness are those not directly impacted by what's going on - for example, a white person being an ally and protesting alongside Black people. What do you think ordinary people can do to influence change and bring awareness to the AI challenges Black people face?
I would say, for those people who are in the know about what tech companies are doing: talk about that with your kids, right? When you're sitting down and your kids are telling you about something that their friend posted, that's a perfect time to be like, "Let's talk about that technology that your friend is using, or that you're using." Did you know that on TikTok, this happens? Did you know that on TikTok, Black creators' voices are often hidden, or Black content creators are shadow-banned? Did you know what happens on Instagram? Have these kinds of regular conversations so that these tech injustices become part of the everyday vernacular for kids as they're coming up, so that they can be more aware, and also so that they can advocate for themselves and for their communities.