Tips for applying an intersectional framework to AI development
By now, most of us in tech know that the biases we hold as humans find their way into the AI applications we build - applications that have become so sophisticated they're able to shape the nature of our everyday lives and even influence our decision-making.
The more prevalent and powerful AI systems become, the more urgent it is for the industry to address questions like: What can we do to move away from AI/ML models that demonstrate unfair bias?
How can we apply an intersectional framework to build AI for all people, knowing that different individuals are affected by and interact with AI in different ways based on the converging identities they hold?
Intersectionality: What it means and why it matters
Before tackling the tough questions, it's important to take a step back and define intersectionality. Coined by Kimberlé Crenshaw, it's a framework that empowers us to consider how someone's distinct identities come together and shape the ways in which they experience and are perceived in the world.
This includes the resulting biases and privileges that are associated with each distinct identity. Many of us may hold more than one marginalized identity and, as a result, we're familiar with the compounding effect that occurs when these identities are layered on top of one another.
At The Trevor Project, the world's largest suicide prevention and crisis intervention organization for LGBTQ youth, our chief mission is to provide support to each and every LGBTQ young person who needs it, and we know that those who are transgender and nonbinary and/or Black, Indigenous, and people of color face unique stressors and challenges.
So, when our tech team set out to develop AI to serve and exist within this diverse community - namely, to better assess suicide risk and deliver a consistently high quality of care - we had to be conscious of avoiding outcomes that would reinforce existing barriers to mental health resources, such as a lack of cultural competency, or unfair biases, like assuming someone's gender based on the contact information presented.
Though our organization serves a particularly diverse population, underlying biases can exist in any context and negatively impact any group of people. As a result, all tech teams can and should aspire to build fair, intersectional AI models, because intersectionality is the key to fostering inclusive communities and building tools that serve people from all backgrounds more effectively.
Doing so starts with identifying the variety of voices that will interact with your model, as well as the groups for which these various identities overlap. Defining the opportunity you're addressing is the first step, because once you understand who is impacted by the problem, you can identify a solution. Next, map the end-to-end experience journey to learn the points where these people interact with the model. From there, there are strategies every organization, startup and enterprise can apply to weave intersectionality into every phase of AI development - from training to evaluation to feedback.
Datasets and training

The quality of a model's output relies on the data on which it's trained. Datasets can contain inherent bias due to the nature of their collection, measurement and annotation - all of which are rooted in human decision-making. For example, a 2019 study found that a healthcare risk-prediction algorithm demonstrated racial bias because it relied on patients' past healthcare costs as a proxy for medical need. As a result, eligible Black patients received lower risk scores than white patients, ultimately making them less likely to be selected for high-risk care management.
Fair systems are built by training a model on datasets that reflect the people who will be interacting with the model. It also means recognizing where there are gaps in your data for people who may be underserved. However, there's a larger conversation to be had about the overall lack of data representing marginalized people - it's a systemic problem that must be addressed as such, because sparsity of data can obscure both whether systems are fair and whether the needs of underrepresented groups are being met.
To start analyzing this for your organization, consider the size and source of your data to identify what biases, skews or mistakes are built-in and how the data can be improved going forward.
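As a rough illustration of what that analysis might look like in practice, here is a minimal Python sketch that counts records per intersectional group in a tabular dataset. The file name, the demographic columns (race_ethnicity, gender_identity) and the 1% threshold are placeholders chosen for the example, not a description of any real schema.

```python
import pandas as pd

# Hypothetical training data with self-reported demographic columns;
# the file and column names here are illustrative placeholders.
df = pd.read_csv("training_data.csv")

# Count records per intersectional group to surface gaps and skews.
group_counts = (
    df.groupby(["race_ethnicity", "gender_identity"])
      .size()
      .sort_values()
)
print(group_counts)

# Flag groups whose share of the data falls below an arbitrary threshold.
shares = group_counts / len(df)
underrepresented = shares[shares < 0.01]
print("Groups under 1% of the dataset:")
print(underrepresented)
```

A simple audit like this won't tell you why a group is missing or mislabeled, but it gives you a concrete starting point for deciding where to collect more data or scrutinize annotation practices.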
The problem of bias in datasets can also be addressed by amplifying or boosting specific intersectional data inputs, as your organization defines them. Doing this early on will inform your model's training formula and help your system stay as objective as possible - otherwise, your training formula may be unintentionally optimized to produce irrelevant results.
At The Trevor Project, we may need to amplify signals from demographics that we know disproportionately find it hard to access mental health services, or for demographics that have small sample sizes of data compared to other groups. Without this crucial step, our model could produce outcomes irrelevant to our users.
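One common way to implement that kind of amplification is to weight training examples inversely to the frequency of their intersectional group. The sketch below is only illustrative - the schema, the feature columns and the scikit-learn logistic regression are assumptions for the example, not a description of The Trevor Project's models.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Illustrative setup: `df` holds features, a label and an intersectional
# group key (e.g., race x gender); all names are assumed placeholders.
df = pd.read_csv("training_data.csv")
df["group"] = df["race_ethnicity"] + "|" + df["gender_identity"]

X = df[["feature_1", "feature_2"]].to_numpy()
y = df["label"].to_numpy()

# Weight each example inversely to its group's frequency so that sparse
# intersectional groups contribute more to the training objective.
group_freq = df["group"].value_counts(normalize=True)
sample_weight = df["group"].map(lambda g: 1.0 / group_freq[g]).to_numpy()

model = LogisticRegression(max_iter=1000)
model.fit(X, y, sample_weight=sample_weight)
```

Reweighting is only one option; depending on the context, targeted data collection or resampling may be a better fit, and any boosting scheme should itself be evaluated for unintended side effects.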
Evaluation

Model evaluation is an ongoing process that helps organizations respond to ever-changing environments. Evaluating fairness began with looking at a single dimension - like race or gender or ethnicity. The next step for the tech industry is figuring out how best to compare intersectional groupings to evaluate fairness across all identities.
To measure fairness, try defining intersectional groups that could be at a disadvantage and the ones that may have an advantage, and then examine whether certain metrics (for example, false-negative rates) vary among them. What do these inconsistencies tell you? How else can you further examine which groups are underrepresented in a system and why? These are the kinds of questions to ask at this phase of development.
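To make that concrete, here is one hedged way such a comparison could be scripted: compute the false-negative rate separately for each intersectional group and look at the spread. The evaluation file and column names (y_true, y_pred, race_ethnicity, gender_identity) are placeholders for the sketch.

```python
import pandas as pd

# Illustrative evaluation frame: true labels, model predictions and an
# intersectional group key; all names are assumptions for this example.
eval_df = pd.read_csv("eval_predictions.csv")
eval_df["group"] = eval_df["race_ethnicity"] + "|" + eval_df["gender_identity"]

def false_negative_rate(g: pd.DataFrame) -> float:
    positives = g[g["y_true"] == 1]
    if len(positives) == 0:
        return float("nan")  # no positives in this group; rate is undefined
    return (positives["y_pred"] == 0).mean()

# Compare false-negative rates across intersectional groups; large gaps
# point to groups the model may be underserving.
fnr_by_group = eval_df.groupby("group").apply(false_negative_rate)
print(fnr_by_group.sort_values(ascending=False))
```

Which metric matters most (false negatives, false positives, calibration) depends on the harms at stake in your use case, so the metric choice deserves as much scrutiny as the numbers themselves.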
Developing and monitoring a model based on the demographics it serves from the start is the best way for organizations to achieve fairness and alleviate unfair bias. Based on the evaluation outcome, a next step might be to purposefully overserve statistically underrepresented groups to facilitate training a model that minimizes unfair bias. Since algorithms can lack impartiality due to societal conditions, designing for fairness from the outset helps ensure equal treatment of all groups of individuals.
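One simple way to act on such an evaluation - under the same illustrative assumptions as the earlier sketches - is to oversample sparse intersectional groups with replacement before retraining. Whether resampling, reweighting or collecting more data is the right lever depends on your context; this is a sketch, not a prescription.

```python
import pandas as pd

# Illustrative oversampling: resample sparse intersectional groups (with
# replacement) up to the size of the largest group before retraining.
df = pd.read_csv("training_data.csv")
df["group"] = df["race_ethnicity"] + "|" + df["gender_identity"]

target = df["group"].value_counts().max()  # match the largest group's size

balanced = pd.concat(
    [
        g.sample(n=target, replace=True, random_state=0)
        for _, g in df.groupby("group")
    ],
    ignore_index=True,
)
print(balanced["group"].value_counts())
```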
Feedback and collaboration

Teams should also have a diverse group of people involved in developing and reviewing AI products - people who are diverse not only in identities, but also in skill set, exposure to the product, years of experience and more. Consult stakeholders and those who are impacted by the system to identify problems and biases.
Lean on engineers when brainstorming solutions. To define intersectional groupings at The Trevor Project, we worked across the teams closest to our crisis-intervention programs and the people using them - like Research, Crisis Services and Technology. And reach back out to stakeholders and the people interacting with the system to collect feedback after launch.
Ultimately, there isn't a one-size-fits-all approach to building intersectional AI. At The Trevor Project, our team has outlined a methodology based on what we do, what we know today and the specific communities we serve. This is not a static approach, and we remain open to evolving as we learn more. While other organizations may take a different approach to building intersectional AI, we all have a moral responsibility to construct fairer AI systems, because AI has the power to highlight - and, worse, magnify - the unfair biases that exist in society.
Depending on the use case and community in which an AI system exists, the magnification of certain biases can result in detrimental outcomes for groups of people who may already face marginalization. At the same time, AI also has the ability to improve quality of life for all people when developed through an intersectional framework. At The Trevor Project, we strongly encourage tech teams, domain experts and decision-makers to think deeply about codifying a set of guiding principles to initiate industry-wide change - and to ensure future AI models reflect the communities they serve.