Use AI responsibly to uplift historically disenfranchised people during COVID-19
One of the most distressing aspects of the ongoing pandemic is that COVID-19 is having a disproportionate impact on communities of color and lower-income Americans due to structural factors rooted in history and long-standing societal biases.
Those most at risk during this pandemic are the 24 million lowest-income workers: the people with less job security who can't work from home. In fact, only 9.2% of the bottom 25% of earners can work from home; compare that to 61.5% of the top 25%, and the disparity is staggering. Additionally, people in these jobs typically do not have the financial security to avoid public interaction by stockpiling food and household goods, buying groceries online or avoiding public transit. They cannot self-isolate. They must venture out far more than other groups, heightening their risk of infection.
The historically disadvantaged will also be hit hardest by the economic impacts of the pandemic. They are overrepresented in the industries experiencing the worst downturns. These disparities were stark prior to COVID-19, with the typical Black and Latinx households having net worths of just $17,100 and $20,765, respectively, compared with the $171,000 held by the typical white household. An extended health and economic crisis will only exacerbate these already extreme disparities.
AI as a beacon of hope
A rare encouraging aspect of the ongoing pandemic response is the use of cutting-edge technology - especially AI - to address everything from supply chains to early-stage vaccine research.
The potential of human + AI far exceeds the potential of humans working alone, but there are tremendous risks that require careful consideration. AI requires massive amounts of data, and ingrained in that data are the societal imperfections and inequities that have given rise to disproportionate health and financial impacts in the first place.
In short, we cannot use a tool until we know it works and understand the potential for unintended consequences. Some health groups hurried to repurpose existing AI models to help track patients and manage the supply of beds, ventilators and other equipment in their hospitals. Researchers have tried to develop AI models from scratch to focus on the unique effects of COVID-19, but many of those tools have struggled with bias and accuracy issues. Balancing the instinct to "help now" against the risk of unforeseen consequences amid the high stakes of the COVID-19 pandemic is why the responsible use of AI is more important now than ever.
4 ways to purposefully and responsibly use AI to combat COVID-19
1. Avoid delegating to algorithms that run critical systems
Think of an AI system designed to distribute ventilators and medical equipment to hospitals with the objective of maximizing survival rates. Disadvantaged populations have higher rates of comorbidities, so a naively designed system may rank their hospitals as less likely to convert supplies into survivors and deprioritize them. If these preexisting disparities are not accounted for when designing the AI system, well-intentioned efforts could end up directing supplies away from especially vulnerable communities.
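To make the failure mode concrete, here is a minimal, hypothetical Python sketch - invented numbers, not any deployed allocation system - showing how a purely survival-maximizing allocator starves the hospital serving a higher-comorbidity community:

```python
# Hypothetical illustration with invented numbers (not any real system):
# a greedy allocator that maximizes expected survival quietly starves the
# hospital serving a higher-comorbidity community.

hospitals = [
    # (name, patients awaiting ventilators, survival probability with a ventilator)
    ("Hospital A (lower-comorbidity area)", 40, 0.80),
    ("Hospital B (higher-comorbidity area)", 40, 0.55),
]

def naive_allocate(hospitals, supply):
    """Assign each unit wherever the marginal expected survival is highest."""
    alloc = {name: 0 for name, _, _ in hospitals}
    for _ in range(supply):
        # Pick the hospital with unmet need and the best survival odds.
        candidates = [(p, name) for name, need, p in hospitals
                      if alloc[name] < need]
        if not candidates:
            break
        _, name = max(candidates)
        alloc[name] += 1
    return alloc

print(naive_allocate(hospitals, 50))
# -> Hospital A receives all 40 of its requested units before Hospital B
#    sees a single one, even though B's patients have greater medical need.
```

A simple mitigation under these assumptions is a need-based floor: guarantee each hospital a minimum share proportional to its patient load before optimizing what remains.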
Artificial intelligence is also being used to improve supply chains across all sectors. The Joint Artificial Intelligence Center is prototyping AI that can track data on ventilators, PPE, medical supplies and food. When the goal is to anticipate panic-buying and ensure health care professionals have access to the equipment they need, this is a responsible use of AI.
As these examples illustrate, we can quickly arrive at a problematic use case when decision-making authority is delegated to an algorithm. The best and most responsible use of AI is maximizing efficiency to ensure the necessary supplies get to those truly in need. Previous failures in AI show the need for healthy skepticism when delegating authority on potentially life-and-death decisions to an algorithm.
2. Be wary of disproportionate impacts and singling out specific communities
Think of an AI system that uses mobility data to detect localized communities violating stay-at-home orders and to route police for additional enforcement. Disadvantaged populations do not have the economic means to stockpile food and other supplies, or to order delivery, forcing them to go outside. As mentioned earlier, being overrepresented in frontline sectors means leaving home more frequently. In addition, individuals and families experiencing homelessness could be targeted for violating stay-at-home orders. In New York City, police enforcement of stay-at-home directives has disproportionately targeted Black and Latinx residents. Here is where responsible AI steps in: AI systems should be designed not to punish these populations with police enforcement, but to help identify root causes and route additional food and resources. This is not a panacea, but it will avoid exacerbating existing challenges.
Israel has already demonstrated that this model works. In mid-March it passed an emergency law enabling the use of mobile data to pinpoint the infected, as well as those they had come in contact with. Maccabi Healthcare Services is using AI to identify its most at-risk members and prioritize them for testing. This is a fantastic example of responsibly adopting proven AI: an existing system, built and trained to identify the people most at risk from the flu using millions of records spanning 27 years, was adapted to COVID-19.
3. Establish AI that is human-centric with privacy by design and native controls
Think of an AI system that uses mobile phone apps to track infections and trace contacts in an effort to curb new infections. Minority and economically disadvantaged populations have lower rates of smartphone ownership than other groups. AI systems should take these considerations into account to avoid design bias. Doing so will not only ensure adequate protections for vulnerable populations, but also improve the overall efficacy of the system, since these individuals may have high human contact in their jobs. Ensuring that track and trace reaches these populations is critically important.
In the U.S., MIT researchers are developing Private Automatic Contact Tracing (PACT), which uses Bluetooth communications for contact tracing while also preserving individual privacy. If you test positive and inform the app, everyone who has been in close proximity to you in the last 14 days gets a notification. Anonymity and privacy are the biggest keys to responsible use of AI to curb the spread of COVID-19.
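To show the general shape of such a protocol, here is a toy Python sketch of the rotating-token pattern that privacy-preserving contact tracing designs broadly follow. The token schedule, sizes and derivation below are invented for illustration; this is not PACT's actual specification:

```python
# Toy sketch of rotating-token contact tracing (NOT the real PACT spec).
import os, hashlib

def daily_tokens(seed: bytes, day: int, n: int = 96):
    """Derive short-lived broadcast tokens from a private seed.
    Assumption for illustration: one token per 15-minute window."""
    return [hashlib.sha256(seed + day.to_bytes(4, "big")
                           + i.to_bytes(2, "big")).digest()[:16]
            for i in range(n)]

# Each phone broadcasts its current token over Bluetooth and records the
# tokens it hears; no identity or location is ever exchanged.
alice_seed, bob_seed = os.urandom(32), os.urandom(32)
bob_heard = set(daily_tokens(alice_seed, day=100)[:4])  # Bob was near Alice

# If Alice tests positive, she publishes her recent tokens; every phone
# checks locally for overlap with the tokens it heard.
alice_published = {t for day in range(93, 101)
                   for t in daily_tokens(alice_seed, day)}
print("Exposure detected:", not bob_heard.isdisjoint(alice_published))  # True
```

The design choice worth noting: matching happens entirely on the user's device, so no central server ever learns who met whom.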
In India, the government's Aarogya Setu ("bridge to health") app uses a phone's Bluetooth and location data to let users know if they have been near a person with COVID-19. But, again, privacy and anonymity are the keys to responsible and ethical use of AI.
This is a place where the true power of human + AI shines through. As these apps are rolled out, it is important that they are paired with human-based track and trace to account for disadvantaged populations. AI allows automating and scaling track and trace for most of the population; humans ensure we help the most vulnerable.
4. Validate systems and base decisions on sanitized representative data
Think of an AI system that helps doctors make rapid decisions on which patients to treat and how to treat them in an overburdened health care system. One such system developed in Wuhan identified biomarkers that correlate with higher survival rates to help doctors pinpoint which patients likely need critical care and which can avoid the hospital altogether.
The University of Chicago Medical Center is working to upgrade an existing AI system called eCART. The system will be enhanced for COVID-19 to use more than 100 variables to predict the need for intubation eight hours in advance. While eight hours may not seem like much, it gives doctors an opportunity to act before a patient's condition deteriorates.
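As a rough illustration of the underlying idea - not eCART itself, whose variables and model we have no visibility into - here is a sketch of an early-warning classifier trained on synthetic vitals, with every feature, coefficient and threshold invented:

```python
# Sketch of an early-warning risk model on synthetic data (illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
# Invented features: respiratory rate, oxygen saturation, heart rate, age.
X = np.column_stack([
    rng.normal(18, 4, n),   # respiratory rate
    rng.normal(95, 3, n),   # SpO2
    rng.normal(85, 12, n),  # heart rate
    rng.normal(60, 15, n),  # age
])
# Synthetic label: "needs intubation within 8 hours" under a made-up process.
logit = (0.35 * (X[:, 0] - 18) - 0.5 * (X[:, 1] - 95)
         + 0.04 * (X[:, 2] - 85) + 0.03 * (X[:, 3] - 60) - 3)
y = rng.random(n) < 1 / (1 + np.exp(-logit))

model = LogisticRegression().fit(X, y)
# Surface a risk score hours ahead of deterioration; humans act on it.
risk = model.predict_proba(X[:5])[:, 1]
print(np.round(risk, 2))
```

Crucially, a system like this only raises a flag; clinicians decide what to do with the warning.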
But the samples and data sets that systems like these rely on have the potential to produce unreliable outcomes, or ones that reinforce existing biases. If the AI is trained on observations of largely white individuals - as was the case with data in the International Cancer Genome Consortium - how willing would you be to delegate life-and-death health care decisions for a nonwhite patient to it? These are issues that require careful consideration and demonstrate why it is so important to validate not only the systems themselves, but also the data on which they rely.
Questions we must ask
As companies, researchers and governments increasingly leverage AI, a parallel discussion around responsible AI is necessary to ensure benefits are maximized while harmful consequences are minimized. We need better guidelines and assessments of AI around fairness, trustworthiness, bias and ethics.
There are dozens of dimensions we should evaluate every use case against to ensure it is developed in a responsible manner. But, these four simple questions provide a great framework to start a discussion between AI system developers and policy makers who may be considering deploying an AI solution to combat COVID-19.
- What are the consequences if the system makes a mistake? Can we redesign the system to minimize this?
- Can we clearly explain how the AI system produced specific outcomes in a way that is understandable to the general public?
- What are potential sources of bias - data, human and design - and how can they be minimized?
- What steps can be taken to protect the privacy of individuals?
Each of these questions will apply in different ways to particular use cases. A natural language processing (NLP) system sifting through tens of thousands of scientific papers to focus the search for a COVID-19 vaccine poses no direct threat of harm to individuals and performs the task faster than an army of research assistants ever could. Case in point: in April, the Harvard T.H. Chan School of Public Health and the Human Vaccines Project announced the Human Immunomics Initiative, which leverages AI models to accelerate vaccine development.
This is a global effort, with scientists around the world working together to expedite drug discovery and defeat COVID-19 through the use of AI. From the aforementioned work in the U.S. all the way to Australia, where Flinders University is leveraging Oracle cloud technology and vaccine technology developed by Vaxine to develop promising vaccine candidates, we can see AI being used for its most ethical purpose: saving human lives.
Another use case is the omnipresent issue facing us during this pandemic: dissemination of misinformation across the planet. Imagine trying to manually filter the posts of the 1.7 billion daily Facebook users every day and scan for misinformation about COVID-19. This is an ideal project for human + AI - with humans confirming cases of misinformation flagged by AI.
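Here is a minimal Python sketch of that flag-then-review pattern. The keyword-based scorer and threshold below are crude stand-ins for a trained classifier, not any platform's real system:

```python
# Sketch of human + AI content triage: the model narrows the queue,
# humans make the final call. Scorer and threshold are placeholders.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    score: float = 0.0  # model's misinformation probability

def model_score(text: str) -> float:
    """Stand-in for a trained classifier; real systems use learned models."""
    suspicious = ("miracle cure", "5g causes", "drink bleach")
    return 0.9 if any(k in text.lower() for k in suspicious) else 0.05

REVIEW_THRESHOLD = 0.8

def triage(posts):
    for_review, published = [], []
    for p in posts:
        p.score = model_score(p.text)
        (for_review if p.score >= REVIEW_THRESHOLD else published).append(p)
    return for_review, published  # humans review only the flagged queue

queue, ok = triage([Post("Miracle cure found!"), Post("Stay home, save lives.")])
print(len(queue), "flagged for human review;", len(ok), "published")
```

The AI reduces billions of posts to a reviewable queue; humans confirm what is actually misinformation.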
This use case is relatively low risk, but its ultimate success depends on human oversight and engagement. That's even more so the case in the high-risk use cases that are grabbing headlines amidst the COVID-19 pandemic. Human + AI is not just a safeguard against a system gone off the rails, it's critical to AI delivering meaningful and impactful results as illustrated through earlier examples.
We need to classify use cases into three buckets to guide our decision making:
1. Red
- Use case represents a decision that should not be delegated to an AI system.
- Using an AI system to decide which patients receive medical treatment during a crisis. This is a case where humans should ultimately be making the call because of the life-and-death stakes. The medical community has already recognized this, developing ethical frameworks to support these very types of decisions.
2. Yellow
- Use case could be deployed responsibly, but it depends upon the design and execution.
- Using an AI system to monitor adherence to quarantine policies. Whether this is acceptable depends on the design and deployment of the system. For example, using the system to deploy police to neighborhoods to "crack down" on individuals not adhering to quarantine policies would be problematic. But deploying police to these neighborhoods to understand why quarantine is being broken, so policy makers can better address citizen needs, would be legitimate - provided the privacy of individuals is protected.
3. Green
- Use case is low risk and the benefits far outweigh the risks.
- Content filtering on social media platforms to ensure malicious and misleading information regarding COVID-19 is not shared widely.
We must ask our four questions and deliberately analyze the answers we find. Only then can we confidently decide which bucket a project falls into and move forward in a responsible and ethical manner.
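As a schematic sketch of that decision flow - the criteria below are invented placeholders, and a real review is a structured human process, not a script - the bucket logic might look like:

```python
# Schematic red/yellow/green triage with invented criteria (illustrative only).
def classify_use_case(delegates_life_or_death: bool,
                      targets_individuals: bool,
                      privacy_protected: bool,
                      human_in_loop: bool) -> str:
    if delegates_life_or_death and not human_in_loop:
        return "RED: do not delegate this decision to an AI system"
    if targets_individuals or not privacy_protected:
        return "YELLOW: acceptable only with careful design and execution"
    return "GREEN: low risk; benefits far outweigh risks"

# The article's three examples, mapped onto the invented criteria:
print(classify_use_case(True, False, True, False))   # treatment triage -> RED
print(classify_use_case(False, True, False, True))   # quarantine monitoring -> YELLOW
print(classify_use_case(False, False, True, True))   # content filtering -> GREEN
```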
A recent U.S. example
We recently created Lighthouse, a new dynamic navigation cockpit that helps organizations capture a holistic picture of the ongoing crisis. These "lighthouses" are being used to illuminate the multiple dimensions of the situation. For example, we recently partnered with an American city to develop a tool that predicts disruptions in the food supply chain. One data source was based on declines in foot traffic in and around distribution centers. Without accessing any personally identifiable information (PII) - and therefore preserving individual privacy - the tool shows which parts of the city are most likely to suffer shortages, enabling leaders to respond preemptively and prevent an even worse public health crisis.
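For a feel of the aggregate-signal idea behind such a tool, here is a minimal sketch that flags distribution centers whose recent foot traffic falls far below their own baseline, using only anonymous counts. The data, window sizes and cutoff are invented:

```python
# Sketch: shortage risk from anonymous, aggregated foot-traffic counts.
import statistics

# Invented daily visit counts per distribution center (last 14 days).
traffic = {
    "center_north": [520, 510, 530, 495, 505, 515, 500,
                     490, 480, 470, 300, 250, 220, 200],
    "center_south": [310, 305, 300, 315, 320, 310, 295,
                     305, 300, 310, 315, 300, 305, 310],
}

def shortage_risk(counts, recent_days=3, z_cutoff=-2.0):
    """Flag a center when its recent average sits far below its own baseline."""
    baseline, recent = counts[:-recent_days], counts[-recent_days:]
    mu, sigma = statistics.mean(baseline), statistics.stdev(baseline)
    z = (statistics.mean(recent) - mu) / sigma
    return z < z_cutoff, round(z, 1)

for name, counts in traffic.items():
    flagged, z = shortage_risk(counts)
    print(name, "-> at risk" if flagged else "-> normal", f"(z={z})")
```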
This is an easily duplicated process that other organizations can follow to create and implement responsible AI to help the historically disenfranchised navigate and thrive during the age of COVID-19.
Moving forward
When confronting the ethical dilemmas presented by crises like COVID-19, enterprises and organizations equipped with responsible AI programs will be best positioned to offer solutions that protect the most vulnerable and historically disenfranchised groups by respecting privacy, eliminating historical bias and preserving trust. In the rush to "help now," we cannot throw responsible AI out the window. In fact, in the age of COVID-19, it is more important than ever before to understand the unintended consequences and long-term effects of the AI systems we create.