AI Recruiting Tools Aim to Reduce Bias in the Hiring Process
Two years ago, Amazon reportedly scrapped a secret artificial intelligence hiring tool after realizing that the system had learned to prefer male job candidates and penalize female applicants, the result of training the AI on resumes submitted mostly by male candidates. The episode raised concerns that machine learning in hiring software would perpetuate or even exacerbate existing biases.
Now, with the Black Lives Matter movement spurring new discussions about discrimination and equity issues within the workforce, a number of startups are trying to show that AI-powered recruiting tools can in fact play a positive role in mitigating human bias and help make the hiring process fairer.
These companies claim that, with careful design and training of their AI models, they can specifically address various sources of systemic bias in the recruitment pipeline. It's not a simple task: AI algorithms have a long history of being unfair regarding gender, race, and ethnicity. The strategies these companies adopt include scrubbing identifying information from applications, relying on anonymous interviews and skill-set tests, and even tuning the wording of job postings to attract as diverse a field of candidates as possible.
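To make the first of those strategies concrete, here is a minimal sketch of scrubbing identifying information from an application before it reaches a screening model. The field names and record format are hypothetical, and production tools use far more thorough de-identification.

```python
# A minimal sketch of the "scrub identifying information" strategy described
# above. The field names and the application format are hypothetical; real
# tools rely on far more sophisticated de-identification.

# Fields that can reveal (or proxy for) a candidate's identity or demographics.
PII_FIELDS = {"name", "email", "phone", "address", "photo_url",
              "date_of_birth", "gender", "graduation_year"}

def scrub_application(application: dict) -> dict:
    """Return a copy of an application with identifying fields removed,
    keeping only job-relevant attributes such as skills and work samples."""
    return {key: value for key, value in application.items()
            if key not in PII_FIELDS}

if __name__ == "__main__":
    raw = {"name": "Jane Doe", "email": "jane@example.com",
           "skills": ["python", "sql"], "work_sample_score": 87}
    print(scrub_application(raw))  # {'skills': [...], 'work_sample_score': 87}
```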
One of these firms is GapJumpers, which offers a platform for applicants to take "blind auditions" designed to assess job-related skills. The startup, based in San Francisco, uses machine learning to score and rank each candidate without including any personally identifiable information. Co-founder and CEO Kedar Iyer says this methodology helps reduce the traditional reliance on resumes, which as a source of training data "is riddled with bias," and avoids unwittingly replicating and propagating such biases through the scaled-up reach of automated recruiting.
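GapJumpers' scoring model is proprietary, so the following is only an illustrative sketch of the general idea: candidates are identified by an anonymous ID and ranked purely on their blind-audition results, with no resume or personal details in the loop.

```python
# Illustrative only: GapJumpers' actual scoring model is proprietary. This
# sketch shows the general idea of ranking candidates by blind-audition
# results, identified by an anonymous ID rather than a name or resume.
from dataclasses import dataclass

@dataclass
class BlindAudition:
    candidate_id: str        # anonymous identifier, no PII
    challenge_scores: list   # scores on job-related skill challenges

def rank_candidates(auditions: list) -> list:
    """Rank candidates by average challenge score, highest first."""
    def average(a: BlindAudition) -> float:
        return sum(a.challenge_scores) / len(a.challenge_scores)
    return sorted(auditions, key=average, reverse=True)

if __name__ == "__main__":
    pool = [BlindAudition("cand-001", [82, 90]),
            BlindAudition("cand-002", [95, 88])]
    for audition in rank_candidates(pool):
        print(audition.candidate_id)
```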
That deliberate approach to reducing discrimination may be encouraging more companies to try AI-assisted recruiting. As the Black Lives Matter movement gained widespread support, GapJumpers saw an uptick in queries from potential clients. "We are seeing increased interest from companies of all sizes to improve their diversity efforts," Iyer says.
AI with humans in the loop
Another lesson from Amazon's gender-biased AI is that paying close attention to the design and training of the system is not enough: AI software will almost always require constant human oversight. For developers and recruiters, that means they cannot afford to blindly trust the results of AI-powered tools; they need to understand the processes behind them, learn how different training data affect their behavior, and monitor for bias.
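What that monitoring can look like in practice varies, but one widely used check, sketched below, is the "four-fifths rule" from U.S. employment-selection guidance: flag the system for human review if any group's selection rate falls below 80 percent of the highest group's rate. This is a generic illustration, not the audit procedure of any company mentioned here.

```python
# A generic bias-monitoring check based on the "four-fifths rule": flag the
# screening system if any group's selection rate drops below 80 percent of
# the best-off group's rate. Group labels and counts here are made up.

def selection_rates(outcomes: dict) -> dict:
    """outcomes maps group -> (num_selected, num_applicants)."""
    return {group: selected / applicants
            for group, (selected, applicants) in outcomes.items()}

def adverse_impact_flags(outcomes: dict, threshold: float = 0.8) -> dict:
    """Return each group's ratio to the best-off group's selection rate,
    flagging ratios below the threshold for human review."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {group: (rate / best, rate / best < threshold)
            for group, rate in rates.items()}

if __name__ == "__main__":
    outcomes = {"group_a": (30, 100), "group_b": (18, 100)}
    print(adverse_impact_flags(outcomes))
    # group_b's ratio is 0.6, below 0.8 -> flagged for human review
```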
"One of the unintended consequences would be to continue this historical trend, particularly in tech, where underserved groups such as African Americans are not within a sector that happens to have a compensation that is much greater than others," says Fay Cobb Payton, a professor of information technology and analytics at North Carolina State University, in Raleigh. "You're talking about a wealth gap that persists because groups cannot enter [such sectors], be sustained, and play long term."
Payton and her colleagues highlighted several companies, including GapJumpers, that take an intentional "design justice" approach to hiring diverse IT talent in a paper published last year in the journal Online Information Review.
According to the paper's authors, AI hiring tools can perform a broad spectrum of possible actions. Some may just provide general suggestions about what kind of candidate to hire, others may recommend specific applicants to human recruiters, and some may even make active screening and selection decisions about candidates. But whatever the AI's role in the hiring process, humans need to be able to evaluate the system's decisions and, if necessary, override them.
"I believe that human-in-the-loop should not be at the end of the recommendation that the algorithms suggest," Payton says. "Human-in-the-loop means in the full process of the loop from design to hire, all the way until the experience inside of the organization."
Each decision point in an AI system should allow for an auditing process in which humans can check the results, Payton adds. And of course, it's crucial to have a separation of duties so that the humans auditing the system are not the same people who designed it in the first place.
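A minimal sketch of that kind of stage-by-stage auditing might look like the following, with every algorithmic recommendation logged so that a separate human auditor can review and override it. The stage names and record format are hypothetical.

```python
# Hypothetical sketch of per-stage auditing: each stage of the hiring pipeline
# records the algorithm's recommendation so a separate human auditor (not the
# system's designer) can review and, if needed, override the outcome.
from dataclasses import dataclass, field

@dataclass
class StageDecision:
    stage: str                 # e.g. "sourcing", "screening", "interview"
    candidate_id: str
    algorithm_output: str      # what the model recommended
    human_override: str = ""   # filled in if an auditor reverses the call

@dataclass
class AuditTrail:
    records: list = field(default_factory=list)

    def log(self, decision: StageDecision) -> None:
        self.records.append(decision)

    def override(self, index: int, reviewer_decision: str) -> None:
        """Auditors review logged decisions and can replace the outcome."""
        self.records[index].human_override = reviewer_decision

if __name__ == "__main__":
    trail = AuditTrail()
    trail.log(StageDecision("screening", "cand-002", "reject"))
    trail.override(0, "advance")   # human reviewer reverses the model's call
    print(trail.records[0])
```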
"When we talk about bias, there are so many nuances and spots along this talent acquisition process where bias and bias mitigation come into play," says Lynette Yarger, a professor of information sciences and technology at Pennsylvania State University and lead author on the paper with Payton. She adds that companies that are trying to mitigate these biases "are interesting because they're trying to push human beings to be accountable."
Another example highlighted by Yarger and Payton is a Seattle-based startup called Textio that has trained its AI systems to analyze job advertisements and predict their ability to attract a diverse array of applicants. Textio's "Tone Meter" can help companies offer job listings with more inclusive language: Phrases like "rock star" that attract more male job seekers could be swapped out for the software's suggestion of "high performer" instead.
"We use Textio for our own recruiting communication and have from the beginning," says Kieran Snyder, CEO and co-founder of Textio. "But perhaps because we make the software, we know that Textio on its own is not the whole solution when it comes to building an equitable organization; it's just one piece of the puzzle."
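Textio's models are proprietary and far more sophisticated, but a toy sketch of the kind of substitution described above might look like this: flag phrases associated with male-skewed applicant pools and suggest alternatives. The phrase list here is purely illustrative.

```python
# Toy illustration of inclusive-wording suggestions, not Textio's actual
# system or phrase list: flag phrases linked to male-skewed applicant pools
# and propose replacements.
SUGGESTIONS = {
    "rock star": "high performer",
    "ninja": "expert",
    "dominate": "lead",
}

def suggest_inclusive_wording(job_ad: str) -> list:
    """Return (found_phrase, suggested_replacement) pairs for a job ad."""
    text = job_ad.lower()
    return [(phrase, replacement) for phrase, replacement in SUGGESTIONS.items()
            if phrase in text]

if __name__ == "__main__":
    ad = "We need a rock star engineer who can dominate deadlines."
    print(suggest_inclusive_wording(ad))
    # [('rock star', 'high performer'), ('dominate', 'lead')]
```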
Indeed, many tech companies, including those that develop AI-powered hiring tools, are still working on inclusion and equity. Enterprise software company Workday, founded by former PeopleSoft executives and headquartered in Pleasanton, Calif., has more than 3,700 employees worldwide and clients that include half the Fortune 100. During a company forum on diversity and racial bias in June, Workday acknowledged that Black employees make up just 2.4 percent of its U.S. workforce versus the average of 4.4 percent for Silicon Valley firms, according to SearchHRSoftware, a human resources technology news site.
AI hiring tools: not a quick fix
Another challenge for AI-powered recruiting tools is that some customers expect them to offer a quick fix to a complex problem, when in reality that is not the case. James Doman-Pipe, head of product marketing at Headstart, a recruiting software startup based in London, says any business interested in reducing discrimination with AI or other technologies will need significant buy-in from the leadership and other parts of the organization.
Headstart's software uses machine learning to evaluate job applicants and generate a "match score" that shows how well the candidates fit a job's requirements for skills, education, and experience. "By generating a match score, recruiters are more likely to consider underprivileged and underrepresented minorities to move forward in the recruiting process," Doman-Pipe says. The company claims that in tests comparing the AI-based approach to traditional recruiting methods, clients using its software saw significant improvements in the diversity makeup of new hires.
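Headstart has not published how its model works, so the following is only a rough sketch of the general idea of a match score: comparing a candidate's skills, education, and experience against a job's requirements and reducing the comparison to a single number. The fields and weights are hypothetical.

```python
# Rough sketch of a "match score" in the general sense described above, not
# Headstart's actual model. Fields and weights are hypothetical.

def match_score(candidate: dict, job: dict,
                weights=(0.5, 0.2, 0.3)) -> float:
    """Weighted score in [0, 1] across skills, education, and experience."""
    skill_overlap = (len(set(candidate["skills"]) & set(job["skills"]))
                     / max(len(job["skills"]), 1))
    education_fit = 1.0 if candidate["degree_level"] >= job["degree_level"] else 0.0
    experience_fit = min(candidate["years_experience"]
                         / max(job["years_experience"], 1), 1.0)
    w_skills, w_edu, w_exp = weights
    return w_skills * skill_overlap + w_edu * education_fit + w_exp * experience_fit

if __name__ == "__main__":
    candidate = {"skills": ["python", "sql"], "degree_level": 2,
                 "years_experience": 3}
    job = {"skills": ["python", "sql", "spark"], "degree_level": 2,
           "years_experience": 4}
    print(round(match_score(candidate, job), 2))  # 0.76
```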
Still, one of the greatest obstacles AI-powered recruiting tools face before they can gain widespread trust is the lack of public data showing how different tools help or hinder efforts to make tech hiring more equitable.
"I do know from interviews with software companies that they do audit, and they can go back and recalibrate their systems," Yarger, the Pennsylvania State University professor, says. But the effectiveness of efforts to improve algorithmic equity in recruitment remains unclear. She explains that many companies remain reluctant to publicly share such information because of liability issues surrounding equitable employment and workplace discrimination. Companies using AI tools could face legal consequences if the tools were shown to discriminate against certain groups.
For North Carolina State's Payton, it remains to be seen whether corporate commitments to addressing diversity and racial bias will have a broader and lasting impact on the hiring and retention of tech workers, and whether AI can prove significant in helping to create an equitable workforce.
"Association and confirmation biases and networks that are built into the system, those don't change overnight," she says. "So there's much work to be done."