AI Alone Won’t Fix Your Hiring Bias

A growing number of tools promise to automate recruiting, with the added benefit of removing bias from candidate sourcing and hiring by substituting artificial for human intelligence in hiring decisions. AI projects like the Mya chatbot and Project Manager Tara, along with more established companies like Atlassian, SAP, and HireVue, are optimistic that their technology can remove bias. But, as Simon Chandler of Wired points out, “AI is only as good as the data that powers it,” and right now that data is filled with flaws.

Training algorithms on human-generated data carries a serious risk: it can encode the very biases the algorithms are meant to correct. If an algorithm screens applicants based on the traits and characteristics of a company’s current high performers, the result is simply an automated version of the biases already present in the recruiting process. The Atlantic profiled an illustrative example: tech startup Gild created software to help companies find programming talent. The software collected large amounts of publicly available information to estimate a candidate’s likelihood of success, but some of its variables, such as an affinity for a particular website frequented mostly by men, built bias into its rankings. Even though visiting that site correlated with success, using it as a predictive measure unfairly penalized women.

Our Diversity and Inclusion research team at CEB (now Gartner) has been studying this challenge of algorithmic bias. Our position, available in full to CEB Diversity and Inclusion Leadership Council members, is that the burden of removing this bias falls on the people developing the technology, not on the end users on the recruiting team.

To successfully use technology to help remove bias, companies must constantly and rigorously re-evaluate their systems. Correcting biased data may also mean intentionally steering algorithms toward diversity-enabling outcomes and setting checkpoints that test for bias along the way. Those leading this work must carefully vet each feature of their algorithms to ensure that, as in the Gild example, no variable carries built-in bias. It also helps to involve D&I experts or personnel in development, adoption, and quality control to identify whether such variables exist.
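One concrete form such a checkpoint could take is an adverse-impact test, which compares the rate at which a screening algorithm advances candidates from different groups. The sketch below is a minimal, hypothetical illustration, not a description of any vendor's product: the group labels and counts are invented, and it uses the common "four-fifths" heuristic, under which a group's selection rate below 80% of the highest group's rate is flagged for review.

```python
# Hypothetical adverse-impact checkpoint for an automated screening step.
# Assumes the team logs, per demographic group, how many candidates the
# algorithm advanced out of how many it screened.

def selection_rates(outcomes):
    """outcomes: {group: (advanced, screened)} -> {group: selection rate}"""
    return {g: adv / tot for g, (adv, tot) in outcomes.items()}

def adverse_impact_ratios(outcomes):
    """Compare each group's selection rate to the highest group's rate.
    Ratios below 0.8 fail the common 'four-fifths' heuristic."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

# Invented example numbers for illustration only.
outcomes = {
    "group_a": (90, 200),   # 45% advanced
    "group_b": (54, 200),   # 27% advanced
}

ratios = adverse_impact_ratios(outcomes)
flagged = [g for g, r in ratios.items() if r < 0.8]
print(ratios)   # group_b's ratio is 0.27 / 0.45 = 0.6
print(flagged)  # ['group_b'] -- below the four-fifths threshold
```

A check like this only surfaces disparities; deciding whether a flagged variable is a legitimate predictor or an unfair proxy, as in the Gild case, still requires human and D&I judgment.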

The other promise of automation in recruiting is that talent acquisition staff will have more time for strategic work instead of menial, time-intensive tasks such as resume reading or phone screens. It is important that some of this freed time go toward better managing diversity recruiting goals and the role of technology in enabling them, rather than toward perpetuating existing problems.