How Can Organizations Avoid Algorithmic Bias?

Eliminating personal biases from recruiting and performance management is supposed to be one of the advantages artificial intelligence has over human beings in organizational decision-making. As AI and other new technologies have become increasingly prevalent in the business world over the past few years, several platforms have emerged that promise to remove bias from the hiring process through various technological fixes, and one startup has even created an AI project manager that it says is capable of hiring programmers based solely on the quality of their work.

On the other hand, algorithms and AIs are designed by human beings with biases of their own, and the data fed to machine learning programs to teach them how to think can reflect systemic biases within the society that generated that data. That’s why two researchers at Google and Microsoft have launched a new initiative called AI Now to understand the social implications of an algorithmically managed world. Will Knight profiles the initiative at the MIT Technology Review:

The founders of the new AI Now Initiative, Kate Crawford, a researcher at Microsoft, and Meredith Whittaker, a researcher at Google, say bias may exist in all sorts of services and products. “It’s still early days for understanding algorithmic bias,” Crawford and Whittaker said in an e-mail. “Just this year we’ve seen more systems that have issues, and these are just the ones that have been investigated.” …

A key challenge, these and other researchers say, is that crucial stakeholders, including the companies that develop and apply machine learning systems and government regulators, show little interest in monitoring and limiting algorithmic bias. Financial and technology companies use all sorts of mathematical models and they aren’t transparent about how they operate.

The risk of algorithmic bias is important for employers to keep in mind as they explore the new options available for incorporating AI and machine learning in recruiting. Fortunately, if created and deployed wisely, algorithmic recruiting methods can avoid these pitfalls and in fact be less biased than a process that relies on human judgment, Jean Martin, talent solutions architect at CEB (now Gartner), and Aman Alexander write at Recruiting Trends:

The truth is that bias in algorithmic assessments is not inherent in the use of algorithms, but rather reflects flawed methodologies during algorithm creation. Organizations will generate the most valuable and objective results if they consider a few critical questions when training the algorithms:

  1. What data am I using to train the algorithm?
  2. How can I ensure this model will be representative of my employee/applicant base?
  3. How can I mitigate the risk of bias or adverse impact on any particular group?

Focusing algorithms on post-hire outcomes, such as career longevity or the likelihood of promotion (as opposed to simply which applicants get hired), removes the potential for mimicking human bias in hiring processes. Using a large historical data set against which to apply the algorithm ensures the traits of a small, unrepresentative sample are not generalized too broadly. Finally, validating the demographic and gender distribution of algorithmic scores from historical applicant populations can further protect against any potential bias.

When done right, algorithmic assessment can, in fact, reduce bias in hiring processes while also improving the efficiency and quality of hiring decisions within an organization.
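To make the validation step in the excerpt above concrete, here is a minimal Python sketch of one common check: computing a scoring model's selection rate for each demographic group in a historical applicant population and flagging any group whose rate falls below four-fifths of the highest group's rate, the conventional adverse-impact threshold in US employment guidance. The applicant data, score cutoff, and group labels are all hypothetical; the authors do not prescribe a specific test.

```python
from collections import defaultdict

# Hypothetical historical applicant records: (demographic group, model score).
# Groups, scores, and thresholds are illustrative assumptions only.
applicants = [
    ("group_a", 0.82), ("group_a", 0.67), ("group_a", 0.91), ("group_a", 0.58),
    ("group_b", 0.74), ("group_b", 0.49), ("group_b", 0.68), ("group_b", 0.61),
]
CUTOFF = 0.70        # hypothetical score above which a candidate advances
FOUR_FIFTHS = 0.80   # conventional adverse-impact ("four-fifths rule") threshold

def selection_rates(records, cutoff):
    """Return each group's selection rate under the scoring model."""
    passed, total = defaultdict(int), defaultdict(int)
    for group, score in records:
        total[group] += 1
        if score >= cutoff:
            passed[group] += 1
    return {group: passed[group] / total[group] for group in total}

rates = selection_rates(applicants, CUTOFF)
best = max(rates.values())
for group, rate in rates.items():
    # A group selected at less than 80% of the highest group's rate
    # is flagged for human review as potential adverse impact.
    flag = "FLAG for review" if rate < FOUR_FIFTHS * best else "ok"
    print(f"{group}: selection rate {rate:.2f} ({flag})")
```

A real audit would use far larger samples and proper statistical tests, but the structure of the check, comparing the model's outcomes across groups before deployment, is the same.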

Additionally, avoiding algorithmic bias is yet another good reason for organizations to care about diversity and inclusion, particularly in their technical workforce. As a White House report noted last year, it is important to ensure that the people designing and deploying AI and machine learning are diverse and representative of the communities that will be affected by the changes these technologies bring about. An inclusive AI team will be sensitive to perspectives and challenges that a less diverse team might not even consider.