As machine learning algorithms are called upon to make more decisions for organizations, including talent decisions like recruiting and assessment, it is becoming ever more crucial that the performance of these algorithms be regularly monitored and reviewed, just like the performance of an employee. While automation has been held up as a way to eliminate errors of human judgment from bias-prone processes like hiring, in reality, algorithms are only as good as the data from which they learn; if that data contains biases, the algorithm will learn to emulate them.
The risk of algorithmic bias is a matter of pressing concern for organizations taking the leap into AI- and machine learning-enhanced HR processes. The most straightforward solution to algorithmic bias is to rigorously scrutinize the data you are feeding your algorithm and develop checks against biases that might arise based on past practices. Diversifying the teams that design and deploy these algorithms can help ensure that the organization is sensitive to the biases that might arise. As large technology companies make massive investments in these emerging technologies, they are also becoming aware of these challenges and looking for technological solutions to the problem as well. At Fast Company last week, Adele Peters took a look at Accenture’s new Fairness Tool, a program “designed to quickly identify and then help fix problems in algorithms”:
The tool uses statistical methods to identify when groups of people are treated unfairly by an algorithm, defining fairness as predictive parity, meaning that the algorithm is equally likely to be correct or incorrect for each group. “In the past, we have found models that are highly accurate overall, but when you look at how that error breaks down over subgroups, you’ll see a huge difference between how correct the model is for, say, a white man versus a black woman,” [Rumman Chowdhury, Accenture’s global responsible AI lead,] says.
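The kind of subgroup breakdown Chowdhury describes is straightforward to compute: group predictions by a sensitive attribute and compare accuracy across groups. The sketch below is a minimal illustration of that idea, not Accenture’s actual tool; the group labels and data are hypothetical.

```python
def group_accuracy(y_true, y_pred, groups):
    """Return the fraction of correct predictions for each group."""
    stats = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        correct = sum(1 for i in idx if y_true[i] == y_pred[i])
        stats[g] = correct / len(idx)
    return stats

# Hypothetical example: overall accuracy is 62.5%, but it is far from
# evenly distributed across the two groups.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 1, 1, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

acc = group_accuracy(y_true, y_pred, groups)
# acc["a"] == 1.0 while acc["b"] == 0.25: a gap like this is the kind
# of disparity such a tool would flag.
```

A model that looks accurate in aggregate can still fail badly for one subgroup, which is exactly why per-group evaluation matters.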
The tool also looks for variables that are related to other sensitive variables. An algorithm might not explicitly consider gender, for example, but if it looks at income, it could easily have different outcomes for women versus men. (The tool calls this relationship “mutual information.”) It also looks at error rates for each variable, and whether errors are higher for one group than for another. After this analysis, the tool can fix the algorithm, but since making this type of correction can make the algorithm less accurate, it shows what the trade-off would be, and lets the user decide how much to change.
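Mutual information, the measure named above, quantifies how much knowing one variable tells you about another; for discrete variables it can be computed directly from the joint and marginal frequencies. The sketch below shows the standard calculation on hypothetical data (it is not Accenture’s implementation, and the `gender`/`income` values are illustrative).

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Mutual information (in bits) between two discrete variables."""
    n = len(xs)
    px = Counter(xs)
    py = Counter(ys)
    pxy = Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), count in pxy.items():
        p_joint = count / n
        p_indep = (px[x] / n) * (py[y] / n)
        mi += p_joint * math.log2(p_joint / p_indep)
    return mi

# Hypothetical data: income bracket tracks gender closely, so income
# could act as a proxy even if gender is never fed to the model.
gender = ["f", "f", "f", "f", "m", "m", "m", "m"]
income = ["low", "low", "low", "high", "high", "high", "high", "low"]
score = mutual_information(gender, income)
# A value near 0 means the variables are nearly independent; the larger
# the value, the stronger the proxy relationship a tool would flag.
```

This is why dropping the sensitive column is not enough: a correlated feature can quietly carry the same information into the model.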
Microsoft is also developing an algorithmic bias detector, as is Facebook, Will Knight reported at MIT Technology Review last month:
Rich Caruana, a senior researcher at Microsoft who is working on the bias-detection dashboard … says Microsoft’s bias-catching product will help AI researchers catch more instances of unfairness, although not all. “Of course, we can’t expect perfection—there’s always going to be some bias undetected or that can’t be eliminated—the goal is to do as well as we can,” he says.
“The most important thing companies can do right now is educate their workforce so that they’re aware of the myriad ways in which bias can arise and manifest itself and create tools to make models easier to understand and bias easier to detect,” Caruana adds.
Facebook revealed a similar tool at its developer conference in early May, Knight added. The emergence of these tools is an encouraging sign that the tech sector is taking the problem of algorithmic bias seriously. While these tools are useful for researchers and for organizations developing algorithms internally, many organizations are buying their algorithms from outside vendors and may have a limited understanding of how they work. In that case, it is important for leaders to educate themselves about how algorithmic bias happens and to hold vendors accountable for working proactively to mitigate it, beyond just conducting adverse impact tests on the final product.