Are We Building Biased Machines?

Machine learning and AI are widely seen as tools that can help organizations weed out unconscious bias from their recruiting and performance management processes. But as Yvonne Baur argues in TechCrunch, the programmers developing these new technologies are human, too, and need to be cognizant of how their own biases can infect the tools they build:

Let’s look at Google’s word2vec, for example. Using a multimillion-item Google News data set, Google researchers extracted patterns of words that are related to each other. By representing the terms in a vector space, they were able to deduce relationships between words with simple vector algebra. For instance, the system can answer questions such as “sister is to woman as brother is to what?” (sister:woman :: brother:?) correctly with “man.” But therein lies the challenge: because the system is trained on existing news articles, it also absorbs the biases in those articles. And in the Google News set, those articles proved to be shockingly biased. For instance, if you enter “father:doctor :: mother:?” it answers “nurse.” For “man:computer programmer :: woman:?” it will give you “homemaker.”
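To make the vector arithmetic concrete, here is a minimal sketch using the gensim library and the pretrained Google News vectors. The file name, the phrase token “computer_programmer,” and the exact outputs are assumptions that depend on the model version you download; the point is only to show how the analogy queries are computed.

```python
# A minimal sketch of the analogy arithmetic described above, using gensim.
# Assumes the pretrained Google News vectors file has been downloaded locally
# (commonly distributed as GoogleNews-vectors-negative300.bin).
from gensim.models import KeyedVectors

# Load the pretrained word vectors (large file; loading takes a while).
vectors = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True
)

def analogy(a, b, c):
    """Answer 'a is to b as c is to ?' via vector algebra: b - a + c."""
    result = vectors.most_similar(positive=[b, c], negative=[a], topn=1)
    return result[0][0]

print(analogy("sister", "woman", "brother"))            # expected: "man"
print(analogy("father", "doctor", "mother"))            # reported answer: "nurse"
# Multi-word phrases in this model are joined with underscores.
print(analogy("man", "computer_programmer", "woman"))   # reported answer: "homemaker"
```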

So, does this mean machine learning is sexist? No. But this example of machine learning ruthlessly exposes the bias that still exists in our journalism and journalists today. Statistically, the answers are correct given only what can be derived from the articles. But the articles themselves are obviously biased.

Similarly, if bias exists in your organization, whether in the way people are hired, developed, or promoted, simply taking the existing data as the basis for your machine learning may produce the opposite of what you intend: it can reinforce and amplify bias instead of eliminating it. If you have always promoted men, the system may well learn that being a man is a predictor of getting promoted.
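As a hypothetical illustration of that point, the sketch below fits a simple classifier on synthetic, deliberately skewed promotion data. The feature names and numbers are invented for the example, but the pattern is the one described above: when historical decisions favored men, the model picks up gender as a predictor.

```python
# Hypothetical illustration: a model trained on biased historical promotion
# data learns to treat gender as a predictor. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

is_man = rng.integers(0, 2, n)         # 1 = man, 0 = woman
performance = rng.normal(0, 1, n)      # performance score, identical by gender

# Simulated historical decisions: performance matters, but men were favored.
promoted = (0.8 * performance + 1.0 * is_man + rng.normal(0, 1, n)) > 1.0

model = LogisticRegression().fit(np.column_stack([performance, is_man]), promoted)
print(dict(zip(["performance", "is_man"], model.coef_[0].round(2))))
# The coefficient on is_man comes out strongly positive: the model has
# "learned" that being a man predicts promotion, reproducing the old bias.
```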

Personally, I find this possibility frightening. If human biases creep into algorithms and machine learning, then these tools are not serving diversity and inclusion efforts, but rather perpetuating the cycle of bias.

To me, this points to the importance of hiring programmers who value diversity, are aware of their implicit biases, and are willing to resist them. The problem can also be mitigated by making sure that the teams programming these machines and creating these algorithms are diverse themselves, so that different viewpoints and biases can be recognized.

It’s just another example of how much work businesses still have to do in tackling the problem of unconscious bias. In our recent webinar on overcoming bias to advance the underrepresented workforce, professors Stephanie Johnson and David Hekman explained some additional biases that can impede diversity initiatives and suggested ways of dealing with those biases to effectively promote diversity in an organization. CEB Diversity and Inclusion Leadership Council members can watch a replay of the webinar here.