Who’s to Blame for a Biased Algorithm?

No one sets out to create a biased algorithm, and the downsides of using one are huge, so why do these algorithms keep appearing, and whose fault is it when they do? The simplest explanation for why algorithmic bias keeps happening is that it is genuinely hard to avoid. As for the second question, there is no consensus between algorithm developers and their customers about who is ultimately responsible for quality. In reality, they are both to blame.

Vendors and in-house data science teams have many options for mitigating bias in their algorithms: reducing cognitive biases, including more female programmers, running checklists of quality tests, and launching AI ethics boards. Unfortunately, they are seldom motivated to take these steps proactively, because doing so lengthens their timelines and raises the risk of an adverse finding that could derail a project indefinitely.

At the same time, clients are not asking for oversight or testing beyond what the developer offers them. The client usually doesn’t know enough about how these algorithms work to ask the probing questions that might expose problems. As a result, the vendor doesn’t test or take precautions beyond its own minimum standards, which can vary widely.

In a recent interview with Employee Benefit News, HireVue’s Chief IO Psychologist Nathan Mondragon discussed a situation in which his company built a client an employee selection algorithm that failed adverse impact tests. The bias, Mondragon said, was not created by HireVue’s algorithm, but rather already existed in the company’s historical hiring data, skewing the algorithm’s results. In his description, they told the customer: “There’s no bias in the algorithm, but you have a bias in your hiring decisions, so you need to fix that or … the system will just perpetuate itself.”

In this case, Mondragon is right that responsibility for the bias identified in the adverse impact test began with the client. However, I would argue that vendors who do this work repeatedly for many clients should anticipate this outcome and accept some responsibility for not detecting the bias at the start of the project or mitigating it in the course of algorithm development. Finding out that bias exists in the historical data only at the adverse impact testing phase, typically one of the last steps, is the developer’s fault.
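For readers unfamiliar with this kind of testing: in US hiring contexts, adverse impact is commonly assessed with the four-fifths (80 percent) rule from the EEOC’s Uniform Guidelines, under which no group’s selection rate should fall below 80 percent of the highest group’s rate. The sketch below shows how such a check could be run early, against the historical hiring data itself, rather than only at the end of development. It is a minimal illustration: the data layout and function names are hypothetical and not drawn from HireVue’s actual process.

```python
# Minimal sketch of a four-fifths-rule adverse impact check.
# Assumes historical hiring data as (group, hired) pairs; all names here
# are hypothetical, not any vendor's actual API.

def selection_rates(records):
    """Compute the per-group selection rate from (group, hired_bool) pairs."""
    totals, hires = {}, {}
    for group, hired in records:
        totals[group] = totals.get(group, 0) + 1
        hires[group] = hires.get(group, 0) + int(hired)
    return {g: hires[g] / totals[g] for g in totals}

def four_fifths_check(records, threshold=0.8):
    """Flag any group whose selection rate is below 80% of the highest rate."""
    rates = selection_rates(records)
    top = max(rates.values())
    return {g: (rate, rate / top >= threshold) for g, rate in rates.items()}

# Hypothetical example: group A is hired at 60%, group B at 35%.
sample = ([("A", True)] * 60 + [("A", False)] * 40
          + [("B", True)] * 35 + [("B", False)] * 65)
for group, (rate, passes) in four_fifths_check(sample).items():
    print(f"group {group}: selection rate {rate:.2f}, passes four-fifths rule: {passes}")
```

Run on the sample data, group B’s rate (0.35) is only about 58 percent of group A’s (0.60), so it fails the check. A test this simple can be applied to the client’s historical data before any model is built, which is the point: bias of this kind is detectable at the start of a project, not just at the end.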


Most Recruiters Have Little Confidence in Ability to Assess Entry-Level Applicants

An illuminating new survey of recruitment professionals conducted by Mercer and the Society for Human Resource Management finds that only 20 percent are fully confident in their organizations’ ability to assess the skills of candidates for entry-level positions using traditional methods such as interviewing or reading applications and résumés. SHRM’s Roy Maurer elaborates on the findings:

Most employers use in-person interviews (95 percent), application reviews (87 percent) and resume reviews (86 percent), but nearly one-half of respondents said they have “little or no confidence” in application and resume reviews.

“Since application and resume reviews are typically the first line of screening for job applicants, many candidates never even get to the interview,” said Barb Marder, a senior partner at global consultancy Mercer. Respondents expressed much more confidence in using in-person interviews to assess candidates. Marder added that entry-level applicants without any work experience often have trouble getting past the review phase because HR dismisses them for lack of experience.
