Several Companies Developing Tools to Address Algorithmic Bias

As machine learning algorithms are called upon to make more decisions for organizations, including talent decisions like recruiting and assessment, it’s becoming even more crucial to make sure that the performance of these algorithms is regularly monitored and reviewed just like the performance of an employee. While automation has been held up as a way to eliminate errors of human judgment from bias-prone processes like hiring, in reality, algorithms are only as good as the data from which they learn, and if that data contains biases, the algorithm will learn to emulate those biases.

The risk of algorithmic bias is a matter of pressing concern for organizations taking the leap into AI- and machine learning-enhanced HR processes. The most straightforward solution to algorithmic bias is to rigorously scrutinize the data you are feeding your algorithm and develop checks against biases that might arise based on past practices. Diversifying the teams that design and deploy these algorithms can help ensure that the organization is sensitive to the biases that might arise. As large technology companies make massive investments in these emerging technologies, they are also becoming aware of these challenges and looking for technological solutions to the problem as well. At Fast Company last week, Adele Peters took a look at Accenture’s new Fairness Tool, a program “designed to quickly identify and then help fix problems in algorithms”:

The tool uses statistical methods to identify when groups of people are treated unfairly by an algorithm, defining unfairness as predictive parity, meaning that the algorithm is equally likely to be correct or incorrect for each group. “In the past, we have found models that are highly accurate overall, but when you look at how that error breaks down over subgroups, you’ll see a huge difference between how correct the model is for, say, a white man versus a black woman,” [Rumman Chowdhury, Accenture’s global responsible AI lead,] says.
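The per-subgroup accuracy check Chowdhury describes can be sketched in a few lines. This is an illustrative example, not Accenture's actual tool: it compares how often a model's predictions are correct for each group and reports the largest gap.

```python
# Illustrative predictive-parity check: compare per-group accuracy.

def group_accuracy(y_true, y_pred, groups):
    """Return the fraction of correct predictions for each group label."""
    totals, correct = {}, {}
    for t, p, g in zip(y_true, y_pred, groups):
        totals[g] = totals.get(g, 0) + 1
        if t == p:
            correct[g] = correct.get(g, 0) + 1
    return {g: correct.get(g, 0) / totals[g] for g in totals}

def parity_gap(y_true, y_pred, groups):
    """Largest difference in accuracy between any two groups."""
    acc = group_accuracy(y_true, y_pred, groups)
    return max(acc.values()) - min(acc.values())

# Toy data: the model is right 3 out of 4 times for group "a"
# but only 2 out of 4 times for group "b".
y_true = [1, 1, 0, 0, 1, 1, 0, 0]
y_pred = [1, 1, 0, 1, 1, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(group_accuracy(y_true, y_pred, groups))  # {'a': 0.75, 'b': 0.5}
print(parity_gap(y_true, y_pred, groups))      # 0.25
```

A model that looks strong on overall accuracy (here, 62.5 percent) can still hide a large gap between groups, which is exactly the failure mode Chowdhury describes.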

Read more

Who’s to Blame for a Biased Algorithm?

No one ever intends to create a biased algorithm and there are huge downsides for using one, so why do these algorithms keep appearing, and whose fault is it when they do? The simplest explanation for why algorithmic bias keeps happening is that it is legitimately hard to avoid. As for the second question, there is no consensus between algorithm developers and their customers about who is ultimately responsible for quality. In reality, they are both to blame.

Vendors and in-house data science teams have many options for mitigating bias in their algorithms, from reducing cognitive biases and diversifying programming teams to running checklists of quality tests and convening AI ethics boards. Unfortunately, they are seldom motivated to take these steps proactively, because doing so lengthens their timelines and raises the risk of an adverse finding that can derail a project indefinitely.

At the same time, clients are not asking for more extensive oversight or testing beyond what the developer offers them. The client usually doesn’t know enough about how these algorithms work to ask probing questions that might expose problems. As a result, the vendor doesn’t test or take precautions beyond their own minimum standards, which can vary widely.

In a recent interview with Employee Benefit News, HireVue’s Chief IO Psychologist Nathan Mondragon discussed a situation in which his company built a client an employee selection algorithm that failed adverse impact tests. The bias, Mondragon said, was not created by HireVue’s algorithm, but rather already existed in the company’s historical hiring data, skewing the algorithm’s results. In his description, they told the customer: “There’s no bias in the algorithm, but you have a bias in your hiring decisions, so you need to fix that or … the system will just perpetuate itself.”

In this case, Mondragon is right that responsibility for the bias identified in the adverse impact test began with the client. However, I would argue that vendors who do this work repeatedly for many clients should anticipate this outcome and accept some responsibility for not detecting the bias at the start of the project or mitigating it in the course of algorithm development. Finding out that bias exists in the historical data only at the adverse impact testing phase, typically one of the last steps, is the developer’s fault.
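In US practice, the adverse impact tests mentioned above are commonly operationalized with the EEOC's four-fifths rule: if any group's selection rate falls below 80 percent of the highest group's rate, that is conventionally treated as evidence of adverse impact. A minimal sketch of the check, using hypothetical applicant counts:

```python
# Illustrative four-fifths (80%) rule check for adverse impact.

def selection_rates(selected, applicants):
    """Both arguments are dicts mapping group -> counts."""
    return {g: selected.get(g, 0) / applicants[g] for g in applicants}

def four_fifths_check(selected, applicants):
    """Return each group's selection rate and whether it passes the rule."""
    rates = selection_rates(selected, applicants)
    top = max(rates.values())
    return {g: (rate, rate / top >= 0.8) for g, rate in rates.items()}

# Hypothetical data: 60 of 100 group-A applicants hired vs. 30 of 100 group-B.
applicants = {"A": 100, "B": 100}
selected = {"A": 60, "B": 30}
print(four_fifths_check(selected, applicants))
# {'A': (0.6, True), 'B': (0.3, False)} -- B's rate is half of A's: flagged
```

Because this test is cheap to run, there is little reason a developer couldn't apply it to the client's historical hiring data at the start of a project rather than discovering the problem at the final testing phase.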

Read more

Do Your Algorithms Need a Performance Review?

In today’s digital organizations, HR departments are increasingly using algorithms to aid in their decision-making, predicting who is a retention risk, who is ready for a promotion, and whom to hire. For the employees and candidates subject to these decisions, these are important, even life-changing, events, and so we would expect the people making them to be closely supervised and held to a set of known performance criteria. Does anyone supervise the algorithms in the same way?

Algorithms don’t monitor themselves. Replacing a portion of your recruiting team with AI doesn’t obviate the need to manage the performance of that AI in the same way you would have managed the performance of the recruiter. To ensure that the decisions of an AI-enhanced HR function are fair, accurate, and right for the business, organizations must establish performance criteria for algorithms and a process to review them periodically.

A recent special report in The Economist illustrates the significant extent to which AI is already changing the way HR works. The report covers eight major companies that are now using algorithms in human resource management, which they either developed internally or bought from a growing field of vendors for use cases including recruiting, internal mobility, retention risk, and pay equity. These practices are increasingly mainstream; 2018 may mark the year of transition between “early adopters” and “early majority” in the life cycle of this technology.

At this point, it is essential that leaders ask themselves whether their organizations have management practices in place to supervise the decisions of these algorithms. The Economist concludes its report with a reminder about transparency, supervision, and bias, noting that companies “will need to ensure that algorithms are being constantly monitored,” particularly when it comes to preventing bias.

Read more

Gartner’s Peter Sondergaard on Technology’s Future in the Workplace

The 2018 World Economic Forum, recently concluded in Davos, Switzerland, brought together political, business, and cultural leaders from around the globe to discuss the future of the global economy and its foremost institutions. Gartner EVP Peter Sondergaard was on hand to take in the events and speak with influencers at the forum, where he observed a few key themes in discussions of the future of the workplace: the increasingly digital nature of business, the rise of artificial intelligence, and the impact technology can have on improving diversity and inclusion.

“It became abundantly clear that organizations have reached the point at which the digital workplace must be driven by both CIOs and heads of HR,” Sondergaard explained. This doesn’t mean technology will eliminate the need for people, just that employees will need to work in different ways and companies will need to offer guidance on how to do that. “Such changes will require new models of learning and development,” he continued, “as well as the creation of hybrid workplaces that combine technology and information to accommodate a mix of employees.”

Certainly, we have seen a wide range of technologies promise to reshape how the people and processes of the workplace operate, but artificial intelligence is the driving force behind the most groundbreaking offerings. It’s powering Google Jobs, wearable tech, analytical tools, and voice-activated tech such as Amazon’s Alexa, as well as the automation of processes from candidate sourcing to performance management. As a result, demand for AI talent has skyrocketed as technology providers are scrambling to keep up with the rapid rate of change.

While the rise of AI has fueled fears of massive job losses, Sondergaard is confident that AI will ultimately create jobs if deployed properly. “As was true of the Industrial Revolution,” he also pointed out, “technological advances as a result of AI will spur job creation. In 2020, AI will create 2.3 million jobs, while eliminating 1.8 million — a net growth of half a million new positions. Organizations will realize an added benefit as in 2021 AI augmentation will generate $2.9 trillion of business value and save 6.2 billion hours of worker productivity.”

Read more

How the Workplace Will Change in 2018

Over the past few years, we have witnessed a marked acceleration in the pace of change in the workplace. Each year brings with it new innovations, ideas, and passing fads, as well as social, political, and economic events that affect employers all across the world. 2017 was no exception: Tight labor markets driving competition for talent, concerns over automation and displacement amid the growing embrace of new technologies, the first year of the Trump administration, and the rise of the #MeToo movement were just a few of the many events and trends that impacted the working world last year. In 2018, we anticipate that some of these developments will continue to reverberate, while new challenges and opportunities will arrive.

Here are some of the major developments that employers can expect to see this year, in the US and around the world:

The Sexual Harassment Reckoning Will Only Grow

In the second half of 2017, revelations of sexual harassment, misconduct, and assault poured out of Silicon Valley and Hollywood, sparking a long-overdue conversation about the treatment of women and the harboring of known abusers in these male-dominated industries, as well as in politics, media, and other fields. Powerful men, from Hollywood moguls to tech CEOs to members of the US Congress, were toppled by multiple allegations of sexual misconduct ranging from inappropriate workplace behavior to outright assault. Organizations in all sectors are facing unprecedented public attention to their sexual harassment policies, how diligently they enforce them, and whether they uphold an inclusive and respectful work environment. If the reckoning didn’t come to your industry in the past few months, it likely will this year. Business leaders in corporate America and around the world will have their past and present behavior scrutinized, and some will be exposed as abusers and face strong public and investor pressure to step down. Addressing toxic workplace cultures that enable sexual harassment will become an issue of even greater concern for directors and HR leaders. Companies can ill afford to close their eyes and hope for this problem to go away on its own; time really is up.

The Private Sector Will Lead the Way on Raising the Minimum Wage

Congress is unlikely to take action to increase the federal minimum wage in 2018. Some states will raise their minimum wages, as will some cities, while other states will take action to preempt local hikes. Meanwhile, companies will take it upon themselves to increase their pay floors in order to attract and retain talent in a tight labor market. As large employers of low-wage hourly workers like Walmart and Target increase their own minimum wages, other companies will need to follow suit to remain competitive.

Technology, Social Media, and Journalists Will Continue to Bring Transparency into Company Culture

Companies’ cultures and employer brands are in the spotlight now more than ever before. The decisions, approaches, policies, and beliefs through which companies manage their employees will play a dramatically larger role in how consumers and investors (not just candidates and employees) view the company. In 2018, this will put pressure on companies to manage their employer brands through HR as aggressively as they protect their consumer brands through PR.

CEOs Will Be Forced to Take Stands on Political and Social Issues

Throughout 2018, the political polarization and dysfunction that has prevailed in Washington, D.C. recently will almost certainly persist, while gender equality, diversity, immigration, LGBT rights, and other issues with major workplace implications will remain hot-button topics. While some CEOs have already found their voices when it comes to responding to the news of the day, others will feel pressure this year from customers, employees, and investors alike to be more vocal about their beliefs and to back them up with concrete actions within their companies.

AI Will Play a Bigger Role in Hiring, Raising the Risk of Algorithmic Bias

The use of AI and algorithms in hiring decisions has already grown dramatically. In 2018, companies will continue to adopt these technologies, but many will also begin to recognize the danger of algorithmic bias. While these automated solutions have shown promise in terms of improving quality, efficiency, and even fairness in the recruiting process, they also run the risk of harming diversity in the workforce by replicating biases that already exist within the company.

Adoption of Wearables in the Workplace Will Increase

In 2017, 3 percent of companies introduced wearable technology in the workplace, giving employees smart badges to monitor their behavior in order to track productivity and identify inefficiencies in the use of office space. In 2018, as more companies adopt technology that can track the location and behavioral data of employees, companies will begin to use this data to redesign workspaces, schedules, and workflows to maximize employee productivity. As these technologies become more mainstream, employers may not have to worry as much as they think about employees resisting their implementation, but should think carefully about how much actionable insight they are gaining by monitoring their employees.

More Employees Will Change Jobs Due to a Lack of Respect

While compensation continues to be the top driver of attraction for candidates globally, respect was the fourth most important driver in our Global Talent Monitor Report for Q3 2017. In 2018, the labor market will remain tight and employees will feel that they have enough leverage to speak openly about a lack of respect or appreciation. If companies aren’t able to provide increased compensation or opportunities for growth, they should look for ways to make employees feel more respected in order to retain talent.

When Artificial Intelligence Goes Headhunting

Over the past few years, recruiting leaders have been struggling to find the best ways to apply a plethora of emerging technologies to their function. The challenge of keeping up with the pace of technological change, along with the potential headache of implementing new tools and the risk of making an expensive mistake, can make recruiting leaders understandably hesitant to take the plunge. Woo, a recruiting platform that matches employers with potential job candidates, is hoping to make it as easy as possible with the launch of Helena, an AI-powered headhunter that automates candidate sourcing and also communicates to the company on behalf of the candidate.

Woo, which has received over $11 million in funding, claims that 52 percent of candidates sourced by Helena advance to the interview stage, compared to about 20 percent of human-sourced candidates. Helena automatically finds the best candidates by matching them to the company and role description, makes the first outreach, and then works on behalf of both the candidate and the company. In addition, the product includes data about how similar companies’ listings for similar roles are performing and how and why job seekers choose not to pursue an opportunity.

While Woo is currently trying to automate just the start of the recruiting process, its founder and CEO Liran Kotzer tells Forbes he believes they can automate the whole thing:

“The recruitment market is broken,” Kotzer says. “It’s a 200bn market in the US alone – and the problem we have is that 95 per cent of the effort and money spent in that market is wasted. When you have talent and employers trying to find each other – 95 per cent of both of their efforts are going on filtering each other. Even if they go to interview, most interviews end without a hire, so that’s another point where both parties filter each other out…

Read more

AI Alone Won’t Fix Your Hiring Bias

Numerous technological tools are promising to automate recruiting with the added bonus of helping to eliminate bias from the candidate sourcing and hiring process by using artificial rather than human intelligence to make hiring decisions. AI projects like the Mya chatbot or Project Manager Tara, along with more established companies like Atlassian, SAP, and HireVue, are optimistic that their technology can remove bias, but, as Simon Chandler of Wired points out, “AI is only as good as the data that powers it,” and right now that data is filled with flaws.

There is a great risk in training algorithms on human-generated data, because doing so can program them with the same biases they are meant to correct. If an algorithm screens applicants based on the traits and characteristics of a company’s current high performers, the end result will simply be an automated version of the biases already present in the recruiting process. The Atlantic profiled an illustrative example of this: tech startup Gild created software to help companies find programming talent. The software mined publicly available information to estimate a candidate’s likelihood of success, but some of its variables, such as an affinity for a particular website frequented mostly by men, instilled bias into its rankings. Though visiting that site correlated with success, using it as a predictive measure unfairly penalized women.
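The Gild failure mode can be reproduced in miniature. The sketch below uses entirely synthetic data (all the numbers are assumptions for illustration, not Gild's actual model) to show how a scoring model that never sees gender can still skew its rankings by rewarding a proxy feature, site visits, that happens to correlate with gender:

```python
# Hypothetical illustration of proxy bias: a feature ("visits_site") that
# correlates with gender leaks gender into rankings even though gender
# itself is never an input to the score.
import random

random.seed(0)

candidates = []
for _ in range(1000):
    gender = random.choice(["m", "f"])
    # Assumption for illustration: men visit the site far more often,
    # independent of actual ability.
    visits_site = random.random() < (0.6 if gender == "m" else 0.1)
    ability = random.random()  # true ability is gender-neutral
    score = ability + (0.5 if visits_site else 0.0)  # model rewards the proxy
    candidates.append((score, gender))

# Take the 100 highest-scoring candidates and measure the gender mix.
top = sorted(candidates, reverse=True)[:100]
share_f = sum(1 for _, g in top if g == "f") / len(top)
print(f"women in top 100: {share_f:.0%}")  # well below the ~50% applicant share
```

Even though ability is distributed identically across groups here, the proxy bonus crowds women out of the top of the ranking, which is the same mechanism that undermined Gild's predictor.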

Our Diversity and Inclusion research team at CEB (now Gartner) has been looking into this challenge of algorithmic bias. Our position, which CEB Diversity and Inclusion Leadership Council members can read in full here, is that the burden of removing this bias is on the people developing the technology, not the end users on the recruiting team.

Read more