Amazon Abandoned AI Recruiting Tool After It Learned to Discriminate Against Women

Amazon canceled a multi-year project to develop an experimental automated recruiting engine after the e-commerce giant’s machine learning team discovered that the system was exhibiting explicit bias against women, Reuters reports. The engine, which the team began building in 2014, used artificial intelligence to filter résumés and score candidates on a scale from one to five stars. Within a year of starting the project, however, it became clear that the algorithm was discriminating against female candidates when reviewing them for technical roles.

Because the AI was taught to evaluate candidates based on patterns it found in ten years of résumés submitted to Amazon, most of which came from men, the system “taught itself that male candidates were preferable,” according to Reuters:

It penalized resumes that included the word “women’s,” as in “women’s chess club captain.” And it downgraded graduates of two all-women’s colleges, according to people familiar with the matter. They did not specify the names of the schools. Amazon edited the programs to make them neutral to these particular terms. But that was no guarantee that the machines would not devise other ways of sorting candidates that could prove discriminatory, the people said.

The company scuttled the project by the start of 2017 after executives lost faith in it. By that time, however, it may already have helped perpetuate gender bias in Amazon’s own hiring practices. The company told Reuters its recruiters never used the engine to evaluate candidates, but did not dispute claims from people familiar with the project that they had looked at the recommendations it generated.

Google Adds New Search Feature to Help Veterans Find Jobs

Google on Monday introduced a feature in its job search functionality specifically geared toward helping veterans find jobs. Matthew Hudson, a program manager for Google Cloud who previously served in the US Air Force as a civil engineer, announced the news in a blog post:

Starting today, service members can search ‘jobs for veterans’ on Google and then enter their specific military job codes (MOS, AFSC, NEC, etc.) to see relevant civilian jobs that require similar skills to those used in their military roles. We’re also making this capability available to any employer or job board to use on their own property through our Cloud Talent Solution. As of today, service members can enter their military job codes on any career site using Talent Solution, including FedEx Careers, Encompass Health Careers, Siemens Careers, CareerBuilder and Getting Hired.

This is just one of several steps the search giant is taking to support veterans. To help those who start their own businesses, Google will now allow establishments to identify themselves as veteran-owned or led when they pop up on Google Maps or in Google search mobile listings. Additionally, Google.org is giving a $2.5 million grant to the United Service Organizations (USO) to incorporate the Google IT support certificate into their programming. Google first made the certification available outside the company earlier this year through a partnership with Coursera.

Several Companies Developing Tools to Address Algorithmic Bias

As machine learning algorithms are called upon to make more decisions for organizations, including talent decisions like recruiting and assessment, it’s becoming even more crucial to make sure that the performance of these algorithms is regularly monitored and reviewed just like the performance of an employee. While automation has been held up as a way to eliminate errors of human judgment from bias-prone processes like hiring, in reality, algorithms are only as good as the data from which they learn, and if that data contains biases, the algorithm will learn to emulate those biases.
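That mechanism is easy to reproduce in miniature. The sketch below (illustrative toy data, not any vendor's actual system) trains a naive keyword scorer on historical hiring outcomes that skew male; a group-correlated token like “women’s” ends up penalized even though it carries no signal about ability:

```python
from collections import Counter

# Illustrative toy data: (résumé keywords, 1 if historically hired).
# Past hires skew heavily toward one group, so group-correlated tokens
# become proxies for group membership.
history = [
    ("java leadership chess", 1),
    ("java systems chess", 1),
    ("python systems leadership", 1),
    ("java python systems", 1),
    ("women's chess python", 0),
    ("women's leadership java", 0),
]

hired_tokens = Counter()
rejected_tokens = Counter()
for resume, hired in history:
    (hired_tokens if hired else rejected_tokens).update(resume.split())

def score(resume):
    """Naive learned score: sum of per-token hire-vs-reject evidence."""
    return sum(hired_tokens[t] - rejected_tokens[t] for t in resume.split())

# "women's" never co-occurred with a hire in the training data, so it
# drags the score down even when the listed skills are identical.
print(score("java chess"))          # → 3
print(score("women's java chess"))  # → 1
```

Nothing in the scorer mentions gender; the penalty emerges entirely from the skew in the historical labels, which is the pattern Reuters describes.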

The risk of algorithmic bias is a matter of pressing concern for organizations taking the leap into AI- and machine learning-enhanced HR processes. The most straightforward solution to algorithmic bias is to rigorously scrutinize the data you are feeding your algorithm and develop checks against biases that might arise based on past practices. Diversifying the teams that design and deploy these algorithms can help ensure that the organization is sensitive to the biases that might arise. As large technology companies make massive investments in these emerging technologies, they are also becoming aware of these challenges and looking for technological solutions to the problem. At Fast Company last week, Adele Peters took a look at Accenture’s new Fairness Tool, a program “designed to quickly identify and then help fix problems in algorithms”:

The tool uses statistical methods to identify when groups of people are treated unfairly by an algorithm, defining unfairness as predictive parity, meaning that the algorithm is equally likely to be correct or incorrect for each group. “In the past, we have found models that are highly accurate overall, but when you look at how that error breaks down over subgroups, you’ll see a huge difference between how correct the model is for, say, a white man versus a black woman,” [Rumman Chowdhury, Accenture’s global responsible AI lead,] says.
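Accenture has not published the tool’s internals, but the per-group error comparison Chowdhury describes can be sketched in a few lines (the groups and records below are hypothetical):

```python
from collections import defaultdict

def accuracy_by_group(records):
    """records: (group, predicted_label, true_label) triples.
    Returns accuracy per group -- the quantity compared when checking
    whether a model is equally likely to be correct for each group."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, pred, truth in records:
        total[group] += 1
        correct[group] += int(pred == truth)
    return {g: correct[g] / total[g] for g in total}

# Hypothetical scored candidates: overall accuracy looks fine (8 of 10),
# but the errors are concentrated in one subgroup.
records = [
    ("A", 1, 1), ("A", 0, 0), ("A", 1, 1), ("A", 0, 0), ("A", 1, 1),
    ("B", 1, 0), ("B", 0, 1), ("B", 1, 1), ("B", 0, 0), ("B", 1, 1),
]
print(accuracy_by_group(records))  # {'A': 1.0, 'B': 0.6}
```

An aggregate accuracy number would hide exactly the disparity this breakdown surfaces, which is why the per-subgroup view matters.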

Who’s to Blame for a Biased Algorithm?

No one ever intends to create a biased algorithm and there are huge downsides for using one, so why do these algorithms keep appearing, and whose fault is it when they do? The simplest explanation for why algorithmic bias keeps happening is that it is legitimately hard to avoid. As for the second question, there is no consensus between algorithm developers and their customers about who is ultimately responsible for quality. In reality, they are both to blame.

Vendors and in-house data science teams have many options for mitigating bias in their algorithms, from reducing cognitive biases and including more female programmers to running checklists of quality tests and launching AI ethics boards. Unfortunately, they are seldom motivated to take these steps proactively, because doing so lengthens development timelines and raises the risk of an adverse finding that could derail a project indefinitely.

At the same time, clients are not asking for more extensive oversight or testing beyond what the developer offers them. The client usually doesn’t know enough about how these algorithms work to ask probing questions that might expose problems. As a result, the vendor doesn’t test or take precautions beyond their own minimum standards, which can vary widely.

In a recent interview with Employee Benefit News, HireVue’s Chief IO Psychologist Nathan Mondragon discussed a situation in which his company built a client an employee selection algorithm that failed adverse impact tests. The bias, Mondragon said, was not created by HireVue’s algorithm, but rather already existed in the company’s historical hiring data, skewing the algorithm’s results. In his description, they told the customer: “There’s no bias in the algorithm, but you have a bias in your hiring decisions, so you need to fix that or … the system will just perpetuate itself.”

In this case, Mondragon is right that responsibility for the bias identified in the adverse impact test began with the client. However, I would argue that vendors who do this work repeatedly for many clients should anticipate this outcome and accept some responsibility for not detecting the bias at the start of the project or mitigating it in the course of algorithm development. Finding out that bias exists in the historical data only at the adverse impact testing phase, typically one of the last steps, is the developer’s fault.
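The interview does not specify which adverse impact test was run, but a common standard in US employee selection is the EEOC’s four-fifths rule: the selection rate for any group should be at least 80 percent of the highest group’s rate. A minimal version of that check, with hypothetical counts, looks like this:

```python
def adverse_impact_ratio(selected, applicants):
    """Ratio of the lowest group selection rate to the highest.
    selected / applicants: dicts mapping group name -> counts."""
    rates = {g: selected[g] / applicants[g] for g in applicants}
    return min(rates.values()) / max(rates.values())

# Hypothetical outcome of an algorithm trained on skewed historical hires:
selected = {"men": 30, "women": 12}
applicants = {"men": 100, "women": 100}
ratio = adverse_impact_ratio(selected, applicants)
print(round(ratio, 2))  # 0.4 -- well below the four-fifths (0.8) threshold
```

Because the check is this cheap to run, there is little reason a vendor could not apply it to the client’s historical data at the start of a project rather than discovering the problem at the final testing phase.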

Do Your Algorithms Need a Performance Review?

In today’s digital organizations, HR departments are increasingly using algorithms to aid in their decision-making by predicting who is a retention risk, who is ready for a promotion, and whom to hire. For the employees and candidates subjected to these decisions, these are important, even life-changing, events, and so we would expect the people making them to be closely supervised and held to a set of known performance criteria. Does anyone supervise the algorithms in the same way?

Algorithms don’t monitor themselves. Replacing a portion of your recruiting team with AI doesn’t obviate the need to manage the performance of that AI in the same way you would have managed the performance of the recruiter. To ensure that the decisions of an AI-enhanced HR function are fair, accurate, and right for the business, organizations must establish performance criteria for algorithms and a process to review them periodically.
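What such a periodic review looks like is necessarily organization-specific, but the basic shape is simple: agree on metric thresholds up front, then compare each review cycle’s measurements against them. The sketch below is illustrative only; the metric names and acceptable ranges are assumptions, not a prescribed standard:

```python
def review_algorithm(metrics, criteria):
    """Compare an algorithm's measured metrics against agreed performance
    criteria; return the findings that need human follow-up.
    criteria: metric name -> (minimum acceptable, maximum acceptable)."""
    findings = []
    for name, (low, high) in criteria.items():
        value = metrics.get(name)
        if value is None:
            findings.append(f"{name}: not measured this cycle")
        elif not low <= value <= high:
            findings.append(f"{name}: {value} outside [{low}, {high}]")
    return findings

# Hypothetical quarterly review of a screening model:
criteria = {
    "overall_accuracy": (0.85, 1.0),
    "selection_rate_ratio": (0.8, 1.0),  # four-fifths-style parity check
    "time_to_fill_days": (0, 45),
}
metrics = {"overall_accuracy": 0.91, "selection_rate_ratio": 0.72}
for finding in review_algorithm(metrics, criteria):
    print(finding)
```

The point is less the code than the discipline: an unmeasured metric is itself a finding, just as a skipped performance review for a recruiter would be.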

A recent special report in The Economist illustrates the significant extent to which AI is already changing the way HR works. The report covers eight major companies now using algorithms, either developed internally or bought from a growing field of vendors, in human resource management for use cases including recruiting, internal mobility, retention risk, and pay equity. These practices are increasingly mainstream; 2018 may mark the year of transition between “early adopters” and “early majority” in the life cycle of this technology.

At this point in time, it is essential that leaders ask themselves whether their organizations have management practices in place to supervise the decisions of these algorithms. The Economist concludes their piece with a reminder about transparency, supervision, and bias, noting that companies “will need to ensure that algorithms are being constantly monitored,” particularly when it comes to the prevention of bias.

Gartner’s Peter Sondergaard on Technology’s Future in the Workplace

The 2018 World Economic Forum, recently concluded in Davos, Switzerland, brought together political, business, and cultural leaders from around the globe to discuss the future of the global economy and its foremost institutions. Gartner EVP Peter Sondergaard was on hand to take in the events and speak with influencers at the forum, where he observed a few key themes in discussions of the future of the workplace: the increasingly digital nature of business, the rise of artificial intelligence, and the impact technology can have on improving diversity and inclusion.

“It became abundantly clear that organizations have reached the point at which the digital workplace must be driven by both CIOs and heads of HR,” Sondergaard explained. This doesn’t mean technology will eliminate the need for people, just that employees will need to work in different ways and companies will need to offer guidance on how to do that. “Such changes will require new models of learning and development,” he continued, “as well as the creation of hybrid workplaces that combine technology and information to accommodate a mix of employees.”

Certainly, we have seen a wide range of technologies promise to reshape how the people and processes of the workplace operate, but artificial intelligence is the driving force behind the most groundbreaking offerings. It’s powering Google Jobs, wearable tech, analytical tools, and voice-activated tech such as Amazon’s Alexa, as well as the automation of processes from candidate sourcing to performance management. As a result, demand for AI talent has skyrocketed as technology providers are scrambling to keep up with the rapid rate of change.

While the rise of AI has fueled fears of the potential for a massive loss of jobs, Sondergaard is confident that AI should ultimately create jobs if deployed properly. “As was true of the Industrial Revolution,” he also pointed out, “technological advances as a result of AI will spur job creation. In 2020, AI will create 2.3 million jobs, while eliminating 1.8 million — a net growth of half a million new positions. Organizations will realize an added benefit as in 2021 AI augmentation will generate $2.9 trillion of business value and save 6.2 billion hours of worker productivity.”

How the Workplace Will Change in 2018

Over the past few years, we have witnessed a marked acceleration in the pace of change in the workplace. Each year brings with it new innovations, ideas, and passing fads, as well as social, political, and economic events that affect employers all across the world. 2017 was no exception: Tight labor markets driving competition for talent, concerns over automation and displacement amid the growing embrace of new technologies, the first year of the Trump administration, and the rise of the #MeToo movement were just a few of the many events and trends that impacted the working world last year. In 2018, we anticipate that some of these developments will continue to reverberate, while new challenges and opportunities will arrive.

Here are some of the major developments that employers can expect to see this year, in the US and around the world:

The Sexual Harassment Reckoning Will Only Grow

In the second half of 2017, revelations of sexual harassment, misconduct, and assault poured out of Silicon Valley and Hollywood, sparking a long-overdue conversation about the treatment of women and the harboring of known abusers in these male-dominated industries, as well as in politics, media, and other fields. Powerful men, from Hollywood moguls to tech CEOs to members of the US Congress, were toppled by multiple allegations of sexual misconduct ranging from inappropriate workplace behavior to outright assault. Organizations in all sectors are facing unprecedented public attention to their sexual harassment policies, how diligently they enforce them, and whether they uphold an inclusive and respectful work environment. If the reckoning didn’t come to your industry in the past few months, it likely will this year. Business leaders in corporate America and around the world will have their past and present behavior scrutinized, and some will be exposed as abusers and face strong public and investor pressure to step down. Addressing toxic workplace cultures that enable sexual harassment will become an issue of even greater concern for directors and HR leaders. Companies can ill afford to close their eyes and hope for this problem to go away on its own; time really is up.

The Private Sector Will Lead the Way on Raising the Minimum Wage

Congress is unlikely to take action to increase the federal minimum wage in 2018. Some states will raise their minimum wages, as will some cities, while other states will take action to preempt local hikes. Meanwhile, companies will take it upon themselves to increase their pay floors in order to attract and retain talent in a tight labor market. As large employers of low-wage hourly workers like Walmart and Target increase their own minimum wages, other companies will need to follow suit to remain competitive.

Technology, Social Media, and Journalists Will Continue to Bring Transparency into Company Culture

Companies’ cultures and employer brands are in the spotlight now more than ever before. The decisions, approaches, policies, and beliefs through which companies manage their employees will play a dramatically larger role in how consumers and investors (not just candidates and employees) view the company. In 2018, this will put pressure on companies to manage their employer brands through HR as aggressively as they protect their consumer brands through PR.

CEOs Will Be Forced to Take Stands on Political and Social Issues

Throughout 2018, the political polarization and dysfunction that has prevailed in Washington, D.C. recently will almost certainly persist, while gender equality, diversity, immigration, LGBT rights, and other issues with major workplace implications will remain hot-button topics. While some CEOs have already found their voices when it comes to responding to the news of the day, others will feel pressure this year from customers, employees, and investors alike to be more vocal about their beliefs and to back them up with concrete actions within their companies.

AI Will Play a Bigger Role In Hiring, Raising the Risk of Algorithmic Bias

The use of AI and algorithms in hiring decisions has already grown dramatically. In 2018, companies will continue to adopt these technologies, but many will also begin to recognize the danger of algorithmic bias. While these automated solutions have shown promise in terms of improving quality, efficiency, and even fairness in the recruiting process, they also run the risk of harming diversity in the workforce by replicating biases that already exist within the company.

Adoption of Wearables in the Workplace Will Increase

In 2017, 3 percent of companies introduced wearable technology in the workplace, giving employees smart badges to monitor their behavior in order to track productivity and identify inefficiencies in the use of office space. In 2018, as more companies adopt technology that can track the location and behavioral data of employees, companies will begin to use this data to redesign workspaces, schedules, and workflows to maximize employee productivity. As these technologies become more mainstream, employers may not have to worry as much as they think about employees resisting their implementation, but should think carefully about how much actionable insight they are gaining by monitoring their employees.

More Employees Will Change Jobs Due to a Lack of Respect

While compensation continues to be the top driver of attraction for candidates globally, respect was the fourth most important driver in our Global Talent Monitor Report for Q3 2017. In 2018, the labor market will remain tight, and employees will feel that they have enough leverage to speak openly about a lack of respect or appreciation. If companies aren’t able to provide increased compensation or opportunities for growth, they should look at ways to improve employees’ sense of respect in order to retain talent.