ReimagineHR: How Inclusion Nudges Can Augment Your Organization’s D&I Strategy

As shown in a growing body of research, including our work at Gartner, companies that invest in diversity see bottom-line benefits including greater innovation and ability to penetrate new markets. Organizations that create inclusive work environments, furthermore, accrue more of these benefits than those that focus on diversity alone. But if inclusion is the key to unleashing the value of diversity, it can also be a heavier lift: Our research shows that most employees—especially frontline employees—don’t think their managers successfully foster an inclusive work environment.

Creating an inclusive environment means, in part, mitigating the impact of conscious and unconscious bias on talent processes like hiring, promotion, and performance management. Most organizations attack this challenge through anti-bias training, which can bolster employees’ confidence in diversity and inclusion efforts but often falls short of bridging the gap between increasing managers’ awareness of bias and actually changing their behavior. Training targets attitudes as opposed to actions, its effects diminish over time, and it requires significant effort and expense to implement at scale.

An essential lesson from our research is that best-practice D&I initiatives don’t just train managers in how to avoid bias, but actually embed bias mitigation into those talent processes. Accordingly, there is now a growing movement within the D&I community to complement anti-bias training with “inclusion nudges”: soft, non-intrusive mental pushes that help us make more objective decisions and affect predictable behaviors to make them more inclusive.

At Gartner’s ReimagineHR conference in Orlando, Florida on Sunday, Gartner’s Jeanine Prime led a panel discussion with Lorelei Whitney, Assistant Vice President Human Resources at Cargill; and Eric Dziedzic, Director, Diversity and Inclusion at Amgen, about their experiences implementing inclusion nudges at their organizations.

What does an inclusion nudge look like?

Amazon Abandoned AI Recruiting Tool After It Learned to Discriminate Against Women

Amazon canceled a multi-year project to develop an experimental automated recruiting engine after the e-commerce giant’s machine learning team discovered that the system was exhibiting explicit bias against women, Reuters reports. The engine, which the team began building in 2014, used artificial intelligence to filter résumés and score candidates on a scale from one to five stars. Within a year of starting the project, however, it became clear that the algorithm was discriminating against female candidates when reviewing them for technical roles.

Because the AI was taught to evaluate candidates based on patterns it found in ten years of résumés submitted to Amazon, most of which came from men, the system “taught itself that male candidates were preferable,” according to Reuters:

It penalized resumes that included the word “women’s,” as in “women’s chess club captain.” And it downgraded graduates of two all-women’s colleges, according to people familiar with the matter. They did not specify the names of the schools. Amazon edited the programs to make them neutral to these particular terms. But that was no guarantee that the machines would not devise other ways of sorting candidates that could prove discriminatory, the people said.

The company scuttled the project by the start of 2017 after executives lost faith in it. By that time, however, it may have already helped perpetuate gender bias in Amazon’s own hiring practices. The company told Reuters its recruiters never used the engine to evaluate candidates, but did not dispute claims from people familiar with the project that they had looked at the recommendations it generated.
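The failure mode described above, a model inheriting bias from skewed historical data, can be illustrated with a toy résumé scorer. This is a hypothetical sketch, not Amazon’s system: even though gender is never supplied as a feature, a term correlated with the underrepresented group (here, “women’s”) acquires a negative weight simply because it appears more often in the historically rejected pile.

```python
# Toy illustration (hypothetical, not Amazon's engine): a scorer
# trained on historical hiring outcomes learns to penalize a term
# correlated with the underrepresented group.
from collections import Counter

def train_scorer(hired, rejected):
    """Weight each word by how much more often it appears in
    hired resumes than rejected ones (with +1 smoothing)."""
    h, r = Counter(), Counter()
    for doc in hired:
        h.update(doc.split())
    for doc in rejected:
        r.update(doc.split())
    vocab = set(h) | set(r)
    # weight > 1 means "associated with being hired"
    return {w: (h[w] + 1) / (r[w] + 1) for w in vocab}

def score(weights, resume):
    words = resume.split()
    return sum(weights.get(w, 1.0) for w in words) / len(words)

# Historical data skews male: "women's" shows up mostly in rejections.
hired = ["java python chess captain", "java sql chess club"]
rejected = ["java python women's chess captain",
            "sql women's chess club"]
w = train_scorer(hired, rejected)

# Two otherwise-identical resumes diverge only on the proxy term.
plain = score(w, "java chess captain")
flagged = score(w, "java women's chess captain")
assert flagged < plain  # the proxy term drags the score down
```

The point of the sketch is that editing out one flagged term (as Amazon reportedly did) doesn’t help much: any other word distributed unevenly across the historical outcomes can become a new proxy.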

Slack’s Unique Diversity Strategy Offers Some Lessons for Silicon Valley and Beyond

The workplace communication and collaboration software startup Slack has garnered attention within the tech sector for its all-in approach to diversity and inclusion, issuing diversity reports at a faster pace and with more detail than its big-company competitors and making a point of giving its D&I commitment lots of visibility. Last month, Slack released its diversity report for 2017. The report touted a few victories, such as a 48 percent female management team and underrepresented minorities making up 12.8 percent of its technical staff, while also stressing the continued work it has to do.

In a profile of the company’s D&I program at the Atlantic on the occasion of that report, Jessica Nordell looked at several aspects of Slack’s approach to diversity that make it stand out from the crowd. One of these idiosyncrasies is that unlike many other tech companies, Slack doesn’t have a Chief Diversity Officer or other designated head of D&I:

While studies by the Harvard University professor Frank Dobbin, and colleagues, suggest having someone overseeing diversity efforts can increase the numbers of underrepresented groups in management, other measures, such as mentoring programs and transparency around what it takes to be promoted, are also important; a diversity chief alone may not be enough to make much of a difference. At Slack, the absence of a single diversity leader seems to signal that diversity and inclusion aren’t standalone missions, to be shunted off to a designated specialist, but are rather intertwined with the company’s overall strategy. As the CEO, Stewart Butterfield, has said, he wants these efforts to be something “everyone is engaged in.” Indeed, as the research by Dobbin and colleagues shows, involving employees in diversity policies leads to greater results.

The first lesson here is not “don’t have an appointed head of D&I,” but rather that there’s no one right way to structurally advance D&I. The Dobbin study makes sense because the D&I chief position ensures there’s always a voice in the room, but any organization that thinks it has solved D&I by creating a head of D&I role is sorely mistaken. In our work at CEB, now Gartner, we’ve seen organizations make progress with a large, singularly focused D&I function, or with a small but connected D&I function; with D&I reporting to HR, to the CEO, to the General Counsel, or to the Corporate Social Responsibility function.

Who’s to Blame for a Biased Algorithm?

No one ever intends to create a biased algorithm, and there are huge downsides to using one, so why do these algorithms keep appearing, and whose fault is it when they do? The simplest explanation for why algorithmic bias keeps happening is that it is legitimately hard to avoid. As for the second question, there is no consensus between algorithm developers and their customers about who is ultimately responsible for quality. In reality, they are both to blame.

Vendors and in-house data science teams have a lot of options for mitigating bias in their algorithms, from reducing cognitive biases, to including more female programmers, to checklists of quality tests to run, to launching AI ethics boards. Unfortunately, they are seldom motivated to take these steps proactively because doing so lengthens their timeframes and raises the risk of an adverse finding that can derail a project indefinitely.

At the same time, clients are not asking for more extensive oversight or testing beyond what the developer offers them. The client usually doesn’t know enough about how these algorithms work to ask probing questions that might expose problems. As a result, the vendor doesn’t test or take precautions beyond its own minimum standards, which can vary widely.

In a recent interview with Employee Benefit News, HireVue’s Chief IO Psychologist Nathan Mondragon discussed a situation in which his company built a client an employee selection algorithm that failed adverse impact tests. The bias, Mondragon said, was not created by HireVue’s algorithm, but rather already existed in the company’s historical hiring data, skewing the algorithm’s results. In his description, they told the customer: “There’s no bias in the algorithm, but you have a bias in your hiring decisions, so you need to fix that or … the system will just perpetuate itself.”

In this case, Mondragon is right that responsibility for the bias identified in the adverse impact test began with the client. However, I would argue that vendors who do this work repeatedly for many clients should anticipate this outcome and accept some responsibility for not detecting the bias at the start of the project or mitigating it in the course of algorithm development. Finding out that bias exists in the historical data only at the adverse impact testing phase, typically one of the last steps, is the developer’s fault.
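The article doesn’t describe HireVue’s actual testing methodology, but the most common adverse impact check in US employee selection is the EEOC’s “four-fifths rule”: a group whose selection rate falls below 80 percent of the most-selected group’s rate is flagged for potential adverse impact. A minimal version of that test, with illustrative numbers, looks like this:

```python
# Hedged sketch of the EEOC four-fifths rule (not HireVue's
# proprietary process): compare each group's selection rate to
# the highest group's rate; ratios below 0.8 are flagged.
def adverse_impact(selected_by_group, applied_by_group):
    """Return {group: impact_ratio}, where impact_ratio is the
    group's selection rate divided by the highest selection rate."""
    rates = {g: selected_by_group[g] / applied_by_group[g]
             for g in applied_by_group}
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

# Illustrative numbers, not from the article.
ratios = adverse_impact(
    selected_by_group={"men": 48, "women": 22},
    applied_by_group={"men": 100, "women": 80},
)
# women: (22/80) / (48/100) = 0.275 / 0.48 ≈ 0.573 → flagged
```

Because a check like this takes a few lines against the historical outcome data, running it before training begins, rather than at the final validation phase, is exactly the kind of early detection the paragraph above argues vendors should own.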

How Starbucks Can Make Its Massive Bias Training Count

Last month, a manager at a Philadelphia Starbucks called the police on a pair of black men who were waiting in the store for a business meeting and had yet to make any purchases. A cell phone video of the two men’s subsequent arrest, which also captured other patrons’ outrage over the incident as it happened, quickly went viral and prompted a nationwide conversation about the racial profiling that black Americans often face in places of business. For Starbucks, which has sought to establish itself as one of America’s most progressive employers, the incident has created a crisis, raising questions about whether it was truly isolated and whether the roughly 40 percent of Starbucks employees who identify as racial minorities have faced hostility or felt unwelcome in the workplace, as many Americans of color report in surveys that they have.

In an unprecedented response, Starbucks quickly announced an ambitious initiative in which it will close all of its over 8,000 company-owned US stores on May 29 so that nearly 175,000 employees can attend an anti-bias training. By conveying that the company takes this matter seriously and is committed to addressing it, the announcement won the coffee chain praise in the world of public relations, but from the perspective of HR—and Diversity and Inclusion more specifically—the standards for success are much higher and more difficult to meet. To make this response count as more than a PR spectacle, Starbucks will need to demonstrate that it’s not just making the right kind of noise, but actually making meaningful changes that are tangible to its vast numbers of nonwhite customers and employees. Furthermore, whether the initiative succeeds or fails, it stands to have an impact far beyond this one company. The stakes are high and all eyes are on Starbucks.

From the D&I research team at CEB, now Gartner, here are some points Starbucks should keep in mind in designing and deploying this anti-bias initiative—and for HR leaders at other organizations to consider in their own efforts to combat the insidious problem of bias.

Anti-Bias Training Should Encompass All Stakeholders’ Perspectives

To underscore the importance of this training, Starbucks announced that the curriculum would be designed with help from prominent experts in civil rights and racial justice, including former attorney general Eric Holder, President and Director-Counsel of the NAACP Legal Defense Fund Sherrilyn Ifill, and Bryan Stevenson, founder and Executive Director of the Equal Justice Initiative. This A-list roster lends an extra dose of credibility to the initiative, but Starbucks might also consider engaging with the communities they serve to understand the experiences of their nonwhite customers on a more personal level. A great example of this kind of stakeholder-focused inclusion strategy is ANZ Bank’s accessibility initiative for people with disabilities, which involved stakeholders across the workforce, workplace, and marketplace in determining accessibility goals and how the bank would achieve them. (CEB Diversity & Inclusion Leadership Council members can read the case study here.)

Starbucks could also benefit from bringing employees’ voices and experiences into the conversation as opposed to making this a one-way training exercise. To be fair to the staff, they’re often on the front lines of how the public feels about the company (like the time a Miami man was videoed screaming “Trump!” at a black Starbucks employee, or the “Trump cup” protest, or the “open carry” protest, or the annual “war on Christmas” protests). Starbucks doesn’t exist to serve the community in the same way as the police or the government, but the company has consistently worked to cultivate a brand image of its cafés as public spaces, which imposes a unique set of challenges for its front-line employees.

Treat Employees as Partners, Not Part of the Problem, in Combating Bias

Referrals Can Hurt Diversity and Inclusion—But They Don’t Have to

Employee referrals are much beloved among recruiters because, more than most other sourcing methods, they lead to higher-quality candidates with better chances of working out as hires and staying with the organization long-term. Unfortunately, referrals have a dark side when it comes to diversity and inclusion, as they tend to benefit candidates from in-groups. SHRM’s Dana Wilkie flags a new study from PayScale showing how referrals tend to advantage white men over women and minorities in the US:

First, referrals benefit white men more than any other demographic group, according to recent research from the compensation data and software provider. Second, those referred by friends and relatives tend to earn less at their new job and be less engaged than those referred by business associates. And finally, a man referred by a business associate can expect, on average, an $8,200 salary increase, while a woman can expect a $3,700 increase. …

Women of any race and men of color are much less likely to receive referrals than their white male counterparts: White women are 12 percent less likely, men of color are 26 percent less likely, and women of color are 35 percent less likely to receive a referral, PayScale found.

Diversity, Technology Major Themes in LinkedIn’s Recruiting Trends Survey

Diversity, new interviewing tools, data, and artificial intelligence are the four trends set to have the biggest impact on recruiting in the coming year, according to LinkedIn’s latest Global Recruiting Trends report. Based on a survey of over 9,000 talent leaders and hiring managers worldwide, along with a series of expert interviews, the report underscores the growing role of technology in shaping how companies meet their hiring goals, of which diversity is increasingly paramount. Nonetheless, while many HR leaders see these trends as important, the number of organizations fully acting on them lags far behind.

Diversity was the top trend by far, with 78 percent of respondents saying it was very or extremely important, though only 53 percent said their organizations had mostly or completely adopted diversity-oriented recruiting. In recent years, diversity has evolved from a compliance issue to a major driver of culture and performance, as more and more organizations recognize its bottom-line value. This shift was reflected in the LinkedIn report, with 62 percent of the companies surveyed saying they believed boosting diversity would have a positive impact on financial performance and 78 percent saying they were pursuing it to improve their culture. Additionally, 49 percent are looking to ensure that their workforce better reflects the diversity of their customer base.

Diversity was the only top trend identified in LinkedIn’s survey that wasn’t directly related to technology, but technology is definitely influencing how organizations are pursuing it. In the past year, we have seen the emergence of new software and tools to support diversity and inclusion. The aim of these tools is to remove the human error of unconscious bias from the recruiting process, but it’s important to be aware that automated processes can also develop built-in biases and end up replicating the very problem they are meant to solve. This is an issue we’ve been following in our research at CEB, now Gartner; CEB Diversity and Inclusion Leadership Council members can read more of our insights on algorithmic bias here.

The development of new interview tools and techniques was identified as the second most important trend, with 56 percent saying it was important. The LinkedIn survey found that the most common areas where traditional interviews fail are assessing candidates’ soft skills (63 percent), understanding candidates’ weaknesses (57 percent), the biases of interviewers (42 percent), and the process taking too long (36 percent). The report highlights five new interviewing techniques, all enabled by technology, that aim to address these problems: