Talent Daily Debates: Are Everlane’s ‘Passion Days’ Effective Cultural Onboarding?

Smart executives know that an organization’s culture drives top-line growth, but it can be difficult and time-consuming for new hires to learn the ins and outs of that culture as they get up to speed. Companies are constantly searching for more innovative and effective ways to help new employees learn the culture. For example, L’Oréal released its Fit Culture App for new hires last year, which uses “texts, videos, employee testimonials, … quizzes, games and real-life missions” to “give each and every employee, from the moment they arrive, the keys to succeed in full alignment with company values such as multiculturalism, diversity and inclusion.”

More recently, Quartz’s Leah Fessler profiled the onboarding program at the ethical clothing company Everlane, which sets the cultural tone from day one by making every new employee’s first day a “Passion Day”:

“It’s called a passion day,” says Michael Preysman, CEO of the direct-to-consumer clothing startup, which hit $100 million in revenue in 2016. Every Everlane employee starts their new job with a passion day, on which they’re given $100 to spend doing something they love. … There are no limits on what the cash can be spent on, so long as it’s outside of the office and legal. And while they’re not warned ahead of time, every employee has to share how they spent their cash upon being introduced to the entire company the following week. …

Passion days are an extension of an already hyper-individualized hiring process. Everyone who applies to Everlane has to complete a project, regardless of their seniority, to evaluate their skills. “One of our core values is to hire people who are entrepreneurial thinkers—people who are creative and passionate,” Preysman says.

Some of our expert researchers at CEB, now Gartner, had different points of view on whether Everlane’s Passion Day program is an idea worth emulating. Here’s what they had to say:

Andrea Kropp, Research Director: It’s great to see companies putting action and money behind their culture initiatives, especially when the culture they are striving for is very different from the norm. The vast majority of new hires have worked somewhere else before, even if just part-time or in a family business, so they’ve already been exposed to someone else’s culture. If you know your culture is dramatically different, you need something attention-grabbing to show new hires that you are serious and not just paying lip service to the idea of being different.

Careful Who You Trust for GDPR Advice

The EU’s General Data Protection Regulation, which went into effect on May 25, imposes new data privacy obligations on all organizations that process the data of EU citizens, whether or not they are based in Europe themselves. The maximum penalties for noncompliance are hefty, so it is essential for businesses to ensure that their practices are GDPR-compliant if they haven’t already.

According to a survey conducted on the eve of the regulation coming into effect, however, most organizations had not yet finished making the required changes, and many did not expect to be fully compliant by the end of this year. Much work remains to be done to bring organizations into initial compliance with the regulation, and still more to redevelop data collection, storage, and analytics programs in a compliant manner.

With every organization doing a huge amount of work for the first time and trying to get right with the GDPR as quickly as possible, the environment is fertile for bad information to circulate and for opportunists to exploit organizations’ unfamiliarity with the new regulatory terrain. Organizational leaders need to be vigilant about which “experts” to trust for guidance on GDPR compliance, take advantage of the information provided directly by the European Commission, and bear in mind that different functions, particularly HR, face unique compliance challenges.

Step 1: Beware of Charlatans

The proliferation of bad advice and information is a simple matter of supply and demand. Demand for advice is high, both because of the global impact of the GDPR and because so many organizations that were not proactive in planning for compliance are now scrambling to catch up. The supply of that advice is scarce and of uneven quality, with no historical track record of performance. Over the past few months, many companies have been assembling data protection functions and hiring data protection officers (DPOs), causing a run on the thin supply of qualified talent for these roles.

Who’s to Blame for a Biased Algorithm?

No one ever intends to create a biased algorithm, and there are huge downsides to using one, so why do these algorithms keep appearing, and whose fault is it when they do? The simplest explanation for why algorithmic bias keeps happening is that it is legitimately hard to avoid. As for the second question, there is no consensus between algorithm developers and their customers about who is ultimately responsible for quality. In reality, both are to blame.

Vendors and in-house data science teams have a lot of options for mitigating bias in their algorithms, from reducing cognitive biases, to including more female programmers, to checklists of quality tests to run, to launching AI ethics boards. Unfortunately, they are seldom motivated to take these steps proactively because doing so lengthens their timeframes and raises the risk of an adverse finding that can derail a project indefinitely.

At the same time, clients are not asking for oversight or testing beyond what the developer offers them. The client usually doesn’t know enough about how these algorithms work to ask probing questions that might expose problems. As a result, the vendor doesn’t test or take precautions beyond its own minimum standards, which can vary widely.

In a recent interview with Employee Benefit News, HireVue’s Chief IO Psychologist Nathan Mondragon discussed a situation in which his company built a client an employee selection algorithm that failed adverse impact tests. The bias, Mondragon said, was not created by HireVue’s algorithm, but rather already existed in the company’s historical hiring data, skewing the algorithm’s results. In his description, they told the customer: “There’s no bias in the algorithm, but you have a bias in your hiring decisions, so you need to fix that or … the system will just perpetuate itself.”

In this case, Mondragon is right that responsibility for the bias identified in the adverse impact test began with the client. However, I would argue that vendors who do this work repeatedly for many clients should anticipate this outcome and accept some responsibility for not detecting the bias at the start of the project or mitigating it in the course of algorithm development. Finding out that bias exists in the historical data only at the adverse impact testing phase, typically one of the last steps, is the developer’s fault.
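To illustrate the kind of screen a developer could run up front, here is a minimal sketch of an adverse impact check using the EEOC’s four-fifths rule, applied to historical hiring data before any model is trained. The column names and data are hypothetical, and this is a generic illustration, not HireVue’s actual test.

```python
import pandas as pd

def adverse_impact_ratios(df, group_col="group", outcome_col="hired"):
    """Selection rate of each group divided by the highest group's rate.

    Under the four-fifths rule, a ratio below 0.8 is conventionally
    treated as evidence of adverse impact.
    """
    selection_rates = df.groupby(group_col)[outcome_col].mean()
    return selection_rates / selection_rates.max()

# Screen the historical data at project kickoff, rather than discovering
# the skew at the final adverse impact test. Data here is made up:
# group A was hired at a 40% rate, group B at 20%.
history = pd.DataFrame({
    "group": ["A"] * 100 + ["B"] * 100,
    "hired": [1] * 40 + [0] * 60 + [1] * 20 + [0] * 80,
})
ratios = adverse_impact_ratios(history)
print(ratios[ratios < 0.8])  # groups falling below the 0.8 threshold
```

Run against the training data at the start of a project, a check like this would surface the kind of pre-existing skew Mondragon describes before it gets baked into the model.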

Do Your Algorithms Need a Performance Review?

In today’s digital organizations, HR departments are increasingly using algorithms to aid in their decision-making, by predicting who is a retention risk, who is ready for a promotion, and whom to hire. For the employees and candidates subjected to these decisions, these are important, even life-changing, events, and so we would expect the people making them to be closely supervised and held to a set of known performance criteria. Does anyone supervise the algorithms in the same way?

Algorithms don’t monitor themselves. Replacing a portion of your recruiting team with AI doesn’t obviate the need to manage the performance of that AI in the same way you would have managed the performance of the recruiter. To ensure that the decisions of an AI-enhanced HR function are fair, accurate, and right for the business, organizations must establish performance criteria for algorithms and a process to review them periodically.
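As an illustration of what such a periodic review might check, here is a minimal sketch that scores one review period of an algorithm’s decisions against observed outcomes and against agreed criteria. The field names, metrics, and thresholds are assumptions for illustration, not an established standard.

```python
from dataclasses import dataclass

@dataclass
class ReviewResult:
    accuracy: float
    worst_group_gap: float
    passed: bool

def review_algorithm(records, min_accuracy=0.75, max_group_gap=0.10):
    """Score one review period of an HR algorithm's decisions.

    records: list of dicts with 'predicted', 'actual', and 'group' keys,
    e.g. last quarter's retention-risk flags joined to observed attrition.
    Thresholds are hypothetical; each organization would set its own.
    """
    correct = [r["predicted"] == r["actual"] for r in records]
    accuracy = sum(correct) / len(records)

    # Break accuracy out by demographic group to catch uneven error
    # rates hiding behind a healthy overall average.
    by_group = {}
    for r, c in zip(records, correct):
        by_group.setdefault(r["group"], []).append(c)
    group_accuracy = {g: sum(v) / len(v) for g, v in by_group.items()}
    gap = max(group_accuracy.values()) - min(group_accuracy.values())

    return ReviewResult(
        accuracy=accuracy,
        worst_group_gap=gap,
        passed=accuracy >= min_accuracy and gap <= max_group_gap,
    )
```

The point of the sketch is the process, not the particular metrics: the algorithm gets the same treatment as an employee, with known criteria, a regular review cadence, and a documented pass or fail.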

A recent special report in The Economist illustrates the significant extent to which AI is already changing the way HR works. The report covers eight major companies that are now using algorithms in human resource management, which they either developed internally or bought from a growing field of vendors for use cases including recruiting, internal mobility, retention risk, and pay equity. These practices are increasingly mainstream; 2018 may mark the year of transition between “early adopters” and “early majority” in the life cycle of this technology.

It is now essential that leaders ask themselves whether their organizations have management practices in place to supervise the decisions of these algorithms. The Economist concludes its piece with a reminder about transparency, supervision, and bias, noting that companies “will need to ensure that algorithms are being constantly monitored,” particularly when it comes to the prevention of bias.
