Talent analytics has rapidly grown from an experimental trend into a mainstream practice. Yet while many HR functions are investing in analytics, few are getting the kind of results they’d like to see. If the promise of talent analytics remains unfulfilled today, it’s not because the technology isn’t ready. Over the past two years, we have heard from HR leaders that their biggest challenge in implementing analytics has been connecting the data to critical business questions and drawing actionable intelligence from it. Gartner research has also found that collecting high-quality, credible data is a significant hurdle for many organizations.
Perhaps as a result of these growing pains, a global survey earlier this year found that most C-suite leaders don’t have a high level of trust in their analytics programs. HR is still under pressure to get senior leadership on board with talent analytics and prove its value to the bottom line.
At Gartner’s ReimagineHR event in London last Wednesday, Principal Executive Advisor Clare Moncrieff moderated a discussion with a panel of leaders at major companies on the practical lessons they have learned in applying talent analytics on the ground. The panelists were Christian Cormack, Global Head of Workforce Analytics at AstraZeneca; Nanne Brouwer, Head of People Strategy and Analytics at Royal Philips; and Jacob Jeppesen, Specialist in HR Analytics at Novo Nordisk A/S.
The limiting factor for talent analytics professionals is rarely their knowledge of analytics, the panelists observed. Rather, it’s their knowledge of the rest of the business. Understanding how other business functions like supply chain or strategy work allows them to combine different sources of data that have never been looked at together before. This combination of data is ultimately more valuable than extremely advanced analytics that focus only on people data.
“With more informed buyers to contend with and data as their most powerful sales weapon, sales teams are incorporating more STEM backgrounds within their ranks,” Jared Lindzon writes at Fast Company, in a piece exploring how data and technology skills are becoming as important as interpersonal skills for sales professionals, if not more so:
According to a 2017 study by the Bureau of Labor Statistics, the seventh most popular career for STEM graduates in the United States and most popular noncomputer related role is in sales. … “We are seeing thousands of jobs across the United States in which sales teams are looking for people with STEM related skill sets,” says Glassdoor community expert Scott Dobroski. According to Dobroski the job listing and recruiting website has seen a huge spike in postings for positions that blend sales with STEM skills. …
The demand for STEM skills within sales teams is representative of a seismic shift in sales strategy. This transition has been enabled by technology and the availability of information, both on behalf of the buyer and seller. While the salesperson used to be the primary source of information for their products or services, buyers increasingly have access to specs, samples, and independent reviews. At the same time sellers are able to access information and insights about prospective buyers that would have previously been only accessible through personal interactions.
The nature of the sales role has indeed changed in today’s business environment, especially in B2B sales, where the typical buyer is now most of the way through their decision-making process before engaging with a supplier. This means salespeople need to be comfortable wielding more facts and figures, but also must be adept at managing relationships.
No one ever intends to create a biased algorithm, and there are huge downsides to using one, so why do these algorithms keep appearing, and whose fault is it when they do? The simplest explanation for why algorithmic bias keeps happening is that it is legitimately hard to avoid. As for the second question, there is no consensus between algorithm developers and their customers about who is ultimately responsible for quality. In reality, they are both to blame.
Vendors and in-house data science teams have many options for mitigating bias in their algorithms, from reducing cognitive biases and including more female programmers to running checklists of quality tests and launching AI ethics boards. Unfortunately, they are seldom motivated to take these steps proactively, because doing so lengthens their timelines and raises the risk of an adverse finding that can derail a project indefinitely.
At the same time, clients are not asking for more extensive oversight or testing beyond what the developer offers them. The client usually doesn’t know enough about how these algorithms work to ask probing questions that might expose problems. As a result, the vendor doesn’t test or take precautions beyond their own minimum standards, which can vary widely.
In a recent interview with Employee Benefit News, HireVue’s Chief IO Psychologist Nathan Mondragon discussed a situation in which his company built a client an employee selection algorithm that failed adverse impact tests. The bias, Mondragon said, was not created by HireVue’s algorithm, but rather already existed in the company’s historical hiring data, skewing the algorithm’s results. In his description, they told the customer: “There’s no bias in the algorithm, but you have a bias in your hiring decisions, so you need to fix that or … the system will just perpetuate itself.”
In this case, Mondragon is right that responsibility for the bias identified in the adverse impact test began with the client. However, I would argue that vendors who do this work repeatedly for many clients should anticipate this outcome and accept some responsibility for not detecting the bias at the start of the project or mitigating it in the course of algorithm development. Finding out that bias exists in the historical data only at the adverse impact testing phase, typically one of the last steps, is the developer’s fault.
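The adverse impact test referred to above is commonly operationalized with the "four-fifths rule": a group whose selection rate falls below 80 percent of the highest group's rate signals potential adverse impact. As a minimal illustration (the group names and hiring counts below are made up, and this is not HireVue's actual procedure), a developer could run this check against a client's historical data at the start of a project rather than at the end:

```python
# Hypothetical sketch of an adverse impact check using the four-fifths rule.

def selection_rates(outcomes):
    """outcomes maps group name -> (hired, applicants)."""
    return {g: hired / applicants for g, (hired, applicants) in outcomes.items()}

def four_fifths_check(outcomes, threshold=0.8):
    """Return group -> True if the group's rate is at least `threshold`
    times the highest group's rate (i.e. passes the four-fifths rule)."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: rate / top >= threshold for g, rate in rates.items()}

# Made-up historical hiring data with a skew, as in the anecdote above:
history = {"group_a": (50, 100), "group_b": (20, 100)}
print(four_fifths_check(history))  # → {'group_a': True, 'group_b': False}
```

Running a check like this on the training data before development begins would surface the pre-existing bias Mondragon describes, instead of discovering it at the final testing phase.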
In today’s digital organizations, HR departments are increasingly using algorithms to aid in their decision-making, by predicting who is a retention risk, who is ready for a promotion, and whom to hire. For the employees and candidates subjected to these decisions, these are important, even life-changing, events, and so we would expect the people making them to be closely supervised and held to a set of known performance criteria. Does anyone supervise the algorithms in the same way?
Algorithms don’t monitor themselves. Replacing a portion of your recruiting team with AI doesn’t obviate the need to manage the performance of that AI in the same way you would have managed the performance of the recruiter. To ensure that the decisions of an AI-enhanced HR function are fair, accurate, and right for the business, organizations must establish performance criteria for algorithms and a process to review them periodically.
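As a concrete illustration of what such periodic review could look like, the sketch below compares each review period's selection rate against a criterion agreed with the business. Everything here is a simplified assumption (the baseline rate, the tolerance band, and the function name are hypothetical, not drawn from any vendor's practice):

```python
# A minimal sketch of a periodic performance review for an HR algorithm:
# compare the period's selection rate against an agreed criterion and
# flag drift for human follow-up.

BASELINE_RATE = 0.30   # assumed target selection rate agreed with the business
TOLERANCE = 0.05       # allowed deviation before a human review is triggered

def review_period(decisions):
    """decisions is a list of booleans: True if the algorithm selected."""
    rate = sum(decisions) / len(decisions)
    drifted = abs(rate - BASELINE_RATE) > TOLERANCE
    return {"rate": round(rate, 3), "needs_review": drifted}

# Quarterly check: 40 selections out of 100 drifts above the 30% ± 5% band.
print(review_period([True] * 40 + [False] * 60))
# → {'rate': 0.4, 'needs_review': True}
```

In practice the criteria would cover more than a single rate (accuracy against outcomes, fairness across groups, business impact), but the principle is the same one applied to a human recruiter: defined targets, measured regularly, with escalation when performance moves outside the agreed range.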
A recent special report in The Economist illustrates the significant extent to which AI is already changing the way HR works. The report covers eight major companies that are now using algorithms in human resource management, which they either developed internally or bought from a growing field of vendors for use cases including recruiting, internal mobility, retention risk, and pay equity. These practices are increasingly mainstream; 2018 may mark the year of transition between “early adopters” and “early majority” in the life cycle of this technology.
Leaders now need to ask themselves whether their organizations have management practices in place to supervise the decisions of these algorithms. The Economist concludes its report with a reminder about transparency, supervision, and bias, noting that companies “will need to ensure that algorithms are being constantly monitored,” particularly when it comes to preventing bias.
Glassdoor has released its annual list of the best jobs in America for 2018, ranked based on earning potential, job satisfaction, and availability. For the third year running, data scientist took the top spot, while other data and technology roles dominated the list, such as DevOps engineer (#2), electrical engineer (#6), mobile developer (#8), and manufacturing engineer (#10). All in all, technical roles make up 20 out of the 50 best jobs. The rest of the list comprises a variety of management roles, as well as several jobs in the health care sector.
“But there are at least four new titles on the list that help crunch that data and make decisions based on what they suggest,” Washington Post columnist Jena McGregor points out:
These include strategy managers (No. 7), business development managers (No. 14), business intelligence developers (No. 42) and business analysts (No. 43), each of which make the list for the first time, said Scott Dobroski, a career trends analyst at Glassdoor.
“There’s always a lot of tech jobs and health-care jobs — that’s not new and not going away anytime soon,” Dobroski said. “But the biggest trend this year was this emerging theme of business operations,” he said, or people “who make sense of all that data and recommend business decisions.” Many of the people hired for these jobs, he said, are former consultants who companies are bringing in-house to help with strategic and market decision-making.
“Maybe the occupational therapist and the HR manager jobs are in there because those folks are needed to deal with anyone who is not already a data scientist?” GeekWire’s Kurt Schlosser quips.
Using its vast trove of user data, LinkedIn compared the US talent landscape in 2012 and 2017 to see what roles had grown the most in demand in that time. At the top of the professional networking site’s list of the top 20 fastest-growing jobs is “machine learning engineer,” the ranks of which have expanded nearly tenfold in the past five years, followed by “data scientist,” “sales development representative,” “customer success manager,” “big data developer,” and “full stack engineer.”
The proliferation of digital roles such as data scientist is unsurprising, given that these jobs are no longer limited to “tech companies” but are now needed in all sorts of organizations. However, Maria Ignatova notes at LinkedIn’s Talent Blog, there are two other key takeaways from the list that employers can learn from:
Hiring for outstanding soft skills is a high priority: Many of the roles on the list are customer-facing and underscore the importance of being able to screen candidates for soft skills. Traditionally, that has been one of the most challenging parts of the hiring process, with standard interviews just not cutting it. Many companies now are starting to use soft skills assessments or job auditions to see candidates in a more authentic light.
Some roles are so new that the current talent pool is minimal: A few of the jobs on this list didn’t even exist five years ago, or if they did, they were niche roles with very few professionals in them. This means that you have to get creative when it comes to sourcing talent and be willing to approach people from different fields and consider non-standard skill sets. Reskilling the workforce due to a shortage of talent is one of the top trends that will impact you if you are hiring for these roles.
LinkedIn’s findings also point to something we’ve found in our research at CEB, now Gartner: the convergence of demand around a smaller number of critical roles. Among S&P 100 companies, we found, 39 percent of job postings last year were for just 29 roles.
Among the critical skills in today’s job market, data science expertise is perhaps the most coveted in terms of high demand and short supply. As businesses in a wide variety of industries find new applications for data analytics, the limited pool of specialized data scientists can work pretty much anywhere they want and command a highly competitive salary. This September, New York University is launching a new PhD program in data science, both to address this skills shortage, particularly in New York’s financial sector, and to shape the field of data science as an independent academic discipline, Ivan Levingston and Taylor Hall report at SF Gate:
It’s one of the first such programs in the nation and builds on master’s degrees at NYU and other schools. MIT is gearing up a doctoral degree that includes data science, and Harvard plans to jump into the field with a master’s program in 2018. In the near absence of degree programs, investment firms must sort through the wannabes and find skilled data scientists from fields like physics and math.
“The term is a fairly loose term, and it can mean anything from somebody who’s an extreme expert in machine learning all the way down to someone who’s really more of a data analyst, preparing and cleaning data and producing charts, and it can mean everything in between,” said Matthew Granade, who oversees Point72 Asset Management’s data science unit, Aperio.