People Are More Open to Algorithmic Judgment Than You Might Think

When it comes to making judgments based on large data sets, machines are often superior to humans, but many business leaders remain skeptical of the guidance produced by their organizations’ data analytics programs, particularly when it comes to talent analytics. That skepticism derives largely from doubts about the quality of the data the organization is collecting, but there is also a natural tendency among people who make strategic decisions for a living to reject the notion that an algorithm could do parts of their job as well as or better than they can.

While this may be true of executives and high-level professionals, recent research suggests that most people are comfortable with the decisions algorithms make and, in some contexts, trust them more than judgments made by humans. A new study from Harvard Business School, led by postdoctoral fellow Jennifer M. Logg, finds that “lay people adhere more to advice when they think it comes from an algorithm than from a person”:

People showed this sort of algorithm appreciation when making numeric estimates about a visual stimulus (Experiment 1A) and forecasts about the popularity of songs and romantic matches (Experiments 1B and 1C). Yet, researchers predicted the opposite result (Experiment 1D). Algorithm appreciation persisted when advice appeared jointly or separately (Experiment 2). However, algorithm appreciation waned when people chose between an algorithm’s estimate and their own (versus an external advisor’s—Experiment 3) and they had expertise in forecasting (Experiment 4). Paradoxically, experienced professionals, who make forecasts on a regular basis, relied less on algorithmic advice than lay people did, which hurt their accuracy.

Our colleagues here at Gartner have also investigated consumers’ attitudes toward AI and found that these attitudes are more welcoming than conventional wisdom might lead you to believe. The 2018 Gartner Consumer AI Perceptions Study found that overall, consumers are not skeptical of the potential usefulness of AI, though they do have some concerns about its impact on their skills, social relationships, and privacy. The study was conducted online during January and February 2018 among 4,019 respondents in the US and UK. Respondents ranged in age from 18 through 74 years old, with quotas and weighting applied for age, gender, region, and income.

Asked about a selection of 13 everyday tasks, respondents to the survey said they would like an AI to perform or help them perform all but four tasks: selecting an outfit for the day, making financial investments, driving the car they are in, and making small purchases. When asked which role they would prefer AI to take if implemented in their workplaces, 84 percent said as a personal assistant (either doing tasks on demand or proactively for them), 11 percent as a co-worker, and 9 percent as a manager.

Consumers are also comfortable with AI supporting their health and safety but wary of AI examining their emotions: Over 70 percent said they were comfortable with AI observing them for health and security (e.g., analyzing vital signs to keep them healthy or identifying their voice or face to keep them safe), but fewer than half were comfortable with AI scrutinizing their emotions (e.g., analyzing facial expressions to understand how they feel).

From a workplace perspective, previous research has found that most employees are not opposed to workplace monitoring through automated methods like sensors or textual analysis of their communications. Sentiment analysis, which uses that monitoring data to identify or predict employees’ feelings, is a thornier issue, partly because the technology is not yet refined enough to be fully reliable. The concern for HR is not so much that the algorithms will misidentify employees’ sentiments, but rather that even with high-quality analytics, it is hard to draw clear connections between sentiment data and the outcomes HR cares about, such as employee engagement and intent to stay.
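To make the idea concrete, the simplest form of text-based sentiment analysis scores a message against word lists. The sketch below is purely illustrative: the word lists, scoring rule, and messages are invented for this example and bear no resemblance to the far more sophisticated models production HR tools would use.

```python
# Toy lexicon-based sentiment scorer (illustrative only).
# Real systems use trained models, not hand-built word lists.
POSITIVE = {"great", "happy", "excited", "support", "love"}
NEGATIVE = {"frustrated", "overworked", "unhappy", "stress"}

def sentiment_score(message: str) -> float:
    """Return a score in [-1, 1]: +1 if all sentiment words are
    positive, -1 if all are negative, 0 if none are found."""
    words = [w.strip(".,!?").lower() for w in message.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

messages = [
    "I love the new project and feel excited about it!",
    "Honestly I am frustrated and overworked this quarter.",
]
for m in messages:
    print(round(sentiment_score(m), 2), m)
```

Even when a scorer like this is replaced with a well-tuned model, the harder problem noted above remains: mapping the resulting scores onto outcomes like engagement or attrition risk.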

In any case, AI is already taking over many of the day-to-day tasks of knowledge workers in fields like finance and insurance, where the power to crunch vast data sets and use them to make judgments has clear and obvious applications. Using algorithms to augment their own expertise is fast becoming part of many professionals’ job descriptions. Leaders wary of handing decision-making over to machines might bear in mind that these algorithms’ judgment can and should be subject to periodic performance reviews, just like that of a human employee.