No one ever intends to create a biased algorithm, and there are huge downsides to using one, so why do these algorithms keep appearing, and whose fault is it when they do? The simplest explanation for why algorithmic bias keeps happening is that it is legitimately hard to avoid. As for the second question, there is no consensus between algorithm developers and their customers about who is ultimately responsible for quality. In reality, both are to blame.
Vendors and in-house data science teams have a lot of options for mitigating bias in their algorithms, from reducing cognitive biases, to including more female programmers, to checklists of quality tests to run, to launching AI ethics boards. Unfortunately, they are seldom motivated to take these steps proactively because doing so lengthens their timeframes and raises the risk of an adverse finding that can derail a project indefinitely.
At the same time, clients are not asking for more extensive oversight or testing beyond what the developer offers them. The client usually doesn’t know enough about how these algorithms work to ask probing questions that might expose problems. As a result, the vendor doesn’t test or take precautions beyond their own minimum standards, which can vary widely.
In a recent interview with Employee Benefit News, HireVue’s Chief IO Psychologist Nathan Mondragon discussed a situation in which his company built a client an employee selection algorithm that failed adverse impact tests. The bias, Mondragon said, was not created by HireVue’s algorithm, but rather already existed in the company’s historical hiring data, skewing the algorithm’s results. In his description, they told the customer: “There’s no bias in the algorithm, but you have a bias in your hiring decisions, so you need to fix that or … the system will just perpetuate itself.”
In this case, Mondragon is right that responsibility for the bias identified in the adverse impact test began with the client. However, I would argue that vendors who do this work repeatedly for many clients should anticipate this outcome and accept some responsibility for not detecting the bias at the start of the project or mitigating it in the course of algorithm development. Finding out that bias exists in the historical data only at the adverse impact testing phase, typically one of the last steps, is the developer’s fault.
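The article doesn't specify which adverse impact test was applied; a common one in US employment contexts is the EEOC's "four-fifths rule," under which a group's selection rate below 80 percent of the highest group's rate is treated as evidence of adverse impact. As a minimal sketch, with hypothetical group names and hiring figures, such a check on historical data might look like this:

```python
# Illustrative adverse impact check using the "four-fifths rule":
# a selection rate for any group below 80% of the highest group's
# rate is flagged as possible adverse impact.

def selection_rates(outcomes):
    """outcomes: dict mapping group name -> (selected, total applicants)."""
    return {g: sel / total for g, (sel, total) in outcomes.items()}

def four_fifths_check(outcomes, threshold=0.8):
    """Return a dict mapping each group to True (passes) or False (flagged)."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    # A group passes if its rate is at least `threshold` of the top rate.
    return {g: (r / top) >= threshold for g, r in rates.items()}

# Hypothetical historical hiring data: group -> (hired, applicants)
history = {"group_a": (48, 100), "group_b": (30, 100)}
print(four_fifths_check(history))
# group_b's rate ratio is 0.30 / 0.48 = 0.625, below 0.8, so it is flagged
```

Running a check like this against the client's historical data at the start of a project, rather than only at the final testing phase, is the kind of early detection the passage argues vendors should take responsibility for.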
In today’s digital organizations, HR departments are increasingly using algorithms to aid in their decision-making, by predicting who is a retention risk, who is ready for a promotion, and whom to hire. For the employees and candidates subjected to these decisions, these are important, even life-changing, events, and so we would expect the people making them to be closely supervised and held to a set of known performance criteria. Does anyone supervise the algorithms in the same way?
Algorithms don’t monitor themselves. Replacing a portion of your recruiting team with AI doesn’t obviate the need to manage the performance of that AI in the same way you would have managed the performance of the recruiter. To ensure that the decisions of an AI-enhanced HR function are fair, accurate, and right for the business, organizations must establish performance criteria for algorithms and a process to review them periodically.
A recent special report in The Economist illustrates the significant extent to which AI is already changing the way HR works. The report covers eight major companies that are now using algorithms in human resource management, which they either developed internally or bought from a growing field of vendors for use cases including recruiting, internal mobility, retention risk, and pay equity. These practices are increasingly mainstream; 2018 may mark the year of transition between “early adopters” and “early majority” in the life cycle of this technology.
At this point, it is essential that leaders ask themselves whether their organizations have management practices in place to supervise the decisions of these algorithms. The Economist concludes its piece with a reminder about transparency, supervision, and bias, noting that companies “will need to ensure that algorithms are being constantly monitored,” particularly when it comes to the prevention of bias.
A recent report from the Organization for Economic Cooperation and Development finds that the number of jobs at risk of displacement due to automation in the coming years is probably smaller than previous forecasts have estimated. Nonetheless, tens of millions of workers in developed countries are still at risk of having their jobs replaced or radically altered by AI and robotics. The Verge’s James Vincent summarizes the report’s findings:
The researchers found that only 14 percent of jobs in OECD countries … are “highly automatable,” meaning their probability of automation is 70 percent or higher. This forecast … is still significant, equating to around 66 million job losses.
In America alone, for example, the report suggests that 13 million jobs will be destroyed because of automation. “As job losses are unlikely to be distributed equally across the country, this would amount to several times the disruption in local economies caused by the 1950s decline of the car industry in Detroit where changes in technology and increased automation, among other factors, caused massive job losses,” the researchers write.
The analysis from the OECD, an inter-governmental organization representing the world’s 35 richest countries, is considerably less disconcerting than previous studies that have calculated the risk of automation at anywhere from 30 percent to fully half of all the work currently being performed globally. One difference between this study and previous ones, Vincent explains, is that it pays greater attention to details like whether a job can be fully or only partly automated, as well as the variations among jobs that may have the same title but whose work differs substantially.
Apple is adding a floor to its offices in downtown Seattle, giving the company enough room to seat nearly 500 employees there, Nat Levy reports at GeekWire:
Apple is preparing to move into another floor at Two Union Square, a 56-story office tower in downtown Seattle, giving it all or part of five floors of the building, GeekWire has learned through permitting documents and visits to the building. The latest move brings Apple to more than 70,000 square feet, which equates to room for somewhere between 350 and 475 people, based on standard corporate leasing ratios for tech companies.
The iPhone maker announced big plans to expand its presence on Puget Sound last year, as Levy’s colleague Todd Bishop reported at the time, after buying up the Seattle-based machine learning startup Turi and establishing a $1 million endowed professorship in artificial intelligence and machine learning at the University of Washington. Competing for AI talent is decidedly the name of the game here, Levy explains, as the northwestern city is emerging as a hub for this new technology. Amazon and Microsoft are based in or near Seattle, while Facebook and Google both have significant footprints there.
All these tech giants are racing toward potentially transformative innovations in AI and machine learning; to this end, they have been grabbing all the experts they can get their hands on for the past few years, often by acqui-hiring startup founders and talent.
Watson Assistant, the latest entry into the AI-powered virtual assistant market, made its debut on Tuesday at IBM’s Think conference in Las Vegas, CNET’s Ben Fox Rubin reports. Unlike Amazon’s consumer-focused Alexa, however, Watson Assistant is an enterprise-oriented technology that “will function as the behind-the-scenes brains for a variety of new digital helpers made by a variety of businesses”:
For example, Watson Assistant is already in use at Munich Airport to power a robot that can tell you directions and gate information. The assistant is in development by BMW for an in-car voice helper. Also, Chameleon Technology in the UK created a Watson Assistant-driven platform called I-VIE that helps people manage their energy usage.
“We looked at the market for assistants and realized there was something else needed to make it easier for companies to use,” said Bret Greenstein, IBM’s global vice president for IoT products. …
Last November, Amazon announced that it was bringing its voice-controlled assistant Alexa into the workplace, launching Alexa for Business at its annual AWS re:Invent conference. This week, the company revealed how far the enterprise version of Alexa has come, who is using it, and how the product is being applied in business settings. Amazon Chief Technology Officer Werner Vogels expanded on these points in a post on his blog, All Things Distributed:
Voice interfaces are a paradigm shift, and we’ve worked to remove the heavy lifting associated with integrating Alexa voice capabilities into more devices. For example, Alexa Voice Service (AVS), a cloud-based service that provides APIs to interface with Alexa, enables products built using AVS to have access to Alexa capabilities and skills.
We’re also making it easy to build skills for the things you want to do. This is where the Alexa Skills Kit and the Alexa Skills Store can help both companies and developers. Some organizations may want to control who has access to the skills that they build. In those cases, Alexa for Business allows people to create a private skill that can only be accessed by employees in your organization. In just a few months, our customers have built hundreds of private skills that help voice-enabled employees do everything from getting internal news briefings to asking what time their help desk closes.
Alexa for Business is now capable of interfacing with common enterprise applications like Salesforce, Concur, and ServiceNow, Vogels added, while IT developers can use the Alexa Skills Kit to enable custom apps as well. WeWork, one early adopter of the service, has “built private skills for Alexa that employees can use to reserve conference rooms, file help tickets for their community management team, and get important information on the status of meeting rooms.”
A recent Gartner survey of Chief Information Officers finds that while just four percent have already implemented AI in some form in their businesses, 46 percent have plans in place to do so. Although there are many obstacles to implementing this groundbreaking technology, soon companies that fail to take advantage will lag behind. To help ease the potential pains of diving into adoption, our colleagues who conduct IT management research at Gartner have four recommendations to ensure success in the early stages of AI implementation: start small; focus on helping, not replacing, people; plan for knowledge transfer; and choose transparent solutions.
“Don’t fall into the trap of primarily seeking hard outcomes, such as direct financial gains, with AI projects,” Gartner analyst Whit Andrews explains. “In general, it’s best to start AI projects with a small scope and aim for ‘soft’ outcomes, such as process improvements, customer satisfaction or financial benchmarking.”
Early forays into AI should be learning experiences rather than attempts at large-scale change that dramatically reshape a department or function. It’s important to set modest goals for AI initiatives, given that the most important outcome will be gaining the knowledge and expertise to successfully apply the technology to a work stream. Additionally, while many employees fear AI could replace them, the easiest way to assuage those concerns is to deploy AI solutions that make employees’ lives easier. As Gartner EVP Peter Sondergaard remarked in his observations from the recent World Economic Forum in Davos, Switzerland, AI is expected to create many more jobs than it destroys, while generating massive value and saving billions of hours of worker productivity.
That means there’s an opportunity to get employees engaged with AI adoption as a technology that will make their jobs easier, rather than obsolete.