Google Adds New Search Feature to Help Veterans Find Jobs

Google on Monday introduced a feature in its job search functionality specifically geared toward helping veterans find jobs. Matthew Hudson, a program manager for Google Cloud who previously served in the US Air Force as a civil engineer, announced the news in a blog post:

Starting today, service members can search ‘jobs for veterans’ on Google and then enter their specific military job codes (MOS, AFSC, NEC, etc.) to see relevant civilian jobs that require similar skills to those used in their military roles. We’re also making this capability available to any employer or job board to use on their own property through our Cloud Talent Solution. As of today, service members can enter their military job codes on any career site using Talent Solution, including FedEx Careers, Encompass Health Careers, Siemens Careers, CareerBuilder and Getting Hired.
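Conceptually, the feature works like a crosswalk from military occupation codes to civilian roles that draw on similar skills. Below is a toy sketch of such a lookup; the codes, titles, and mapping are purely illustrative assumptions and are not Google’s data or the Cloud Talent Solution API.

```python
# Toy illustration of a military-to-civilian job-code crosswalk lookup.
# The mapping below is hypothetical and is not Google's data or the
# Cloud Talent Solution API.
MOS_CROSSWALK = {
    "3E5X1": ["Civil Engineer", "Construction Project Manager"],  # illustrative AFSC
    "25B": ["IT Support Specialist", "Network Administrator"],    # illustrative MOS
}

def civilian_matches(military_code: str) -> list:
    """Return civilian job titles associated with a military job code."""
    return MOS_CROSSWALK.get(military_code.strip().upper(), [])

if __name__ == "__main__":
    print(civilian_matches("25b"))  # ['IT Support Specialist', 'Network Administrator']
```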

This is just one of several steps the search giant is taking to support veterans. To help those who start their own businesses, Google will now allow establishments to identify themselves as veteran-owned or veteran-led when they appear on Google Maps or in mobile search listings. Additionally, Google.org is giving a $2.5 million grant to the United Service Organizations (USO) to incorporate the Google IT support certificate into its programming. Google first made the certification available outside the company earlier this year through a partnership with Coursera.

Read more

Several Companies Developing Tools to Address Algorithmic Bias

As machine learning algorithms are called upon to make more decisions for organizations, including talent decisions like recruiting and assessment, it’s becoming even more crucial to make sure that the performance of these algorithms is regularly monitored and reviewed just like the performance of an employee. While automation has been held up as a way to eliminate errors of human judgment from bias-prone processes like hiring, in reality, algorithms are only as good as the data from which they learn, and if that data contains biases, the algorithm will learn to emulate those biases.
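To make that concrete, here is a minimal sketch of how skewed historical hiring labels can be surfaced before a model is ever trained on them. The DataFrame, the column names (“group”, “hired”), and the threshold are hypothetical assumptions, not anything from the articles discussed here.

```python
# Minimal sketch: surface skew in historical hiring labels before training.
# The DataFrame, column names ("group", "hired"), and threshold are
# hypothetical, used only to illustrate the point above.
import pandas as pd

def selection_rates(history: pd.DataFrame) -> pd.Series:
    """Share of applicants hired within each demographic group."""
    return history.groupby("group")["hired"].mean()

def labels_look_skewed(history: pd.DataFrame, max_gap: float = 0.10) -> bool:
    """True if hire rates across groups differ by more than max_gap."""
    rates = selection_rates(history)
    return bool(rates.max() - rates.min() > max_gap)

if __name__ == "__main__":
    history = pd.DataFrame({
        "group": ["A", "A", "A", "A", "B", "B", "B", "B"],
        "hired": [1, 1, 1, 0, 1, 0, 0, 0],
    })
    print(selection_rates(history))     # A: 0.75, B: 0.25
    print(labels_look_skewed(history))  # True
```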

The risk of algorithmic bias is a matter of pressing concern for organizations taking the leap into AI- and machine learning-enhanced HR processes. The most straightforward defense is to rigorously scrutinize the data you are feeding your algorithm and to develop checks against biases rooted in past practices. Diversifying the teams that design and deploy these algorithms can also help ensure that the organization stays alert to biases it might otherwise miss. As large technology companies make massive investments in these emerging technologies, they are becoming more aware of these challenges and are looking for technological solutions as well. At Fast Company last week, Adele Peters took a look at Accenture’s new Fairness Tool, a program “designed to quickly identify and then help fix problems in algorithms”:

The tool uses statistical methods to identify when groups of people are treated unfairly by an algorithm–defining unfairness as predictive parity, meaning that the algorithm is equally likely to be correct or incorrect for each group. “In the past, we have found models that are highly accurate overall, but when you look at how that error breaks down over subgroups, you’ll see a huge difference between how correct the model is for, say, a white man versus a black woman,” [Rumman Chowdhury, Accenture’s global responsible AI lead,] says.
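Here is a minimal sketch of the kind of per-group error check that definition implies: compare how often a model is correct for each subgroup and measure the gap. The data and column names are illustrative, and this is not Accenture’s Fairness Tool.

```python
# Minimal sketch of a per-group error check: compare how often a model is
# correct for each subgroup. Data and column names are illustrative; this is
# not Accenture's Fairness Tool.
import numpy as np
import pandas as pd

def per_group_accuracy(y_true, y_pred, groups) -> pd.Series:
    """Prediction accuracy computed separately for each group."""
    df = pd.DataFrame({
        "correct": np.asarray(y_true) == np.asarray(y_pred),
        "group": groups,
    })
    return df.groupby("group")["correct"].mean()

def accuracy_gap(y_true, y_pred, groups) -> float:
    """Largest difference in accuracy between any two groups."""
    acc = per_group_accuracy(y_true, y_pred, groups)
    return float(acc.max() - acc.min())

if __name__ == "__main__":
    y_true = [1, 0, 1, 1, 0, 0, 1, 0]
    y_pred = [1, 0, 1, 1, 0, 1, 0, 1]
    groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
    print(per_group_accuracy(y_true, y_pred, groups))  # a: 1.00, b: 0.25
    print(accuracy_gap(y_true, y_pred, groups))        # 0.75
```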

Read more

People Are More Open to Algorithmic Judgment Than You Might Think

When it comes to making judgments based on large data sets, machines are often superior to humans, but many business leaders remain skeptical of the guidance produced by their organizations’ data analytics programs, particularly when it comes to talent analytics. That skepticism derives largely from doubts about the quality of the data the organization is collecting, but there is also a natural tendency among people who make strategic decisions for a living to reject the notion that an algorithm could do parts of their job as well as or better than they can.

While this may be true of executives and high-level professionals, some recent research suggests that most people are actually comfortable with the decisions algorithms make, and even more trusting of them than of judgments made by humans. A new study from Harvard Business School, led by postdoctoral fellow Jennifer M. Logg, finds that “lay people adhere more to advice when they think it comes from an algorithm than from a person”:

People showed this sort of algorithm appreciation when making numeric estimates about a visual stimulus (Experiment 1A) and forecasts about the popularity of songs and romantic matches (Experiments 1B and 1C). Yet, researchers predicted the opposite result (Experiment 1D). Algorithm appreciation persisted when advice appeared jointly or separately (Experiment 2). However, algorithm appreciation waned when people chose between an algorithm’s estimate and their own (versus an external advisor’s—Experiment 3) and they had expertise in forecasting (Experiment 4). Paradoxically, experienced professionals, who make forecasts on a regular basis, relied less on algorithmic advice than lay people did, which hurt their accuracy.

Our colleagues here at Gartner have also investigated consumers’ attitudes toward AI and found that these attitudes are more welcoming than conventional wisdom might lead you to believe. The 2018 Gartner Consumer AI Perceptions Study found that overall, consumers are not skeptical of the potential usefulness of AI, though they do have some concerns about its impact on their skills, social relationships, and privacy. The study was conducted online during January and February 2018 among 4,019 respondents in the US and UK. Respondents ranged in age from 18 through 74 years old, with quotas and weighting applied for age, gender, region, and income.

Read more

Who’s to Blame for a Biased Algorithm?

No one ever intends to create a biased algorithm, and there are huge downsides to using one, so why do these algorithms keep appearing, and whose fault is it when they do? The simplest answer to the first question is that algorithmic bias is legitimately hard to avoid. As for the second, there is no consensus between algorithm developers and their customers about who is ultimately responsible for quality. In reality, both are to blame.

Vendors and in-house data science teams have plenty of options for mitigating bias in their algorithms, from reducing cognitive biases and including more female programmers to running checklists of quality tests and launching AI ethics boards. Unfortunately, they are seldom motivated to take these steps proactively, because doing so lengthens their timelines and raises the risk of an adverse finding that can derail a project indefinitely.

At the same time, clients are not asking for oversight or testing beyond what the developer offers them; the client usually doesn’t know enough about how these algorithms work to ask the probing questions that might expose problems. As a result, vendors don’t test or take precautions beyond their own minimum standards, which can vary widely.

In a recent interview with Employee Benefit News, HireVue’s Chief IO Psychologist Nathan Mondragon discussed a situation in which his company built a client an employee selection algorithm that failed adverse impact tests. The bias, Mondragon said, was not created by HireVue’s algorithm, but rather already existed in the company’s historical hiring data, skewing the algorithm’s results. In his description, they told the customer: “There’s no bias in the algorithm, but you have a bias in your hiring decisions, so you need to fix that or … the system will just perpetuate itself.”
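For context, adverse impact testing in selection typically compares group-level selection rates, most commonly against the four-fifths rule. The sketch below is a generic illustration of that screen using assumed numbers; it is not HireVue’s methodology.

```python
# Generic illustration of a four-fifths-rule adverse impact screen.
# This is not HireVue's methodology; the numbers are hypothetical.
def adverse_impact_ratios(selected: dict, applicants: dict) -> dict:
    """Each group's selection rate relative to the highest group's rate."""
    rates = {g: selected[g] / applicants[g] for g in applicants}
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

def fails_four_fifths(selected: dict, applicants: dict) -> bool:
    """True if any group's ratio falls below 0.8 (the four-fifths rule)."""
    return any(r < 0.8 for r in adverse_impact_ratios(selected, applicants).values())

if __name__ == "__main__":
    applicants = {"group_a": 200, "group_b": 150}
    selected = {"group_a": 60, "group_b": 30}           # rates: 0.30 vs. 0.20
    print(adverse_impact_ratios(selected, applicants))  # group_b ratio ≈ 0.67
    print(fails_four_fifths(selected, applicants))      # True
```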

In this case, Mondragon is right that responsibility for the bias identified in the adverse impact test began with the client. However, I would argue that vendors who do this work repeatedly for many clients should anticipate this outcome and accept some responsibility for not detecting the bias at the start of the project or mitigating it in the course of algorithm development. Finding out that bias exists in the historical data only at the adverse impact testing phase, typically one of the last steps, is the developer’s fault.

Read more

Do Your Algorithms Need a Performance Review?

In today’s digital organizations, HR departments are increasingly using algorithms to aid in their decision-making by predicting who is a retention risk, who is ready for a promotion, and whom to hire. For the employees and candidates subjected to these decisions, these are important, even life-changing, events, and so we would expect the people making them to be closely supervised and held to a set of known performance criteria. Does anyone supervise the algorithms in the same way?

Algorithms don’t monitor themselves. Replacing a portion of your recruiting team with AI doesn’t obviate the need to manage the performance of that AI in the same way you would have managed the performance of the recruiter. To ensure that the decisions of an AI-enhanced HR function are fair, accurate, and right for the business, organizations must establish performance criteria for algorithms and a process to review them periodically.
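As a hypothetical illustration of what such a periodic review might check, the sketch below evaluates a few assumed criteria (an accuracy floor, a subgroup gap, and input drift) and returns findings for follow-up. The metric names and thresholds are assumptions, not an established framework.

```python
# Hypothetical sketch of a periodic "performance review" for an HR algorithm:
# check agreed-upon criteria each cycle and return findings for follow-up.
# Metric names and thresholds are assumptions, not an established framework.
from dataclasses import dataclass

@dataclass
class ReviewCriteria:
    min_accuracy: float = 0.80   # floor on overall predictive quality
    max_group_gap: float = 0.05  # max allowed accuracy gap between subgroups
    max_drift: float = 0.10      # max allowed shift in inputs since last review

def review_algorithm(metrics: dict, criteria: ReviewCriteria) -> list:
    """Return a list of findings; an empty list means the review passed."""
    findings = []
    if metrics["accuracy"] < criteria.min_accuracy:
        findings.append(f"accuracy {metrics['accuracy']:.2f} is below the floor")
    if metrics["group_gap"] > criteria.max_group_gap:
        findings.append(f"subgroup accuracy gap {metrics['group_gap']:.2f} is too large")
    if metrics["drift"] > criteria.max_drift:
        findings.append(f"input drift {metrics['drift']:.2f} exceeds the limit")
    return findings

if __name__ == "__main__":
    quarterly_metrics = {"accuracy": 0.83, "group_gap": 0.07, "drift": 0.04}
    print(review_algorithm(quarterly_metrics, ReviewCriteria()) or ["review passed"])
```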

A recent special report in The Economist illustrates the significant extent to which AI is already changing the way HR works. The report covers eight major companies that are now using algorithms in human resource management, which they either developed internally or bought from a growing field of vendors for use cases including recruiting, internal mobility, retention risk, and pay equity. These practices are increasingly mainstream; 2018 may mark the year of transition between “early adopters” and “early majority” in the life cycle of this technology.

It is essential at this point that leaders ask themselves whether their organizations have management practices in place to supervise the decisions of these algorithms. The Economist concludes its piece with a reminder about transparency, supervision, and bias, noting that companies “will need to ensure that algorithms are being constantly monitored,” particularly when it comes to preventing bias.

Read more

How Can Organizations Avoid Algorithmic Bias?

Eliminating personal biases from recruiting and performance management is supposed to be one of the advantages artificial intelligence has over human beings when it comes to organizational decision-making. As AI and other new technologies have made their presence felt more and more in the business world over the past few years, several platforms have emerged that promise to remove bias from the hiring process through various technological fixes, and one startup has even created an AI project manager that it says is capable of hiring programmers based solely on the quality of their work.

On the other hand, algorithms and AIs are designed by human beings with biases of their own, and the data fed to machine learning programs to teach them how to think can reflect systemic biases in the society that generated that data. That’s why two researchers at Microsoft and Google have launched a new initiative called AI Now to understand the social implications of an algorithmically managed world. Will Knight profiles the initiative at the MIT Technology Review:

The founders of the new AI Now Initiative, Kate Crawford, a researcher at Microsoft, and Meredith Whittaker, a researcher at Google, say bias may exist in all sorts of services and products. “It’s still early days for understanding algorithmic bias,” Crawford and Whittaker said in an e-mail. “Just this year we’ve seen more systems that have issues, and these are just the ones that have been investigated.” …

A key challenge, these and other researchers say, is that crucial stakeholders, including the companies that develop and apply machine learning systems and government regulators, show little interest in monitoring and limiting algorithmic bias. Financial and technology companies use all sorts of mathematical models and they aren’t transparent about how they operate.

The risk of algorithmic bias is important for employers to keep in mind as they explore the new options available for incorporating AI and machine learning in recruiting. Fortunately, if created and deployed wisely, algorithmic recruiting methods can avoid these pitfalls and in fact be less biased than a process that relies on human judgment, Jean Martin, talent solutions architect at CEB (now Gartner), and Aman Alexander write at Recruiting Trends.

Read more

Morgan Stanley Arms Advisors with Algorithms as Finance Embraces Automation

Morgan Stanley is poised to implement a project known as “next best action,” which will equip its 16,000 financial advisors with machine learning algorithms to help them provide better and more timely recommendations to clients, Hugh Son reports at Bloomberg:

At Morgan Stanley, algorithms will send employees multiple-choice recommendations based on things like market changes and events in a client’s life, according to Jeff McMillan, chief analytics and data officer for the bank’s wealth-management division. Phone, email and website interactions will be cataloged so machine-learning programs can track and improve their suggestions over time to generate more business with customers, he said. …

The idea is that advisers, who typically build relationships with hundreds of clients over decades, face an overwhelming amount of information about markets and the lives of their wealthy wards. New York-based Morgan Stanley is seeking to give humans an edge by prodding them to engage at just the right moments.
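As a purely hypothetical sketch of the kind of ranking step a “next best action” system might perform, the snippet below scores candidate outreach actions against recent client events and surfaces the top suggestions. The events, actions, and weights are invented for illustration and do not reflect Morgan Stanley’s implementation.

```python
# Purely hypothetical sketch of a "next best action" ranking step: score
# candidate outreach actions against recent client events and surface the
# top suggestions. Events, actions, and weights are invented for illustration.
CANDIDATE_ACTIONS = {
    "suggest_portfolio_review": {"market_drop": 0.8, "new_child": 0.2},
    "propose_college_savings_plan": {"market_drop": 0.0, "new_child": 0.9},
    "schedule_check_in_call": {"market_drop": 0.4, "new_child": 0.4},
}

def next_best_actions(client_events: dict, top_n: int = 2) -> list:
    """Rank actions by how strongly they match the client's recent events."""
    scores = {
        action: sum(weights.get(event, 0.0) * strength
                    for event, strength in client_events.items())
        for action, weights in CANDIDATE_ACTIONS.items()
    }
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

if __name__ == "__main__":
    # A client affected by a recent market dip who also just had a child.
    print(next_best_actions({"market_drop": 1.0, "new_child": 1.0}))
```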

The move comes at a time when the financial industry is shifting increasingly toward automation overall, with artificial intelligence beginning to replace human talent at some insurers and hedge funds. Back in February, Son reported that JPMorgan Chase’s machine learning software had taken over tasks that used to consume 360,000 hours of lawyers’ and loan officers’ time each year, and in January, GeekWire’s Dan Richman took a look at a project by the “robo-advisor” company LendingRobot that is billed as a fully automated hedge fund, using machine learning and blockchain technology to invest without any human intervention.

Read more