Building cutting-edge technological capabilities within their existing workforce is among the most pressing business challenges organizations face today. The accountancy firm PwC is taking a notably aggressive approach to this upskilling challenge, giving employees as much as 18 to 24 months to devote to immersive learning of new skills, with half their time spent training in these skills and the other half working with clients to put them to use. Ron Miller recently profiled PwC’s Digital Accelerator program at TechCrunch:
[Sarah McEneaney, digital talent leader at PwC] estimates if a majority of the company’s employees eventually opt in to this retraining regimen, it could cost some serious cash, around $100 million. That’s not an insignificant sum, even for a large company like PwC, but McEneaney believes it should pay for itself fairly quickly. As she put it, customers will respect the fact that the company is modernizing and looking at more efficient ways to do the work they are doing today. …
Members of the program are given a 3-day orientation. After that, they follow self-directed coursework. They are encouraged to work with other people in the program, which is especially important since participants bring a range of skill levels to the subject matter, from absolute beginners to those with more advanced understanding. People can meet in an office if they are in the same area, at a coffee shop, or in an online meeting, as they prefer. Each member of the program participates in a Udacity nano-degree program, learning a new set of skills related to whatever technology specialty they have chosen.
The program focuses on a critical set of digital skills that are increasingly in-demand and where expertise is in short supply: data and analytics, automation and robotics, and AI and machine learning. McEneaney and PwC’s Chief People Officer Mike Fenlon expanded on their philosophy in a recent piece at the Harvard Business Review, detailing the process through which the program was designed and touting its success at fostering innovation and a growth mindset throughout the organization:
Amazon canceled a multi-year project to develop an experimental automated recruiting engine after the e-commerce giant’s machine learning team discovered that the system was exhibiting explicit bias against women, Reuters reports. The engine, which the team began building in 2014, used artificial intelligence to filter résumés and score candidates on a scale from one to five stars. Within a year of starting the project, however, it became clear that the algorithm was discriminating against female candidates when reviewing them for technical roles.
Because the AI was taught to evaluate candidates based on patterns it found in ten years of résumés submitted to Amazon, most of which came from men, the system “taught itself that male candidates were preferable,” according to Reuters:
It penalized resumes that included the word “women’s,” as in “women’s chess club captain.” And it downgraded graduates of two all-women’s colleges, according to people familiar with the matter. They did not specify the names of the schools. Amazon edited the programs to make them neutral to these particular terms. But that was no guarantee that the machines would not devise other ways of sorting candidates that could prove discriminatory, the people said.
The company scuttled the project by the start of 2017 after executives lost faith in it. By that time, however, it may have already helped perpetuate gender bias in Amazon’s own hiring practices. The company told Reuters its recruiters never used the engine to evaluate candidates, but did not dispute claims from people familiar with the project that they had looked at the recommendations it generated.
After successfully piloting its AI-enhanced job search technology, Cloud Talent Solution, with select customers including Johnson & Johnson and CareerBuilder, Google made the product publicly available last week, VentureBeat reported:
Cloud Talent Solution, which launched as Cloud Jobs API in 2016, is a development platform for job search workloads that factors in desired commute time, mode of transit, and other preferences in matching employers with job seekers. It also powers automated job alerts and saved search alerts. According to Google, CareerBuilder, which uses Cloud Talent Solution, saw a 15 percent lift in users who view jobs sent through alerts and a 41 percent increase in “expression of interest” actions from those users.
Alongside the public launch of Cloud Talent Solution, Google introduced a new feature to the toolset: profile search. It allows staffing agencies and enterprise hiring companies to sift quickly through databases of past candidates using natural phrases like “front-end engineer” or “mid-level manager.” Profile search is available today in private beta.
Organizations can try Cloud Talent Solution out for free (pricing kicks in at over 10,000 queries per month) directly through the Google Cloud platform, or request access through one of Google’s talent acquisition technology provider partners.
The public rollout of Cloud Talent Solution is another sign of Google’s extensive investment in AI and machine learning and the rapidly growing application of these technologies to talent acquisition and management. It is just one of several avenues through which Google is moving into the recruiting market.
As machine learning algorithms are called upon to make more decisions for organizations, including talent decisions like recruiting and assessment, it is becoming increasingly crucial to ensure that the performance of these algorithms is regularly monitored and reviewed, just like the performance of an employee. While automation has been held up as a way to eliminate errors of human judgment from bias-prone processes like hiring, in reality, algorithms are only as good as the data from which they learn; if that data contains biases, the algorithm will learn to emulate them.
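To make that failure mode concrete, here is a minimal sketch, using entirely hypothetical data and a deliberately naive scorer of my own construction (not any vendor's actual model), of how a system trained on historically skewed hiring outcomes reproduces the skew:

```python
from collections import Counter

# Hypothetical historical hiring data: each résumé is a set of terms,
# labeled with whether the candidate was hired. The sample deliberately
# skews against résumés mentioning "women's", mirroring the Amazon story.
history = [
    ({"python", "chess club captain"}, True),
    ({"java", "rowing team"}, True),
    ({"python", "debate society"}, True),
    ({"python", "women's chess club captain"}, False),
    ({"java", "women's rowing team"}, False),
]

# "Train" a naive scorer: a term's weight is the hire rate among
# historical résumés containing that term.
term_totals, term_hires = Counter(), Counter()
for terms, hired in history:
    for term in terms:
        term_totals[term] += 1
        term_hires[term] += hired

def score(terms):
    """Average the learned weights of a résumé's known terms."""
    weights = [term_hires[t] / term_totals[t] for t in terms if t in term_totals]
    return sum(weights) / len(weights) if weights else 0.0

# Two otherwise identical candidates; only one mentions "women's".
print(score({"python", "chess club captain"}))          # higher score
print(score({"python", "women's chess club captain"}))  # lower score, purely from biased history
```

Nothing in the code mentions gender explicitly; the penalty emerges entirely from the correlations in the training data, which is why auditing inputs matters as much as auditing the model.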
The risk of algorithmic bias is a matter of pressing concern for organizations taking the leap into AI- and machine learning-enhanced HR processes. The most straightforward solution is to rigorously scrutinize the data you are feeding your algorithm and develop checks against biases that might arise from past practices. Diversifying the teams that design and deploy these algorithms can also help ensure that the organization is sensitive to the biases that might arise. As large technology companies make massive investments in these emerging technologies, they are becoming aware of these challenges and looking for technological solutions to the problem. At Fast Company last week, Adele Peters took a look at Accenture’s new Fairness Tool, a program “designed to quickly identify and then help fix problems in algorithms”:
The tool uses statistical methods to identify when groups of people are treated unfairly by an algorithm, defining fairness as predictive parity, meaning that the algorithm is equally likely to be correct or incorrect for each group. “In the past, we have found models that are highly accurate overall, but when you look at how that error breaks down over subgroups, you’ll see a huge difference between how correct the model is for, say, a white man versus a black woman,” [Rumman Chowdhury, Accenture’s global responsible AI lead,] says.
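The subgroup audit Chowdhury describes can be sketched in a few lines. This is my own illustration of computing per-group error rates, not Accenture's actual Fairness Tool, and the groups and predictions are invented:

```python
from collections import defaultdict

def error_rates_by_group(records):
    """Per-group error rate for a model's predictions.

    records: iterable of (group, prediction, actual) tuples.
    Returns a dict mapping each group to its error rate.
    """
    errors, totals = defaultdict(int), defaultdict(int)
    for group, pred, actual in records:
        totals[group] += 1
        errors[group] += (pred != actual)
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical audit data: overall accuracy looks acceptable,
# but every error falls on one subgroup.
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 0, 1), ("group_b", 1, 1), ("group_b", 0, 0),
]
rates = error_rates_by_group(records)
print(rates)  # prints {'group_a': 0.0, 'group_b': 0.5}
```

The overall error rate here is 25 percent, which hides the fact that the model is perfect for one group and wrong half the time for the other; that gap is exactly what a parity audit surfaces.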
Google and the online learning platform Coursera are launching a five-course machine learning specialization to teach developers how to build machine learning models using the TensorFlow framework, Frederic Lardinois reports at TechCrunch:
The new specialization, called “Machine Learning with TensorFlow on Google Cloud Platform,” has students build real-world machine learning models. It takes them from setting up their environment to learning how to create and sanitize datasets to writing distributed models in TensorFlow, improving the accuracy of those models and tuning them to find the right parameters.
As Google’s Big Data and Machine Learning Tech Lead Lak Lakshmanan told me, his team heard that students and companies really liked the original machine learning course but wanted an option to dig deeper into the material. Students wanted to know not just how to build a basic model but also how to then use it in production in the cloud, for example, or how to build the data pipeline for it and figure out how to tune the parameters to get better results. …
It’s worth noting that these courses expect that you are already a somewhat competent programmer. While it has gotten much easier to start with machine learning thanks to new frameworks like TensorFlow, this is still an advanced skill.
The new series is a continuation of Google’s longstanding partnership with Coursera, through which the tech giant went public with its internal IT support training curriculum earlier this year.
Apple made a big move in the battle for top AI talent this week, hiring John Giannandrea away from Google, where he had until Monday been chief of search and artificial intelligence. Apple announced on Tuesday that Giannandrea would lead its machine learning and AI strategy, reporting directly to CEO Tim Cook, the New York Times reported:
Apple has made other high-profile hires in the field, including the Carnegie Mellon professor Russ Salakhutdinov. Mr. Salakhutdinov studied at the University of Toronto under Geoffrey Hinton, who helps oversee the Google Brain lab.
Apple has taken a strong stance on protecting the privacy of people who use its devices and online services, which could put it at a disadvantage when building services using neural networks. Researchers train these systems by pooling enormous amounts of digital data, sometimes from customer services. Apple, however, has said it is developing methods that would allow it to train these algorithms without compromising privacy.
Cook stressed Apple’s commitment to charting a privacy-conscious course on AI development in his statement on Tuesday, saying Giannandrea “shares our commitment to privacy and our thoughtful approach as we make computers even smarter and more personal.” While safeguarding users’ privacy may pose a significant technical challenge in AI and machine learning, that commitment could have an upside from a marketing perspective at a time when tech companies are facing heightened scrutiny and criticism of their data privacy practices.
Google Hire, the search giant’s recruiting and applicant tracking application, has been updated with a new feature called candidate discovery that is designed to help hiring managers more easily keep track of past candidates who might be good fits for newly open positions, Google announced on its blog last Wednesday. According to the company, the new feature enables managers to:
- Find qualified candidates immediately upon opening a job. The first step in filling a role should be checking who you already know that fits the job criteria. Candidate discovery creates a prioritized list of past candidates based on how well their profiles match the title, job description, and location.
- Use a search capability that understands what they are looking for. Candidate discovery understands the intent of what recruiters and hiring managers are looking for. It takes a search phrase like “sales manager Bay Area” and immediately understands the skills and experiences relevant to that job title, as well as which cities are part of the Bay Area. That means the search results will include candidates with sales management skills even if their past job titles are not an exact keyword match.
- Easily search by previous interactions with candidates. Hire lets recruiters search and filter based on their previous interactions with a candidate, such as the interview feedback the candidate received or whether they were previously extended an offer. Candidates with positive feedback will rank higher in search results than those without, and candidates who received an offer in the past but declined it will rank higher than those who were previously rejected.
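The ranking rules in that last bullet amount to a relevance score adjusted by past-interaction signals. The sketch below is purely my own illustration of the described behavior, not Google Hire's actual implementation, and the boost values are arbitrary:

```python
# Hypothetical boosts: positive feedback outranks no feedback, and a
# declined offer outranks a past rejection, per the described behavior.
FEEDBACK_BOOST = {"positive": 2, "negative": -1}
OUTCOME_BOOST = {"offer_declined": 1, "rejected": -1}

def rank(candidates):
    """Sort candidates by base relevance plus interaction boosts."""
    def score(c):
        return (c["relevance"]
                + FEEDBACK_BOOST.get(c.get("feedback"), 0)
                + OUTCOME_BOOST.get(c.get("outcome"), 0))
    return sorted(candidates, key=score, reverse=True)

# Three equally relevant candidates with different histories.
candidates = [
    {"name": "A", "relevance": 5, "outcome": "rejected"},
    {"name": "B", "relevance": 5, "feedback": "positive"},
    {"name": "C", "relevance": 5, "outcome": "offer_declined"},
]
print([c["name"] for c in rank(candidates)])  # prints ['B', 'C', 'A']
```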
The feature is now available in beta to all Google Hire users, a pool currently limited to small and mid-sized US employers using its G Suite of enterprise software products. Matt Charney took a more detailed technical look at the product for Recruiting Daily, noting that “traditional search engines are notoriously bad at searching for individual people and profiles,” which may be why it’s taken Google so long to expand into this space. Now that it has, however, it’s a pretty big deal: