As digital technologies become more prominent in how organizations work, employers are balancing the need for employees with digital and other hard skills with the need for employees with “soft” social, interpersonal, and communication skills. In fact, employers are increasingly prioritizing social and emotional skills; McKinsey, for example, predicts that skills such as communication, pattern recognition, logical reasoning, and creativity will be in high demand in the coming decades.
With these soft skills in high demand, Jake Bullinger proposed in a recent article at Fast Company that for-profit organizations consider hiring trained social workers to fill that need. Bullinger talks to Michàlle Mor Barak, a University of Southern California social work professor, who notes that companies today require expertise in societal good as they come under increasing pressure to prioritize things like corporate social responsibility, work-life balance, and diversity and inclusion, concerns that weren’t on their radar a few decades ago. Social workers and other experts in social and emotional issues could be particularly helpful in people management and community engagement, Bullinger writes:
A human resources department staffed with therapists could better handle harassment claims, and recruiters working with social scientists could better target minority candidates. Corporate philanthropy arms would benefit, one can surmise, from case workers who understand a community’s greatest needs. The people best suited to run diversity and inclusion efforts might be those who study diversity and inclusion for a living.
I graduated with a master’s degree in social work in 2005 and have spent most of my career working in for-profit organizations. From my vantage point, social workers can provide an array of benefits, but organizations need to be realistic about what they can and can’t do.
The high monetary costs of having children are well known to working parents and the employers looking to support them. According to US Census data, child care costs skyrocketed by more than 50 percent in inflation-adjusted dollars between 1985 and 2011. These costs have been blamed for holding women back in the workforce by making it challenging for couples to start families without scaling back one of their careers: in the case of heterosexual couples, that usually means the woman’s, as she typically earns less money than her male partner.
Yet a study recently highlighted in the Wall Street Journal suggests that the total costs of motherhood are difficult for many working women to anticipate. “The Mommy Effect: Do Women Anticipate the Employment Effects of Motherhood?” by economists Jessica Pan, Ilyana Kuziemko, Jenny Shen, and Ebonya Washington finds that some women in their childbearing years have “misplaced optimism” about their employment prospects after becoming mothers due to other hard-to-quantify costs associated with having children. As the Journal noted, a recent US government survey found that 64 percent of women with bachelor’s degrees and children under the age of six agreed that “being a parent is harder than I thought it would be”; fewer than 40 percent of similarly situated men agreed.
Beyond the financial costs, having children also carries time and emotional “costs” that are harder to plan for.
The digital age has its pros and cons for the workforce. Technology provides employees with faster, easier access to information and data. It also allows for greater personalization and more interaction between employee and employer. Yet the digitalization of the workplace does have its downsides. Consider smartphones, for example: They can be alternately distracting and distressing; they can create barriers to action like information overload and decision fatigue, as well as work-life balance issues stemming from an “always-on” mentality.
Some managers, frustrated with the ubiquity of these devices and their ability to distract employees, are banning phones from meetings or otherwise limiting their use in the workplace, the Wall Street Journal’s John Simons wrote in a feature last week. Simons points to studies indicating that executives and managers consider smartphones “the leading productivity killers in the workplace” and that the presence of a phone can harm people’s cognitive performance, even when they are not using or holding it. He also notes Google’s recent announcement that the next version of its Android operating system will introduce a feature enabling users to see how much time they spend on their phones, which apps they use the most, and how often the phone gets unlocked.
Our recent research at CEB, now Gartner, also underscores these downsides of technology at work. While solutions to help employees minimize time wasted on tech, like Google’s forthcoming Android time tracker, might be helpful, our research suggests that no technological intervention can have a meaningful impact on employee performance or the employee experience by itself. The limitations are striking, given the large investments organizations (and HR functions in particular) are making in technology to support employees. But the challenges employers face are human and organizational, not just technological—and the same must be true of any solution.
Earlier this month, Jeffrey Immelt was replaced as CEO of General Electric after 16 years at the helm of the company. Much of the coverage has depicted Immelt’s stepping down as a result of investors losing confidence in his leadership after GE’s stock underperformed in the past year, as in this Bloomberg report, for example:
Amid mounting pressure from activist investor Trian Fund Management, GE said Monday that Immelt will be replaced by John Flannery, a 30-year company veteran who oversaw a jump in profits at the health-care unit. In a sign of just how great opposition to Immelt had become in the investing community, the stock soared the most in more than a year and a half after the announcement was made.
This was not a snap decision by GE’s board of directors, however. In fact, the planning for Immelt’s succession began not in 2016 or 2015, but all the way back in 2011. Susan Peters, Senior Vice President for Human Resources at GE, shared the company’s strategy in a LinkedIn post, illustrating a thoughtful process befitting a giant corporation responsible for hundreds of thousands of employees and hundreds of billions of dollars in assets:
First, we knew it would take years to move potential candidates through the leadership roles that would develop them. We began intentional moves of key leaders to give them new, stretch experiences with ever increasing exposure to complexity.
By 2012, we wrote the job description and then continuously evolved it. We focused on the attributes, skills and experiences needed for the next CEO, based on everything we knew about the environment, the company’s strategy and culture.
Ask any three people to define who they are talking about when they refer to “millennials,” and you’re apt to get at least four different answers. Generational boundaries are always somewhat blurry, but this generation seems particularly hard to pin down. Demographers usually define millennials as everyone born between the early 1980s and the late 1990s, or sometimes the early 2000s, but the lines are not hard and fast.
Meanwhile, the difference in life experience between a person born in 1982 and one born in 1998 is substantial: For one thing, the former remembers a time before the Internet, while the latter is a genuine digital native. For another, the older millennial was already in the workforce during the financial crisis of 2008, while the younger was still in grade school. Those are some of the key differences that lead Jesse Singal at Science of Us to wonder whether the millennials are not in fact two distinct generations, which he dubs Old Millennials and Young Millennials:
Old Millennials, as I’ll call them, who were born around 1988 or earlier (meaning they’re 29 and older today), really have lived substantively different lives than Young Millennials, who were born around 1989 or later, as a result of two epochal events that occurred around the time when members of the older group were mostly young adults and when members of the younger were mostly early adolescents: the financial crisis and smartphones’ profound takeover of society. And according to Jean Twenge, a social psychologist at San Diego State University and the author of Generation Me: Why Today’s Young Americans Are More Confident, Assertive, Entitled—and More Miserable Than Ever Before, there’s some early, emerging evidence that, in certain ways, these two groups act like different, self-contained generations.
One of these differences concerns the stereotype of the millennial job hopper, which persists despite not really being borne out in labor market data. Singal, who was born in 1983, adds that his life experience and that of his contemporaries don’t fit that narrative:
So-called “gig workers” are not generally entitled to the same benefits and perks as regular employees, but as contingent labor makes up a greater portion of the workforce, many employers are concerned about how to provide benefits like health insurance or retirement savings plans to this new and different type of worker, for whom many existing benefit systems and regulations do not account. Companies like Uber and Care.com, whose business models depend on drivers or caregivers being classified as independent contractors rather than employees, have been experimenting since last year with ways to deliver retirement and health insurance benefits to those who are employed through their platforms, but not by them.
This issue is also entering discussions about public policy: Last September, the online handicraft retailer Etsy published a proposal imagining a new form of “social safety net” for gig workers and recommending a series of policy changes to that end. Since last year, New York State has been developing legislation that would establish a model for gig economy workers to receive portable benefits while remaining independent contractors under state law. In the meantime, Mark Feffer at SHRM offers employers some suggestions on how to give contingent workers benefits without running the risk of causing them to be reclassified as full-time employees:
Experts say there are two central elements to fashioning a benefits package that will attract the best gig workers with minimal risk to the company:
- First, understand that independents consider more than money when deciding whether or not to take an assignment.
- Second, make sure whatever you offer is portable, something the worker can access even after his or her assignment has ended.
Employee monitoring technology is looking more and more like the wave of the future for organizations looking to maximize their workforce’s efficiency and productivity. A growing number of vendors are now offering monitoring systems that use a complex series of badges or sensors, along with advanced data processing, to track employees’ movements in the office, their activity, their communication, and even their emotional state. At Bloomberg, Rebecca Greenfield discusses the implications of this emerging technology for employees’ privacy:
Legally speaking, U.S. businesses are within their rights to go full-on Eye of Sauron. “Employers can do any kind of monitoring they want in the workplace that doesn’t involve the bathroom,” says Lewis Maltby, president of the National Workrights Institute. And as long as the data is anonymized, as Enlighted’s is, some people don’t mind tracking if it makes work life easier. “It doesn’t bother me. It doesn’t feel intrusive,” says Luke Rondel, 31, a design strategist at Gensler. “It’s kind of cozy when you’re working late at night to be in a pod of light.” A majority of U.S. workers the Pew Research Center surveyed last year said they’d tolerate surveillance and data collection in the name of safety.
Research CEB conducted last year also shows that relatively few employees consider it unacceptable for their employers to collect this kind of data. For example: