
AI in Hiring: Friend or Foe?

Analysis | By Delaney Rebernik
March 06, 2024

Don’t let unchecked tech prune your most promising candidates before their application even crosses a human's desk.

AI is making a splash in talent pools. Or is it a belly flop?

According to a February report from Rutgers University's Heldrich Center for Workforce Development, 71% of workers are concerned about the impact of artificial intelligence (AI) on jobs and just as many are worried about employers using the tech in hiring and promotion decisions.

That means HR leaders who want to attract and keep top talent must champion the ethical application of emerging tools in People domains.

It's a task that's easier said than done as implementation pressures mount, governance efforts evolve or sputter, and we grapple with past wrongs shaping our future ways of working.

"We've had unconscious bias for so long and have underestimated women, people of color, people with disabilities for decades in the workplace," says Hilke Schellmann, assistant professor of journalism at New York University and author of The Algorithm: How AI Decides Who Gets Hired, Monitored, Promoted, and Fired and Why We Need to Fight Back Now. "Humans have caused massive damage, so we don't want to go back to that."

In hiring, this can play out like an AI-powered applicant tracking system (ATS) scoring candidates based on irrelevant and potentially discriminatory application details that it mistakes for predictors of success, such as the name "Thomas" or hobby "baseball" appearing on a resume (both real examples from Schellmann's research).

Legal and ethical implications aside, failing to mitigate such risks can mean your AI tool does the exact opposite of what you intend: weeding out your most promising candidates before their applications ever cross a human's desk.

"AI can be a powerful addition to any recruitment team's arsenal … but I think it's equally important who you choose to do business with and why you choose to do business with them," says John Higgins, vice president of talent management at Essentia Health, which has 14 hospitals and more than 15,000 employees in Minnesota, Wisconsin, and North Dakota. 

Higgins has partnered with several third-party vendors to introduce AI-enabled chatbot and programmatic advertising capabilities to accelerate the hiring process and enhance the candidate experience. "Ensuring that [vendors are] eliminating bias is part of the way in which those tools are supporting you. Because the last thing you want to do is make it more difficult, especially in healthcare, to find the right talent."

How AI shows up in the talent space

Nearly all Fortune 500 companies use an ATS. (In 2023, 42 healthcare organizations made the Fortune list.)

Often outfitted with AI and built either in-house or by a vendor, the software helps move candidates through the hiring process by storing their information and materials; tracking the status of applications; and automating time-intensive tasks such as screening for fit, reading resumes, scheduling interviews, and sending out notifications.

Additionally, AI is increasingly used to prescreen candidates before they reach a conversation with a human recruiter, through tools such as gamified assessments and one-way interviews with pre-recorded prompts, both completed from a computer or phone, Schellmann says.

Chatbots are attractive for teams who look after talent, both current and prospective, because they can help manage work influxes stemming from acquisitions, mergers, and notoriously busy seasons. Open enrollment, for example, is the "Super Bowl for HR," says Ayanna Pierce, vice president of benefits and talent relations at Mercy, a St. Louis, Missouri–headquartered health system with more than 50 hospitals and 50,000 co-workers across four states.

To help with such upticks—and replace a chatbot that was retired at the same time as their previous benefits system—Pierce's team collaborated with their tech counterparts to launch Joy in late February, just in time to help with the wind-down of open enrollment. Within a week, the benefits chatbot had answered 1,200 questions, a number that's expected to keep climbing as news of the launch spreads.

"We have so many benefits at Mercy that it can be overwhelming for co-workers," Pierce says. "We communicate a lot and so to have a chatbot that can synthesize the information and help people get what they need when they want it is huge, and it's a satisfier for our co-workers."

Essentia Health partnered with vendor Paradox to launch their recruiting-focused bot, Olivia, which appears as a widget on the system's career website and moves candidates through the hiring process by answering questions in real time, at any time—including when human recruiters are sleeping—about things such as benefits, company culture, and specific roles, Higgins says.

Once candidates are engaged, the bot can help them complete applications for many roles through a simple text conversation rather than any formal submission process. And, because the bot integrates with Essentia Health's instances of Workday and Office 365, it can schedule a phone interview with a recruiter or hiring leader. All told, it makes things "easier" and more "delightful," Higgins says.

How AI can help

Chief among the goals of using AI, talent leaders tell HealthLeaders, is creating resonant experiences for candidates and employees by enhancing—not replacing—the talent team's capabilities.

Recruiting "can be really administrative heavy," Higgins says. Offloading tasks such as scheduling frees up time for recruiters to "be more consultative" by "nurturing their relationships with those candidates and partnering with those hiring leaders to make better hiring decisions."

Beyond empowering teams to work "at the top of their license," Essentia Health uses AI to speed things up for candidates. "It's all about … getting them in front of us more quickly because in today's world—especially in rural healthcare—speed wins."

Because Essentia Health is a rural institution, Higgins also uses AI in a way that bucks prevailing narratives: to widen, rather than narrow, the talent funnel. "The concept of screening out candidates is not something that we want necessarily a technology to do for us," Higgins says. "We're looking more for, how can we screen candidates in?"

To that end, his team uses AI-powered programmatic advertising to mete out marketing dollars swiftly and strategically across all available roles, which can sometimes reach upward of 1,000 at once. "It's monitoring the number of applications that are coming in, and once it hits certain thresholds of, 'hey, you've got plenty of candidates for this job,' it stops funding the advertising for that particular element and shifts the dollars to jobs that aren't receiving applicants," Higgins explains. "It puts dollars where you need them to generate more candidate flow and make sure you're not overspending" on jobs that prove easier to fill.
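To make that reallocation logic concrete, here is a minimal sketch in Python of threshold-based budget shifting. It is not Essentia Health's or its vendor's actual system; the job names, applicant threshold, dollar figures, and gap-weighted allocation rule are all assumptions made for illustration.

```python
# Minimal sketch of threshold-based ad budget reallocation, loosely following the
# behavior described above. Job names, the threshold, and the gap-weighted
# allocation rule are hypothetical, not Essentia Health's actual configuration.

APPLICANT_THRESHOLD = 25  # hypothetical "you've got plenty of candidates" cutoff


def reallocate_ad_spend(applicant_counts: dict[str, int], total_budget: float) -> dict[str, float]:
    """Pause spend on jobs that hit the threshold and shift budget to thin pipelines."""
    underfilled = {job: n for job, n in applicant_counts.items() if n < APPLICANT_THRESHOLD}
    budget = {job: 0.0 for job in applicant_counts}
    if not underfilled:
        return budget  # every requisition has enough applicants; stop spending

    # One possible rule: weight spend by how far each job is from the threshold.
    gaps = {job: APPLICANT_THRESHOLD - n for job, n in underfilled.items()}
    total_gap = sum(gaps.values())
    for job, gap in gaps.items():
        budget[job] = round(total_budget * gap / total_gap, 2)
    return budget


if __name__ == "__main__":
    counts = {"ICU RN - Duluth": 3, "Pharmacy Tech - Fargo": 30, "Med Lab Scientist": 10}
    print(reallocate_ad_spend(counts, total_budget=5000.0))
    # Spend pauses on the fully subscribed role and concentrates on the thin pipelines.
```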

The AI investments have paid off, Higgins says. Essentia Health is now scheduling more than twice as many interviews and hiring 20% more people, he estimates. Plus, they've reduced the average time from application to an interview with the hiring leader from 10 or 15 days to five.

How AI can hurt

The risks in AI are well documented. Among them: hallucinations that confidently proffer incorrect information; encoded bias that amplifies the assumptions of the tech's human creators and perpetuates historical disparities reflected in training data; and privacy, copyright, consent, and labor issues stemming from whose work, art, or personal information is used to build the tech. Such pitfalls can translate to hiring misfires in several ways, according to Schellmann's research.

1. Using biased proxies for success

Calibrating tools solely on data from people who hold or have held the roles you're looking to fill can amplify longstanding bias, which has shown up as the underrepresentation of marginalized groups such as women, people of color, and disabled people in leadership roles, or in the organization at all.

"We see this again and again," Schellmann says. "These biased proxies come into play even though, you know, it may look facially neutral."

For her book, she spoke with numerous employment lawyers and industrial-organizational (IO) psychologists, who are often brought into the AI technology vetting process for due diligence.

"What some of them have found is biased keywords, broadly speaking, in resume screeners," she says. In one screener, for example, the name "Thomas" was a "proxy for success." So those who had that word on their resume (e.g., as part of their name) "got more points." Ditto for the hobby "baseball" as compared to softball, and the countries "Syria" and "Canada" compared to their counterparts around the world.  

"What this points to is problems in the training data and the way these tools are calibrated and the way these tools are monitored," Schellmann says. "There is no supervision" as the system keeps learning what success looks like from biased proxies even as organizations attempt to diversify their candidate pool.

"This is really concerning for employment lawyers," she says, because a court might consider such issues discrimination based on protected classes (e.g., national origin or gender).

And, what's more, because people are not a monolith, representation isn't always enough, especially in the realm of disabilities, which "express themselves so individually," Schellmann explains. For example, even if an autistic person is part of the training data, "that doesn't mean that their data would actually map onto the next autistic person," she says. "There's a real problem here, how a one-size-fits-all tool that is being used for hiring would work on people whose disability is expressed so individually and who are often not part of the training data, so I think that's a huge question that I'm not sure companies have started to grapple with."

2. Not measuring what you think you're measuring

A similar type of bias can play out in gamified assessments by elevating candidates who display similar behaviors, regardless of their relevance to success on the job—or whether a candidate's style of gameplay even translates to the real world.

Typically, candidates play through a set of simple tasks such as using the spacebar on their computer keyboard to pump up balloons as quickly as possible to collect money. To determine what constitutes success, the organization has people already in the given role play the game to identify what sets them apart from the general population. An employer might determine, for example, that their current accountants are comparatively bigger risk takers. And so the tool would start moving applicants who display that behavior during gameplay into the "yes" pile.
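A simplified sketch of that calibration step, with invented numbers, shows how a behavior can get baked into the benchmark simply because incumbents happen to exhibit it, whether or not it matters for the work:

```python
# Simplified sketch of the calibration step described above: compare incumbents'
# gameplay metrics against a general-population baseline and flag whatever stands
# out as a "success trait." All numbers are invented for illustration.
from statistics import mean, stdev

population = {"risk_taking": [0.42, 0.55, 0.38, 0.60, 0.47, 0.51, 0.44, 0.58],
              "pump_speed":  [2.1, 2.4, 1.9, 2.6, 2.2, 2.3, 2.0, 2.5]}
incumbents = {"risk_taking": [0.70, 0.66, 0.74, 0.69],
              "pump_speed":  [2.2, 2.5, 2.1, 2.4]}


def standout_traits(incumbent_scores, baseline_scores, z_cutoff=1.0):
    """Return traits where incumbents' average sits well above the population mean."""
    flagged = {}
    for trait, baseline in baseline_scores.items():
        z = (mean(incumbent_scores[trait]) - mean(baseline)) / stdev(baseline)
        if z > z_cutoff:
            flagged[trait] = round(z, 2)
    return flagged


print(standout_traits(incumbents, population))
# {'risk_taking': 2.57} -- the tool would then screen applicants for risk taking,
# whether or not that trait has anything to do with doing the job well.
```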

Using an assessment that relies on dexterity and speed to determine fit for a job that doesn't have those same requirements poses risks for discrimination, Schellmann says.

When she tested one such game with someone who has a physical disability, "he was really worried that, if people had a motor disability in the hands, that maybe they couldn't hit the spacebar as fast as possible, they would get rejected for the job, although they could absolutely do the job," she recalls. "Are we excluding people who may have a motor impairment or another disability for no reason?"

And though U.S. companies must offer reasonable accommodations, a common challenge among the disabled job seekers Schellmann interviewed is that "they don't necessarily know what awaits them on the other side" of a "start assessment" button, and thus what accommodation they might need.

Aside from the discrimination risks, being a "daredevil" in a video game doesn't necessarily translate to being a risk taker in real life and on the job, Schellmann explains.

One IO psychologist she spoke to described how a hospital found their most successful nurses were "way more compassionate than the general population." So the thought process for an organization looking to translate that finding to a gamified assessment might be, "okay, maybe we need to find people who are very compassionate, more compassionate than the general population, so we need to have a game that finds that," she says.

The problem is, when the IO psychologist went on to test the unsuccessful nurses, they, too, were far more compassionate than the general population. In other words, compassion set nurses apart, but it's not what made them successful. "Turns out, the successful nurses were more conscientious," Schellmann recalls.
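A tiny worked example, with invented scores, shows why that distinction matters: a trait shared by all nurses cannot separate the strong performers from everyone else.

```python
# Invented scores illustrating the confound above: compassion is elevated in all
# nurses relative to the general population, so it cannot distinguish successful
# nurses from unsuccessful ones; conscientiousness can.
scores = {
    "general population":  {"compassion": 50, "conscientiousness": 50},
    "unsuccessful nurses": {"compassion": 72, "conscientiousness": 55},
    "successful nurses":   {"compassion": 74, "conscientiousness": 68},
}

for group, traits in scores.items():
    print(f"{group:20}  compassion={traits['compassion']}  conscientiousness={traits['conscientiousness']}")

# Compassion: 72 vs. 74 across the two nurse groups -- screening on it mostly
# selects "nurse-like" candidates, not better ones.
# Conscientiousness: 55 vs. 68 -- the trait that actually tracks success here.
```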

How to make it work

This is the moment for People leaders "to feel very empowered," Schellmann says. It means asking tough questions early and often. "We have to be a whole lot more skeptical of these tools and that starts with asking very skeptical questions and not just believing the marketing language that vendors sell."

Go for the goal: To avoid succumbing to shiny-object syndrome, start by defining a goal and then looking for vendors and tech that can help achieve it.

For Essentia Health, that meant making the job search faster, easier, and more satisfying than what they could offer with their ATS's out-of-the-box capabilities.

It also meant meeting candidates where they're at: about 70% of visitors to Essentia Health's job site get there on their phones, so "if we rely on solutions that are PC-based, we lose," Higgins explains. It's why they've opted for a chatbot with robust SMS texting functionality for application completion.

For their part, Mercy looked for places where they could make swift, safe enhancements to the colleague experience. Launching Joy in the benefits space made sense because it presented a big opportunity with relatively low risk compared to a clinical use case. "We felt like it was something that we could do pretty accurately and that there was a benefit to our co-workers," says Kerry Bommarito, PhD, vice president of enterprise data science at Mercy.

Plus, the tech they had available at the time of the build—GPT-3.5 within Microsoft Azure—"allowed us to explore that branch of AI in a protected environment with no risk of protected health information (PHI) exposure or personal data being shared," she explains. They're now setting their sights on newer GPT models for clinical use cases.

Lead with values: Make sure vendors and external partners share "the ethos of your organization," Higgins advises. Essentia Health focuses much of its programmatic advertising on Indeed.com not only because of the job aggregator's marketplace dominance—about 40% to 50% of Essentia Health's applicants come from it—but also because the organizations have similar stances on AI accuracy, bias mitigation, and relevance in matching candidates to jobs, as well as social justice and DEI. Both, for example, emphasize fair-chance hiring for people who've previously been incarcerated or convicted of an offense. "The work they're doing from a charter, mission perspective aligns really well," Higgins says. "So we choose to partner with them versus maybe choosing to partner with other organizations that don't have … those kinds of deep commitments in place."

Start small: Consider a focused test before committing wholesale, Schellmann advises. "Can we do a pilot program where we don't make higher stakes decisions yet, but we test the tool in the background?" Also consider the "question of consent" by letting employees and job seekers know what data of theirs, and how much of it, is being used to train AI systems, she adds.

Get technical: Both Mercy and Essentia Health vetted vendors for safety and fit with their existing People technology ecosystems. "We do incredibly deep dives on things like IT security when we're looking at vendors" to ensure they meet internal privacy protection and compliance standards, Higgins says.

Also ask for a vendor's technical report, which shows how they built and validated the tool, including any steps they took to test for bias, Schellmann advises. If they don't have one, "that's a red flag."

Within the report—or the internal equivalent for tools built in house—look at the size and demographics of the test group and how that all matches up with your current and prospective employees. "Sample size matters," Schellmann says. "It also matters, you know, were they diverse enough? Was there a wide plethora of people of different ages, abilities, demographic backgrounds, genders?"

Don't stop at individual EEOC-protected classes; really probe how the tool treats the data of people with multiple marginalized identities. "We see a lot of tools start discriminating when you look at white men versus, for example, African American women," Schellmann says. "We don't see really a lot of scrutiny on that front."
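One way to operationalize that scrutiny, sketched below with invented counts, is to compute selection rates for intersectional groups, not just single protected classes, and flag any group whose rate falls below four-fifths of the top group's, a common EEOC rule of thumb. The cutoff, groups, and numbers here are illustrative assumptions, not a validated audit of any particular tool.

```python
# Sketch of an intersectional adverse-impact check: compute selection rates per
# intersectional group and flag any that fall below four-fifths of the top rate
# (a common EEOC rule of thumb). All counts are invented for illustration.

outcomes = {
    # (race, gender): (advanced_by_tool, total_applicants)
    ("white", "man"):   (60, 100),
    ("white", "woman"): (58, 100),
    ("Black", "man"):   (58, 100),
    ("Black", "woman"): (42, 100),
}


def adverse_impact(outcomes, ratio_cutoff=0.8):
    """Return groups whose selection rate falls below the cutoff ratio of the top group."""
    rates = {group: advanced / total for group, (advanced, total) in outcomes.items()}
    top = max(rates.values())
    return {group: round(rate / top, 2) for group, rate in rates.items() if rate / top < ratio_cutoff}


print(adverse_impact(outcomes))
# {('Black', 'woman'): 0.7} -- a disparity that stays hidden if you check race or
# gender on their own, since both single-axis ratios clear the 0.8 bar here.
```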

Break things: HR and talent leaders can "test these tools before they're being implemented," Schellmann says. You don't need a technical background. And, in fact, if you can break a tool without one, that's a bad sign. "If I can test these tools and they break on impact because I speak a different language to them or deep fake my voice and just type out answers, and that is not being detected, and I get a result," it means the tool might not work the way it's advertised, she explains.

Get guardrails: To supplement the content filters that Microsoft's Azure OpenAI service has in place to prevent inappropriate questions and answers, Bommarito's team programmed Joy using "retrieval-augmented generation." It means the bot only addresses questions that it can answer using information in Mercy's benefits documents. "It does minimize the risk of providing inaccurate information," Bommarito explains. When someone asks a question that's not benefits related, Joy is instructed to tell the co-worker that it doesn't know the answer and, as relevant, provide contact information for a live team member.
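For readers curious about the mechanics, here is a stripped-down sketch of that retrieval-augmented generation pattern: retrieve relevant passages from the benefits documents, answer only from them, and defer to a human when nothing matches. This is not Mercy's implementation; the sample documents are invented, the retrieval below is naive keyword overlap, and the generation step is a placeholder for whatever hosted model an organization uses.

```python
# Stripped-down sketch of retrieval-augmented generation for a benefits chatbot:
# answer only from retrieved benefits passages, otherwise defer to a human.
# Not Mercy's implementation; documents, retrieval, and the model call are placeholders.

BENEFITS_DOCS = [
    "Plan A covers 30 speech therapy visits per year with a $20 copay.",
    "Short-term disability Option 1 replaces 60% of salary for up to 12 weeks.",
]
FALLBACK = "I don't know the answer. Please contact the benefits team at [contact info]."


def retrieve(question: str, docs: list[str], min_overlap: int = 2) -> list[str]:
    """Return passages sharing at least `min_overlap` words with the question."""
    q_words = set(question.lower().split())
    return [d for d in docs if len(q_words & set(d.lower().split())) >= min_overlap]


def answer(question: str) -> str:
    passages = retrieve(question, BENEFITS_DOCS)
    if not passages:
        return FALLBACK  # off-topic or unanswerable: defer instead of guessing
    # Placeholder: a production bot would pass these passages to an LLM with a
    # prompt instructing it to answer strictly from the provided context.
    return f"Based on our benefits documents: {passages[0]}"


print(answer("How many speech therapy visits are covered?"))
print(answer("What's the best pizza near the hospital?"))  # falls back to a human
```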

"They really did take steps to make sure that Joy won't go rogue," Pierce says.

This was a vital consideration because even though benefits may be lower stakes than PHI, they aren't without risks. Take, for example, someone who may access Joy when choosing between Mercy's two plan options for short-term disability. "We want to make sure co-workers don’t feel forced into an election that doesn’t work for them," Pierce explains. "And so they have definitely taken the liberties of vetting the process to make sure that Joy gives informative information but isn't making decisions."

Train your people: Don't assume that everyone is excited about AI, let alone well practiced in tools like ChatGPT. "Guiding people in how to phrase questions is important," Bommarito says. She's teaching colleagues that, for the best results, they should talk with Joy like they would with a talent colleague on the phone and ask follow-up questions as needed to gain clarity or context. "You're not just going to type in the word 'speech therapy.' You're going to say, 'how many speech therapy visits are covered by this specific insurance plan?'" Bommarito says. Her team has created tipsheets, FAQs, and other resources to help colleagues get the hang of it. 

Part of this training is cueing people to the tech's limitations.

"Joy won't be able to solve everything," Pierce says. "There are things that take human empathy and compassion to deal with the situation and make thoughtful decisions. You know, HR is not black and white. We're gray, and that's our area of expertise. We deal in the gray, and you need people to deal in the gray."

Delaney Rebernik is a freelance editor for HealthLeaders.


KEY TAKEAWAYS

Many workers are concerned about how AI will affect their jobs—and for good reason, given the risks seen "again and again," a workplace AI expert tells HealthLeaders.

To ensure the talent tech you're using doesn't exacerbate disparities or do the exact opposite of what you hope it will, People leaders say to start small, ask tough questions, and test rigorously.

