Battling algorithmic bias

AI systems were meant to combat unconscious bias in the hiring process, but what they may have done is reinforce the importance of a human touch.

From the pages of In Business magazine.

Imagine applying for a new job as a “content writer.” Your current role as a “content creator” fulfills all of the requirements laid out in the job description, and you like your chances as you submit your resume. But you never even get a call for an interview. What gives?

There’s a chance — likely a better-than-good chance — that the software the hiring company is using to screen potential candidates never flagged your application for further review. Why? Because the algorithm scanning your resume was never told to search for anyone with the title “content creator.”
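
To make the mechanics concrete, here is a minimal sketch of the kind of exact-match screen such software can apply; the keyword list, function, and resume text are hypothetical illustrations, not any vendor’s actual code.

```python
# Hypothetical exact-match title screen: a resume is flagged for review
# only if it contains one of the configured titles verbatim.
REQUIRED_TITLE_KEYWORDS = {"content writer", "copywriter"}

def passes_title_screen(resume_text: str) -> bool:
    text = resume_text.lower()
    return any(keyword in text for keyword in REQUIRED_TITLE_KEYWORDS)

resume = "Content Creator with five years of experience writing long-form articles."
print(passes_title_screen(resume))  # False -- "content creator" was never added as a keyword
```

Nothing about the candidate changed; only the configured keywords decide the outcome.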

That’s just one example of the ways algorithmic biases are impacting the hiring decisions of many companies today.

According to Jobscan, an online tool that gives job seekers an instant analysis of how well their resume is tailored for a particular job, more than 98 percent of Fortune 500 companies utilize some form of applicant-tracking system, or ATS, to assist with human resources, recruitment, and hiring processes. It’s not just large companies using ATS, either. There are hundreds of applicant-tracking systems available to businesses of all sizes, each with different features suited to a company’s needs.

But wait — aren’t these artificial intelligence (AI) systems supposed to help mitigate bias in the recruitment, hiring, and evaluation process as companies work harder than ever to build diverse organizations? Yes, and quite often they do help remove some implicit (unconscious) biases from the process. But machines and algorithms are only as effective as their creators, and because humans aren’t perfect, neither are the AI systems companies are trusting with their hiring decisions.

“I’m in favor of automating the recruiting and hiring process for better job matching, for elevated candidate communications, and for cost and time-saving on both the part of the organization and the candidate,” Coreyne Woodman-Holoubek, CHRO, co-founder of Contracted Leadership and president of Disrupt Madison and Disrupt Milwaukee, told IB earlier this year. “Our economy will be vastly improved by incorporating AI and removing some of the human factor.”

Woodman-Holoubek adds that 45 million people in the U.S. change jobs each year, and that figure jumps to 180 to 225 million people worldwide. Every time someone changes jobs, there is a candidate search, a set of interviews, assessments, job testing, background checks, and an enormous amount of effort and time for face-to-face meetings that eventually yield an offer or two depending on the candidate pool. Once a candidate is hired, there is onboarding, which if done properly takes six months to a year. All of these steps can be automated with AI.

However, AI is still only as good as the people programming it. “In the HR tech world, we have a saying regarding the algorithms and learning of the AI — junk in is junk out,” Woodman-Holoubek states. “If we are expecting unbiased hiring from AI right now, we have to closely look at our organizations. We are nowhere near ridding our organizations of bias enough that a machine could learn to be unbiased from us. As well, how do we know that the coder or coders who built the underlying code do not have biases?”

Her comments were echoed in another IB interview with Angela Russell, vice president of diversity, equity, and inclusion for CUNA Mutual Group. With the information technology industry still largely male dominated, the potential for algorithmic bias is no remote possibility. “If you have one group of people just looking at things from one vantage point, you’re going to have bias,” Russell stated. “So, that reinforces the need to have a diverse workplace.”

Unfortunately, there’s no easy solution for eliminating algorithmic bias. But there are methods to cut down on the ways AI bias occurs, both from programming and implementation standpoints. Still, even knowing the problem exists isn’t always enough to prevent bias from happening.

Algorithmic biases are everywhere, not just in recruiting software. A report from May of this year by the Brookings Institution, a nonprofit public-policy organization based in Washington, D.C., notes that biases have also been found in word associations, online ads, facial recognition technology, and criminal justice algorithms. In the criminal justice case, an algorithm assigned higher risk scores to African-Americans than to whites who were equally likely to re-offend, resulting in longer periods of pretrial detention for African-American defendants.

Why AI?

There are a few reasons why a company would choose to utilize basic algorithms, applicant-tracking systems, or other AI products, notes Ashlie B. Johnson, PHR, owner of Madison-area Brooke Human Resource Solutions. One primary reason is that the “old-fashioned way” simply takes too long. For businesses that are screening and hiring large numbers of candidates on a regular basis, using algorithmic software can greatly reduce the time recruiters spend reviewing candidates who clearly do not have the required skills or educational background.

Companies also choose to go with software that can help manage the recruiting process because of the prevailing belief that ATS removes bias from the selection process. “Computers don’t care what your name is or how your resume is formatted; they simply care if you have the necessary skill to perform the task at hand,” says Johnson. “Many employers also feel that using these systems protects them from claims of discrimination in the hiring process. A word of caution here, though — these systems are still built by humans, so if there is bias in the creation of the criteria, there will be bias in the outcomes.”

Amazon offers a perfect case study of AI run amok. According to an October 2018 Reuters report, Amazon spent years creating an internal system to automate the recruiting process. The plan was for the company to be able to input a large group of resumes and have the system automatically compile the top candidates. No fuss, no muss. To accomplish this, Amazon provided the system with a decade’s worth of resumes from people applying for jobs at the tech giant.

What Amazon quickly realized was that the system was improperly favoring male candidates over female ones, a symptom of the tech industry being historically dominated by men. When the company discovered that the system had taught itself to downgrade resumes containing the word “women’s,” it edited the program to be more gender neutral.

Unfortunately, Amazon’s program continued learning from the data it had been given and concluded that resumes with words like “executed” and “captured,” words apparently used more often in resumes submitted by men, should be ranked more highly. Amazon scrapped the program in 2017, and the company said it “was never used by Amazon recruiters to evaluate candidates.”
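
For readers curious about how a system “teaches itself” such a bias, the following toy sketch in Python (using scikit-learn) shows how a model trained on skewed historical outcomes can attach a penalty to a gendered word. The resumes and labels here are fabricated for illustration only and bear no relation to Amazon’s actual data or code.

```python
# Toy illustration: biased historical labels teach the model a gendered penalty.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "executed product launch, captained robotics team",
    "captured market share, executed trading strategy",
    "women's chess club captain, executed research project",
    "women's coding society lead, managed research project",
]
hired = [1, 1, 0, 0]  # hypothetical outcomes skewed against resumes mentioning "women's"

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# The learned weight for the token "women" comes out negative -- the model
# has converted a gendered word into a scoring penalty.
idx = vectorizer.vocabulary_["women"]
print(model.coef_[0][idx])
```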

Amazon isn’t the only major tech company to experience fallout from well-meaning but ultimately poorly executed AI systems. Microsoft famously unveiled a Twitter chatbot named “Tay” in March 2016, only for internet trolls to derail the system in less than 24 hours by training it to send racist, misogynistic, and otherwise offensive tweets out to the world.

Also in 2016, Facebook’s AI systems mistakenly declared many living users dead, posting memorialization notices on their personal pages. The glitch was speculated to have been triggered by the condolences that supporters of presidential candidate Hillary Clinton received after her general election defeat.

Google also got in hot water in November 2017 when it was reported that its Google Translate AI algorithms were sexist. In one example, the algorithm translated the gender-neutral Turkish third-person pronoun “o” into English as “he” or “she” along stereotypical lines, producing sexist translations like “he is a doctor,” “she is a nurse,” “he is hard working,” or “she is lazy.”

Nevertheless, Johnson says these systems can be beneficial to candidates because they can often send out faster responses on whether the candidate has been chosen to move forward in the process or has been passed over.

“In some systems, the candidate can also easily get answers to questions about the hiring process and the company as a whole,” notes Johnson. “These systems can also be programmed to create relationships with passive candidates who may have their resume posted on the internet, but who haven’t applied to that company’s position yet. There are new tools on the market, such as Mya, that use machine learning techniques to screen resumes, schedule and conduct interviews, notify candidates of other opportunities based on their skills, and guide them through the steps.”

Generally speaking, Johnson says that recruitment can be the most time-consuming and tedious task a busy HR manager faces day to day, and even recruiting firms that do nothing but recruit all day often use AI systems.

“Any tool that can help us automate and streamline that process is usually welcome,” Johnson explains. “However, I do think that there is some fear among HR professionals that this automation is inadvertently weeding out good candidates based on programming and criteria that the HR team is rarely a part of creating. The ability to look at someone’s resume and determine that, while they may not have direct skill correlation to the job posting, they still have other relevant experience or show promise for training on the job, is still uniquely human.”

Here Johnson shares a personal story involving a family member.

“My brother-in-law is a recent college graduate working in the sciences,” she begins. “He applied to a large company but got no response. By chance, I happen to socially know a director at that company and I asked her, as a personal favor to me, if she might be able to find out why he was passed over and what he might do in order to be considered for a future opportunity.

“She spoke with the position manager who said that he had never even seen my brother-in-law’s resume,” Johnson continues. “They then went to the HR department and found out that his resume didn’t pass the initial keyword filter in the company’s applicant-tracking software. After HR and the manager reviewed his resume, they realized that although he had limited experience, his education was applicable, and he was afforded the opportunity to interview. He was ultimately hired and has been successful in his role.

“This is a prime example of how these systems can overlook indirectly qualified candidates, and how knowing the right people can still be very valuable,” adds Johnson.

Hiring for skills, not titles

Changing the way companies think about hiring candidates could be one solution to the algorithmic bias problem.

Cincinnati-based tilr, which launched in Madison just over a year ago, offers a program that connects temporary, project, and temp-to-perm workers to jobs using an algorithm focused only on skills.

“tilr’s mission is to eliminate bias in recruitment, and our vision is to close the global skill gap,” says Carisa Miklusak, CEO. “Titles have been the fundamental building block of recruitment throughout history, and it’s time for that to change. Whereas titles screen people out, skills screen people in.”

According to Miklusak, tilr seeks to combat bias in recruitment by teaching the company’s algorithms to ignore things like gender, race, sexual orientation, disability, and veteran status, and instead to focus purely on filling positions based on a person’s skills, availability, travel preferences, and interests.
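
A minimal sketch of what such skills-first matching can look like in code follows; the data model and fields are illustrative assumptions, not tilr’s actual implementation.

```python
# Hypothetical skills-first matcher: demographics never enter the comparison.
from dataclasses import dataclass

@dataclass
class Worker:
    worker_id: int          # opaque ID -- no name, gender, or age on the record
    skills: set[str]
    available: bool
    max_travel_miles: int

@dataclass
class Job:
    required_skills: set[str]
    travel_miles: int

def matches(worker: Worker, job: Job) -> bool:
    """Match purely on skills, availability, and travel preference."""
    return (worker.available
            and job.required_skills <= worker.skills
            and job.travel_miles <= worker.max_travel_miles)

job = Job(required_skills={"forklift", "inventory"}, travel_miles=10)
w = Worker(worker_id=42, skills={"forklift", "inventory", "welding"},
           available=True, max_travel_miles=25)
print(matches(w, job))  # True
```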

“Bias developing in algorithms is a real concern if algorithms are left to learn on their own,” notes Miklusak. “We’ve seen this impact Amazon’s efforts in algorithmic hiring, and we see it in everyday activities like the automatic sink not picking up that you’re there because you’re wearing black. Interestingly, what’s required to combat bias in algorithms is human analysis, which is the very thing that has traditionally led to bias. Fair code requires very diverse testing sets, codes, or monitors that factor fairness into coding, as well as constant monitoring of the machine-learning efforts.”
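
One widely used monitoring check (not one Miklusak names, but standard in U.S. hiring compliance) is the EEOC’s “four-fifths rule,” which flags adverse impact when any group’s selection rate falls below 80 percent of the highest group’s rate. A minimal sketch, with hypothetical counts:

```python
# "Four-fifths rule" check: flag any group whose selection rate is below
# 80 percent of the best-performing group's rate. Counts are hypothetical.

def adverse_impact(outcomes: dict[str, tuple[int, int]], threshold: float = 0.8) -> list[str]:
    """outcomes maps group -> (selected, applied); returns groups flagged for adverse impact."""
    rates = {g: sel / applied for g, (sel, applied) in outcomes.items()}
    best = max(rates.values())
    return [g for g, rate in rates.items() if rate / best < threshold]

screened = {"group_a": (48, 100), "group_b": (27, 100)}
print(adverse_impact(screened))  # ['group_b'] -- 27% is well under 80% of 48%
```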

“If a combined effort is possible, I think it is likely the best solution,” notes Johnson. “Using algorithms to weed out clearly unfit candidates, but still allowing human eyes to review those resumes that might have indirect potential is ideal, in my opinion. If a business is interested in utilizing this technology to reduce bias, it is also possible for applicant-tracking systems to initially screen and then provide the approved resumes to managers without names, addresses, or dates that might influence bias. I think that this can provide the best of both worlds in the recruitment process.”
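
A minimal sketch of the blind hand-off Johnson describes; the field names and candidate record are hypothetical:

```python
# Hypothetical blind screening: the ATS passes approved resumes to managers
# with identifying fields stripped and a number standing in for the identity.

def redact_for_review(candidate: dict, candidate_number: int) -> dict:
    hidden = {"name", "address", "birth_date", "graduation_year", "photo_url"}
    blinded = {k: v for k, v in candidate.items() if k not in hidden}
    blinded["candidate_number"] = candidate_number
    return blinded

applicant = {
    "name": "Jane Doe",
    "address": "123 Main St, Madison, WI",
    "graduation_year": 1998,
    "skills": ["python", "sql"],
    "years_experience": 12,
}
print(redact_for_review(applicant, candidate_number=7))
# {'skills': ['python', 'sql'], 'years_experience': 12, 'candidate_number': 7}
```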

Programming particulars

For Johnson, one way to help improve AI recruitment algorithms would be to have the people who do the hiring more involved in the process of writing the code.

“I do think that there is always room to leverage the experience of professionals to make programming better,” Johnson says. “However, at the end of the day, these systems are built by humans and are therefore always subject to inadvertent, or in the case of true AI systems, learned bias.”

Mitigating bias is core to tilr’s reliability with applicants and hiring companies, says Miklusak, and the company has taken great steps to eradicate bias in not just recruitment, but also in its algorithms.

For tilr, it begins by assembling a diverse team featuring people with very different backgrounds, experiences, and perspectives. “We ask this team to look for bias and keep thoughtful development at the forefront of our priorities,” explains Miklusak. “Next, as a core development philosophy, we do not look at traditional demographics for analysis or growth strategy. Rather, we study behavioral factors that increase our ability to fulfill client demand, such as attendance, skills gained, shifts worked, and performance ratings. Finally, we monitor machine learning and decisions, constantly applying critical thinking and diverse analysis to approval.

“The largest challenge across all industries is making certain that algorithms and machine learning are monitored by diverse groups with social good intentions,” Miklusak continues. “Just ensuring they are monitored period, and not left to learn and decide on their own, is critical to so many areas of our life. Think about the insurance or loans we are offered, whether we get into a school, and, of course, if we get a job.”

Johnson says the first step to mitigating hiring bias is to assess the recruitment team, and then engage in training and other activities to reduce bias from the human side. Using pro-diversity language in your job descriptions can also be very helpful in attracting a diverse pool of candidates from which to choose, notes Johnson. Another very important step is to develop a scoring or evaluation system that deals only in hard facts like time worked in a position, education, and skill with a particular tool.

“Giving candidates numbers and removing any age, gender, or other information that could influence the hiring team can also help remove bias,” says Johnson. “Develop your hiring team from a diverse pool, as well. Having a diverse team can combat unconscious bias, and it also helps uncover and fix any blind spots in the interviews and eliminates decisions made on ‘gut feeling.’ Developing a set of interview questions that is scripted and presented to each candidate in the same way also balances the field for candidates.”
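
A minimal sketch of the hard-facts rubric Johnson describes; the criteria and weights here are hypothetical, chosen only to show that every candidate is scored on identical, objective inputs:

```python
# Hypothetical hard-facts rubric: identical, objective criteria for everyone.
RUBRIC = {
    "years_in_role": 2.0,     # points per year, capped at 10 years
    "required_degree": 10.0,  # flat points if the degree is held
    "tool_proficiency": 5.0,  # points per required tool mastered
}

def score(candidate: dict) -> float:
    return (min(candidate["years_in_role"], 10) * RUBRIC["years_in_role"]
            + (RUBRIC["required_degree"] if candidate["required_degree"] else 0.0)
            + candidate["tools_mastered"] * RUBRIC["tool_proficiency"])

print(score({"years_in_role": 4, "required_degree": True, "tools_mastered": 2}))  # 28.0
```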

Ultimately, Johnson doesn’t believe that AI systems themselves are to blame for hiring inconsistencies. “I think they can be very useful in the search for qualified candidates,” she explains. “The improvement needs to come from the users of these systems. If your company utilizes an applicant-tracking system that is eliminating candidates based on an algorithm, make sure you understand the criteria that that algorithm is using. Taking time to truly understand the technology behind the curtain is key to using the power of these systems to your best benefit.

“Work with your hiring team to understand what truly is a ‘must-have’ skill for a candidate and what might be learned on the job, then use the system to make sure that the items that are not ‘must have’ are not being filtered out. This will allow a larger pool of candidates to come through.”

If an employer truly fears that their system is more of a hindrance than a help, Johnson recommends taking steps to test that theory. Let the system review resumes and then have the hiring team review all of the same resumes and see if they come up with the same pool of candidates. If not, investigate the differences and how they can be mitigated.
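
A minimal sketch of that audit; the candidate IDs below are hypothetical:

```python
# Compare the ATS's pool against the hiring team's pool over the same resumes.

def pool_audit(system_pool: set[str], human_pool: set[str]) -> dict[str, set[str]]:
    return {
        "agreement": system_pool & human_pool,
        "system_only": system_pool - human_pool,  # possible false positives
        "human_only": human_pool - system_pool,   # candidates the ATS may be wrongly filtering out
    }

ats_picks = {"c101", "c104", "c107"}
team_picks = {"c101", "c104", "c109", "c112"}
for bucket, ids in pool_audit(ats_picks, team_picks).items():
    print(bucket, sorted(ids))
```

A large “human_only” bucket is the signal Johnson warns about: good candidates being weeded out by criteria the HR team never set.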

Beating the bots

According to online resume resource Jobscan, most companies today use some kind of applicant-tracking system (ATS) that can scan the content of a resume to make it searchable during the hiring process.

Some systems, like Taleo, will even automatically filter and rank applicants based on the job description, meaning highly qualified applicants might get overlooked if their resume isn’t optimized with the right keywords.

With the ATS acting as a gatekeeper, candidates can get their foot in the door by tailoring their resume not just to the job for which they’re applying, but to the specific language of the job description.

What does that mean? For starters, candidates should pay careful attention to the language and specific words used often throughout a job description, and then mirror that language in their own resume. Jobscan notes that levels of sophistication vary between ATS, but most cannot recognize synonyms, abbreviations, or similar skills as equivalent, so even slight variations can cause a resume to be passed over by the AI.

Jobscan offers the following example: “If you’re a graphic designer, you probably have a lot of experience with Adobe Creative Cloud, Adobe’s software bundle that includes standards like Photoshop and Illustrator. But say the job description mentions only ‘Adobe Creative Suite,’ its former name.

“If you have ‘Adobe Creative Cloud’ on your resume but the hiring manager searches the ATS for ‘Adobe Creative Suite,’ you could be excluded from the results even though you possess the exact skillset they’re looking for. In this example, you want to optimize your Adobe experience by changing the resume keyword to ‘Adobe Creative Suite.’”

Similarly, it’s OK to edit your job titles to fit the terms used in the job description, so long as it’s not an outright lie. As Jobscan explains, “There’s nothing wrong with changing your ‘official’ job title on your resume. All you’re doing is translating past experience into the same language as a hiring company. This is what optimizing your resume keywords is all about.”

Finally, while some ATS recognize tenses, plurals, and other word variations, most only find exact matches. So, if a recruiter searches for “project manager,” your resume won’t make it to the top of the pile if it only includes the phrases “managing projects,” “project managed,” and “project management.”

Notes Jobscan, “The best practice for determining which tense or form to use with your resume keywords is to mirror the job description. If ‘manager’ is used frequently in the job description but your resume says, ‘Managed team of 11 engineers,’ simply rewrite it to say, ‘Manager to a team of 11 engineers.’”
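
A minimal sketch of why exact matching misses close word forms, plus the kind of crude normalization step a more forgiving matcher could apply; the logic is illustrative, not any particular ATS’s:

```python
# Naive ATS search is an exact substring match, so "project management"
# fails a search for "project manager" even though the skill is the same.

def exact_match(resume: str, query: str) -> bool:
    return query.lower() in resume.lower()

def normalized_match(resume: str, query: str) -> bool:
    # Crude stemming: trim common suffixes so "manager"/"management"/"managed" collide.
    def stem(word: str) -> str:
        for suffix in ("ement", "er", "ed", "ing"):
            if word.endswith(suffix):
                return word[: -len(suffix)]
        return word
    stems = {stem(w) for w in resume.lower().split()}
    return all(stem(w) in stems for w in query.lower().split())

resume = "Led project management for Adobe Creative Cloud rollouts"
print(exact_match(resume, "project manager"))       # False -- exact form missing
print(normalized_match(resume, "project manager"))  # True -- same stems
```

Until the systems reading your resume do this kind of normalization, Jobscan’s advice stands: mirror the job description’s exact wording.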
