
Battling algorithmic bias

AI systems were meant to combat unconscious bias in the hiring process, but they may instead have reinforced the importance of a human touch.

From the pages of In Business magazine.

Imagine applying for a new job as a “content writer.” Your current role as a “content creator” fulfills all of the requirements laid out in the job description, and you like your chances as you submit your resume. Instead, you never even get a call for an interview. What gives?

There’s a chance — likely a better-than-good chance — that the software the hiring company is using to screen potential candidates never flagged your application for further review. Why? Because the algorithm scanning your resume was never told to search for anyone with the title “content creator.”
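
To see how easily this happens, consider a minimal sketch of a keyword-based screen. The job titles and matching rule below are illustrative assumptions, not the logic of any actual ATS product.

```python
# Minimal sketch of a keyword-based resume screen. The target titles and
# matching rule are illustrative assumptions, not any vendor's actual logic.

TARGET_TITLES = {"content writer", "copywriter"}  # titles the recruiter configured

def passes_screen(resume_text: str) -> bool:
    """Flag a resume for human review only if it contains a configured title."""
    text = resume_text.lower()
    return any(title in text for title in TARGET_TITLES)

# A fully qualified candidate is silently dropped because nobody told the
# screen that "content creator" is an equivalent title.
resume = "Content creator with five years of experience writing web copy..."
print(passes_screen(resume))  # False -- never flagged for an interview
```

Nothing in that logic is malicious; the bias enters through omission, because nobody enumerated the equivalent titles.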

That’s just one example of how algorithmic biases shape the hiring decisions of many companies today.

According to Jobscan, an online tool that gives job seekers an instant analysis of how well their resume is tailored for a particular job, more than 98 percent of Fortune 500 companies use some form of applicant-tracking system (ATS) to assist with human resources, recruitment, and hiring. It’s not just large companies using ATS, either. There are hundreds of applicant-tracking systems available to businesses of all sizes, each with different features suited to a company’s needs.

But wait — aren’t these artificial intelligence (AI) systems supposed to help mitigate bias in the recruitment, hiring, and evaluation process as companies work harder than ever to build diverse organizations? Yes, and quite often they do help remove some implicit (unconscious) biases from the process. But machines and algorithms are only as effective as their creators, and because humans aren’t perfect, neither are the AI systems companies are trusting with their hiring decisions.

“I’m in favor of automating the recruiting and hiring process for better job matching, for elevated candidate communications, and for cost and time savings on the part of both the organization and the candidate,” Coreyne Woodman-Holoubek, CHRO, co-founder of Contracted Leadership and president of Disrupt Madison and Disrupt Milwaukee, told IB earlier this year. “Our economy will be vastly improved by incorporating AI and removing some of the human factor.”

Woodman-Holoubek adds that 45 million people in the U.S. change jobs each year, and that figure jumps to 180 to 225 million people worldwide. Every time someone changes jobs, there is a candidate search, a set of interviews, assessments, job testing, background checks, and an enormous amount of effort and time for face-to-face meetings that eventually yield an offer or two depending on the candidate pool. Once a candidate is hired, there is onboarding, which if done properly takes six months to a year. All of these steps can be automated with AI.

However, AI is still only as good as the people programming it. “In the HR tech world, we have a saying regarding the algorithms and learning of the AI — junk in is junk out,” Woodman-Holoubek states. “If we are expecting unbiased hiring from AI right now, we have to look closely at our organizations. We are nowhere near ridding our organizations of bias enough that a machine could learn to be unbiased from us. As well, how do we know that the coder or coders who built the underlying code do not have biases?”

Her comments were echoed in another IB interview with Angela Russell, vice president of diversity, equity, and inclusion for CUNA Mutual Group. With the information technology industry still largely male-dominated, algorithmic bias is no remote possibility. “If you have one group of people just looking at things from one vantage point, you’re going to have bias,” Russell stated. “So, that reinforces the need to have a diverse workplace.”

Unfortunately, there’s no easy solution for eliminating algorithmic bias. But there are methods to cut down on the ways AI bias occurs, both from programming and implementation standpoints. Still, even knowing the problem exists isn’t always enough to prevent bias from happening.

Algorithmic biases are also everywhere, not just in recruiting software. A report from May of this year by the Brookings Institution, a nonprofit public-policy organization based in Washington, D.C., notes that biases have also been found in word associations, online ads, facial recognition technology, and criminal justice algorithms. The last of these assigned higher risk scores to African-Americans than to whites who were equally likely to re-offend, resulting in longer periods of pretrial detention for African-American defendants.

Why AI?

There are a few reasons why a company would choose to utilize basic algorithms, applicant-tracking systems, or other AI products, notes Ashlie B. Johnson, PHR, owner of Madison-area Brooke Human Resource Solutions. One primary reason is that the “old-fashioned way” simply takes too long. For businesses that screen and hire large numbers of candidates on a regular basis, algorithmic software can greatly reduce the time recruiters spend reviewing candidates who clearly lack the required skills or educational background.

Companies also choose software that helps manage the recruiting process because of the prevailing belief that an ATS removes bias from the selection process. “Computers don’t care what your name is or how your resume is formatted; they simply care if you have the necessary skill to perform the task at hand,” says Johnson. “Many employers also feel that using these systems protects them from claims of discrimination in the hiring process. A word of caution here, though — these systems are still built by humans, so if there is bias in the creation of the criteria, there will be bias in the outcomes.”

Amazon offers a perfect case study of AI run amok. According to an October 2018 Reuters report, Amazon spent years creating an internal system to automate the recruiting process. The plan was for the company to be able to input a large group of resumes and have the system automatically compile the top candidates. No fuss, no muss. To accomplish this, Amazon provided the system with a decade’s worth of resumes from people applying for jobs at the tech giant.

What Amazon quickly realized was that the system was improperly favoring male candidates over female ones, a symptom of the tech industry being historically dominated by men. When the company noticed the problem, namely that the system had taught itself to downgrade resumes containing the word “women’s,” it took steps to correct it by editing the program to be more gender neutral.

Unfortunately, Amazon’s program continued learning from the information it had been given and concluded that resumes containing words like “executed” and “captured,” which were apparently used more often by male applicants, should be ranked more highly. Amazon scrapped the program in 2017, and the company said it “was never used by Amazon recruiters to evaluate candidates.”
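
The failure mode generalizes well beyond Amazon: any model fit to skewed historical outcomes will reward whatever words happen to correlate with those outcomes. Here is a toy sketch of that dynamic; the resumes, labels, and scoring rule are invented for illustration, since Amazon’s actual system and training data are not public.

```python
# Toy sketch of how a model fit to skewed historical outcomes learns to
# penalize proxy words. The resumes, labels, and scoring rule are invented
# for illustration; Amazon's actual system and training data are not public.
from collections import defaultdict

history = [  # (words on resume, was the candidate historically hired?)
    ({"executed", "captured", "engineering"}, True),
    ({"executed", "led", "engineering"}, True),
    ({"women's", "chess", "engineering"}, False),
    ({"women's", "coding", "club"}, False),
]

hired = defaultdict(int)
seen = defaultdict(int)
for words, label in history:
    for word in words:
        seen[word] += 1
        hired[word] += label

def score(resume_words):
    """Average historical hire rate of the words on a resume."""
    rates = [hired[w] / seen[w] for w in resume_words if w in seen]
    return sum(rates) / len(rates) if rates else 0.5

print(score({"executed", "engineering"}))  # ~0.83 -- rewarded
print(score({"women's", "engineering"}))   # ~0.33 -- the word itself is penalized
```

No one told this toy model to penalize “women’s”; the skew in the historical labels did that on its own.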

Amazon isn’t the only major tech company to experience fallout from well-meaning but ultimately poorly executed AI systems. Microsoft famously unveiled a Twitter chat bot named “Tay” in March 2016, only for internet trolls to derail the system in less than 24 hours by training it to send racist, misogynistic, and otherwise offensive tweets out to the world.

Also in 2016, Facebook’s AI systems accidentally declared many of its living users dead, posting memorialization notices on their personal pages. The glitch was speculated to be a reaction to the wave of condolences exchanged among supporters of presidential candidate Hillary Clinton after her general election defeat.

Google also landed in hot water in November 2017, when it was reported that its Google Translate algorithms were sexist. In one example, Google’s translation algorithm took the Turkish third-person pronoun “o,” which does not mark gender, and rendered it in English as “he” or “she,” producing sexist translations like “he is a doctor,” “she is a nurse,” “he is hard working,” and “she is lazy.”
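
The mechanism here is statistical rather than editorial. When the source pronoun carries no gender, a likelihood-maximizing translator simply picks whichever gendered English sentence appeared more often in its training text, as this toy sketch with invented counts suggests. Real neural translation is far more complex, but the incentive is the same.

```python
# Toy sketch of why likelihood-maximizing translation yields gendered
# pronouns: with no gender in the source, the decoder picks whichever
# English sentence was more common in training text. Counts are invented.

corpus_counts = {
    "he is a doctor": 9120, "she is a doctor": 2034,
    "he is a nurse": 481, "she is a nurse": 7765,
}

def translate_o(predicate: str) -> str:
    """Render gender-neutral Turkish 'o <predicate>' as the likelier English sentence."""
    he, she = f"he {predicate}", f"she {predicate}"
    return he if corpus_counts.get(he, 0) >= corpus_counts.get(she, 0) else she

print(translate_o("is a doctor"))  # "he is a doctor"
print(translate_o("is a nurse"))   # "she is a nurse"
```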

Nevertheless, Johnson says these systems can benefit candidates because they can often send out faster responses on whether a candidate has been chosen to move forward in the process or has been passed over.

“In some systems, the candidate can also easily get answers to questions about the hiring process and the company as a whole,” notes Johnson. “These systems can also be programmed to create relationships with passive candidates who may have their resume posted on the internet, but who haven’t applied to that company’s position yet. There are new tools on the market, such as Mya, that use machine learning techniques to screen resumes, schedule and conduct interviews, notify candidates of other opportunities based on their skills, and guide them through the steps.”
