AS ORGANIZATIONS AROUND THE WORLD WORK TO ELIMINATE SYSTEMIC RACISM AND BIAS FROM ALL OF THEIR HIRING PRACTICES, AI TOOLS CAN OFFER VALUABLE, EFFICIENT, AND AFFORDABLE HELP.
Humans carry hundreds of snap judgments, learned biases, and unconscious preferences. These can lead people to make discriminatory choices even when they think they’re being fair. As a result, companies have been turning to Artificial Intelligence (AI) systems for help with equalizing recruitment and hiring processes. Unfortunately, an AI system trained by biased humans or on a historically biased data set can turn out to be just as biased as the humans themselves.
Fortunately, there are proven ways to strip systemic and unconscious bias out of AI systems for recruitment and hiring, allowing companies to realize the true benefits of AI talent screening tools. According to Knockri, those benefits can include a 23% increase in diverse hires and a 62% decrease in hiring costs. So how can organizations ensure their AI systems are unbiased? Below are five ways to remove bias from AI so that hiring teams can bring a genuinely neutral perspective on talent to their own AI training programs.
REMEMBER THE AI SYSTEM IS WHAT IT EATS, AND FEED IT WISELY
AI systems begin as truly neutral blank slates. What they eat – the data sets that are fed into them – determines what they become. Unfortunately, not all organizations have been as careful as they need to be about what their AI systems are using as input.
A common mistake is to feed the AI system the resumes of past hires. On one hand, this is a quick way to make sure the AI system learns what successful candidates look like on paper. On the other hand, if the company is turning to AI because of a history of biased hiring, this data set will train the new AI system to be just as biased as the previous “pure human” screening process.
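To make the point concrete, here is a minimal sketch of a pre-training sanity check. It assumes the past-hire records live in a pandas DataFrame with a self-reported gender column; the column name and the 30% threshold are illustrative assumptions, not part of any particular vendor’s pipeline.

```python
# A minimal sketch: flag a training set whose composition is badly skewed
# before it is fed to any screening model. Column names and the threshold
# below are hypothetical.
import pandas as pd

def composition_report(training_df: pd.DataFrame, group_col: str = "gender",
                       min_share: float = 0.30) -> pd.Series:
    """Return each group's share of the training set and warn on heavy skew."""
    shares = training_df[group_col].value_counts(normalize=True)
    for group, share in shares.items():
        if share < min_share:
            print(f"WARNING: '{group}' is only {share:.0%} of the training data; "
                  "the model may learn to favor the majority group.")
    return shares

# Example usage with a toy data set of past hires
past_hires = pd.DataFrame({"gender": ["M"] * 85 + ["F"] * 15})
print(composition_report(past_hires))
```

A check like this does not fix a skewed data set by itself, but it forces the skew to be visible before the model ever learns from it.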
TEST ALGORITHMIC PROGRAMS ON PRACTICE GROUPS BEFORE GOING LIVE WITH THEM
Test. Test. Test! It can’t be said often enough that AI systems need to be tested before they’re rolled out to an entire organization or used as a 100% replacement for human staff members.
For an example of why, look to Amazon. In 2015, the company was testing an AI program to screen applicants and rank potential new hires. However, the program was trained on the resumes of past hires, which skewed extremely male and pale. As a result, the AI program learned that female candidates were less likely to be selected and began downgrading resumes that signaled an applicant was a woman, such as those mentioning women’s colleges or organizations.
Luckily, human program monitors noticed this during the testing and early adoption phase and were able to demonstrate the program’s bias before it was rolled out across the company. Their testing saved Amazon from a potential PR and hiring disaster, and the program was ultimately scrapped before it could damage Amazon’s brand and bottom line.
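One simple check a testing phase can include is the “four-fifths rule” from U.S. adverse-impact guidance: flag any group whose selection rate falls below 80% of the best-performing group’s rate. The sketch below is a hedged illustration, assuming the screener’s pass/fail decisions on a practice group are available alongside self-reported gender; it is not drawn from Amazon’s actual test process.

```python
# A minimal sketch of a pre-launch adverse-impact check on a practice group.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, passed_bool) tuples from a test run."""
    passed, total = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        total[group] += 1
        passed[group] += int(ok)
    return {g: passed[g] / total[g] for g in total}

def adverse_impact(decisions, threshold=0.8):
    """Return groups whose selection rate is below 80% of the best group's rate."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: r / best for g, r in rates.items() if r / best < threshold}

# Toy test run: 100 male and 100 female practice candidates
test_run = [("M", True)] * 60 + [("M", False)] * 40 + \
           [("F", True)] * 25 + [("F", False)] * 75
print(adverse_impact(test_run))  # {'F': 0.416...} -> investigate before launch
```

If a check like this fires on the practice group, the program goes back for retraining rather than out to the whole organization.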
USE AI SYSTEMS AS SCREENERS, NOT SCANNERS
Having human program monitors involved helps AI systems serve as true assistive screeners rather than pure scanners. AI scanners can be fooled by keywords, facial tics in videos, and generational differences; they’re less reliable than one might hope. AI systems used as early applicant screeners, however, provide a more efficient way to gather baseline qualitative information from candidates.
As an example, consider initial interviews. AI programs can conduct simple first-round interviews with a large pool of candidates over email, analyze the written or recorded responses, and generate predictive capability reports for HR teams. This helps human program monitors focus on skills while keeping candidates shielded and anonymous, eliminating visual and auditory biases. As a result, this type of screening can lower recruiters’ time-to-fill metric by as much as 68%, according to research from the Medill School of Journalism, Media, Integrated Marketing Communications at Northwestern University.
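As a rough illustration of how written first-round answers can be kept shielded and anonymous before they reach a scoring model or a human reviewer, the sketch below strips a candidate’s name and contact details from a response. The regular expressions and placeholder labels are assumptions for demonstration, not a production-grade redaction pipeline.

```python
# A minimal sketch: redact identifying details from a written interview answer
# so the downstream screener only sees the substance of the response.
import re

EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")
PHONE = re.compile(r"\b(?:\+?\d{1,2}[\s.-]?)?(?:\(?\d{3}\)?[\s.-]?)\d{3}[\s.-]?\d{4}\b")

def anonymize_response(text: str, candidate_name: str) -> str:
    """Strip the candidate's name, email, and phone number from an answer."""
    text = re.sub(re.escape(candidate_name), "[CANDIDATE]", text, flags=re.IGNORECASE)
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

answer = ("I'm Jane Doe (jane.doe@example.com, 555-123-4567) and I led the "
          "migration of our billing system to a new platform.")
print(anonymize_response(answer, "Jane Doe"))
```

The same idea extends to transcripts of recorded answers, so that reviewers and models evaluate what was said rather than who said it.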
ELIMINATE METRICS AROUND VOCAL SPEED
Another small but impactful adjustment human teams can make to keep bias out of their AI systems is to ensure the programs aren’t using vocal speed as an assessment metric when screening recorded or filmed applicants. While this may seem like a reasonable thing to measure, since it can indicate confidence and familiarity with the subject matter, it can also inadvertently discriminate by gender.
Researchers at HireVue, Inc. found that men tend to talk faster than women in interview settings. Thus, when vocal speed was used as a metric, men received more favorable scores than women, all other factors being equal. As a result, the company adjusted its algorithm to remove the vocalization speed metric from candidate assessments so as not to unduly penalize female applicants.
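In practice, the fix can be as simple as making sure the speech-rate signal never reaches the scoring model. The sketch below assumes a candidate’s assessment features arrive as a plain dictionary; the feature names are hypothetical and the approach is a generic illustration, not HireVue’s actual implementation.

```python
# A minimal sketch: drop assessment metrics known to correlate with gender
# (such as vocal speed) before the candidate's features are scored.
BIASED_FEATURES = {"words_per_minute", "speech_rate"}

def strip_biased_features(features: dict) -> dict:
    """Return a copy of the feature vector with flagged metrics removed."""
    return {k: v for k, v in features.items() if k not in BIASED_FEATURES}

candidate = {
    "words_per_minute": 182,      # correlates with gender -> excluded
    "answer_relevance": 0.87,     # job-related signals are kept
    "skill_keyword_matches": 12,
}
print(strip_biased_features(candidate))
# {'answer_relevance': 0.87, 'skill_keyword_matches': 12}
```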
GET AN OUTSIDE OPINION REGULARLY
Finally, even for systems that are fed good-quality training data, tested on practice populations, built to be screeners, and stripped of vocal-speed metrics, it’s often helpful to bring in an independent set of evaluators to look at the outcomes. This third-party “second opinion” on the effectiveness of a trained AI system, obtained prior to full deployment and at periodic intervals afterward, helps ensure the organization is doing everything possible to achieve neutrality in its recruitment and hiring practices.
These outside evaluators can pick out cultural biases in the AI’s training that are unique to the organization. Their assessment can also be useful with regulators and internal stakeholders as further evidence that the commitment to equal opportunity in hiring is sincere, both in the present day and over time. Building – and keeping – AI systems as hiring allies in the fight for diversity can go a long way toward keeping modern recruitment biases at bay.