Artificial intelligence (AI) technology appears to have almost no limits on what aspects of our lives it can touch.
While AI is already having a major impact on the business world, it is poised to take center stage in the hiring process.
AI interviews, in which a job candidate talks with an artificial intelligence interviewer, started to emerge a few years ago but have yet to become commonplace in the corporate sector.
But with ChatGPT, Google’s Bard and other AI chatbots continuing to shake up the recruiting world, it’s likely only a matter of time before more companies – perhaps even the vast majority – incorporate AI interviews into their hiring practices.
According to a recent survey, 43% of companies have adopted or plan to adopt AI interviews by 2024. Is that a good thing? Previous attempts to use AI in the interview process have injected programmed bias into it. But AI experts insist the technology has improved and will continue to do so. And there is no denying the efficiencies and cost savings AI can drive in business processes.
The above statistic, however, may not be the most surprising one in the recent study.
Of the 43% who say their company will be using AI interviews in 2024, 15% say the AI will be used to make decisions on candidates without any human input, according to the survey of more than 1,000 corporate human resources and recruitment specialists recently conducted by ResumeBuilder.com.
“It’s no surprise that companies are investing in AI interviews as they continue to try to streamline the interview process,” comments ResumeBuilder Chief Career Advisor Stacie Haller. “I personally don’t believe that human interaction can ever be replaced, but if companies believe this will help them screen candidates effectively, we will see this practice continue to grow.”
The need for AI assistance in job candidate screening and hiring is greater than ever. With platforms such as LinkedIn and Indeed making it easier than ever to apply for jobs, and remote work options drawing a broader, more geographically dispersed pool of applicants to various openings, human resources officials are increasingly flooded with applications. Sifting through those resumes can be an enormous task.
More than half of the human resources professionals surveyed said they believe AI will eventually replace human hiring managers.
But there is a cautionary tale here, and Haller acknowledges that.
“I’m certainly glad to see that 85% of respondents acknowledge that human input is needed in the hiring process,” Haller says. “If AI eliminates solid candidates, companies need a backup plan to verify the effectiveness of its screening abilities.”
With the rise of AI – and the current social climate – the debate about bias in business practices, and hiring in particular, is likely to intensify.
Computers are devoid of the emotions that often feed bias in humans, and they have been, in some cases, used to curtail bias.
But humans can feed biased thinking into machines, and computers can learn from humans’ biased behavior and replicate it on an even larger scale.
“Artificial intelligence is the next frontier. We’re in the middle of a revolution, and we’re trying to figure out how to take bias out,” says Mary Murphy, an Indiana University professor of psychology and brain sciences. “It’s a Herculean task.”
Elevate Ventures CEO Christopher Day, who founded the AI-driven marketing firm DemandJump, thinks computers – and computer programs – can be rid of bias.
“You have to make sure you don’t insert data into algorithms that will skew the outcome,” Day stresses. “You have to be very intentional with what you feed those algorithms. Clean data sources are the key.”
One example of AI gone wrong is Amazon’s failed attempt to use the technology to screen job candidates. The company scrapped that initiative in 2016 – after nearly three years of development – when it discovered the AI-driven screener was discriminating against women.
In effect, Amazon’s system taught itself that male candidates were preferable. It penalized resumes that included the word “women’s,” as in “women’s chess club captain.” And it downgraded graduates of all-women’s colleges.
“I haven’t seen much evidence that bias can be taken out of algorithms,” Murphy says. “The problem is, all algorithms need data … and they learn associations and make suggestions. All of these data sets at scale have our biases built in.”
Clearly, there’s still much debate and disagreement on the subject, even among experts.
Some researchers and tech experts think it’s easier to program bias out of a machine than out of a human mind. If that research-based conclusion holds, AI-enabled decision-making systems could prove less subject to bias and better able to promote equality.