Artificial Intelligence (AI) has shown great potential at a wide variety of tasks. How about hiring humans?
AI Continues to Grow
Artificial Intelligence has begun playing a larger role in hiring, screening an estimated 75% of all applicant resumés. Sold as a way to eliminate bias from hiring and reliably pick out the best candidate, it has lately begun to seem that this goal remains out of reach. A whopping 88% of hirers who use the practice say they know the AI filters out qualified applicants, and strong evidence suggests this filtering frequently violates hiring discrimination laws, going so far as to predict higher success among applicants named “Thomas.” So how, if at all, can employers use AI in hiring, and how does that change the process for job applicants? Before answering these questions, however, we first must understand why this shift is occurring.
The advantages of AI in hiring are pronounced and persuasive. Possibly the most common, and most understandable, reason is simple time savings, especially when dealing with a high volume of applicants. Given that for many positions the majority of the applicant pool is unqualified in the eyes of the employer, automating the sift could save a great deal of time. While every position is unique, the initial review alone can take well over a dozen hours. That marks a substantial drain on company resources and provides a strong incentive to streamline the process.
In addition, the same source suggests that AI-hired employees show markedly lower turnover, greater productivity, and greater revenue per employee. With turnover rates reduced by over a third, long-term productivity also benefits substantially, since hiring and onboarding consume considerable time and resources. Such gains are nothing to gloss over when deciding how to fill an open position, and a further major advantage comes from reducing human biases.
While the mechanical nature of AI rules out human prejudice in the moment, AI depends on patterns and trends in its training data to make predictions. So, for example, while humans readily understand that being named “Thomas” is not an indicator of job success, the makers of an AI must actively ensure the program never learns to use that as a factor.
The Problems to Overcome
Unfortunately, AI in this setting has other limitations as well, both practical and legal. As with other machine-learning systems, the program requires training material provided by humans. Though it might seem that the third-party firms selling access to their AI could handle this, different jobs have different priorities, so instead of training one or a few AIs, a vendor would likely need dozens at the very least to account for the vast differences in skills each job demands. The fact that one resumé can be perfectly suited to one position and an ill fit for another significantly complicates the problem.
Along these lines, if humans train the AI poorly, it can easily absorb the very human biases we sought to eliminate in the first place. The American Bar Association finds it likely that AI hiring will become subject to Title VII of the Civil Rights Act, the Age Discrimination in Employment Act, and the Americans with Disabilities Act. Applying current case law under those acts to AI would require employers to justify that the tests an AI performs in making decisions are relevant to the job, a lofty task for those unfamiliar with how AI functions. As discussed above regarding ensuring only relevant information is evaluated, any oversight in this area could invite a lawsuit.
In May of this year, the US Department of Justice and the Equal Employment Opportunity Commission warned employers that using AI could violate federal discrimination law, particularly against disabled people, indicating that federal authorities are watching this issue.
Of note, even though most companies would be using AI provided by another corporation rather than their own system, the hiring company would likely be the one to face penalties, since it chose to use the tool. Given how difficult it would be to explain why another company’s system does not violate discrimination law, this should weigh heavily against relying on AI for hiring.
For instance, consider an AI evaluating an applicant’s education. Many people include the year of their degree, which the AI would have to disregard to avoid age discrimination, since someone who graduated long ago must be at least a certain age. Small oversights like this could inadvertently lead to large lawsuits, especially if they go unnoticed for too long. Justifying a hirer’s decisions is already challenging; since an AI cannot explain itself, the hirer will have an even harder time defending the practice.
One idea to remedy this without having humans read every resumé is to sample a portion of the applications, have humans identify which of the sampled applicants they would want the AI to prefer, and compare that list with the applications the AI actually marked for further review. If the two do not closely align, that is a sign to re-evaluate the hiring process.
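This sampling audit can be sketched in a few lines of Python. Everything here is illustrative: the function and parameter names are invented for this example, and `human_review` stands in for whatever process a real human reviewer would use on the sampled applications.

```python
import random


def audit_sample(applications, ai_selected, human_review, sample_size=50, seed=0):
    """Audit an AI screener by comparing its picks on a random sample of
    applications against a human reviewer's picks on the same sample.

    applications: list of application IDs
    ai_selected: set of IDs the AI marked for further review
    human_review: callback returning True if a human would advance the
                  application (hypothetical stand-in for a real reviewer)
    Returns the fraction of sampled applications where human and AI agree.
    """
    rng = random.Random(seed)  # fixed seed so the audit is reproducible
    sample = rng.sample(applications, min(sample_size, len(applications)))
    agreements = sum(
        1 for app in sample if human_review(app) == (app in ai_selected)
    )
    return agreements / len(sample)
```

A low agreement rate from a routine audit like this would be the trigger, per the idea above, to re-evaluate the hiring process.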
However, as always with statistics, the general rule does not apply to every individual; while this probably reduces the chance of bias from the machine, it cannot completely solve the problem. For now, the only viable strategy appears to be keeping human eyes on resumés. That said, using AI to assist in hiring the best candidate, in conjunction with humans, remains viable.
Similar to the sampling idea above for testing the AI’s accuracy, a human reviewer goes through the candidates and selects their top applicants. The hirer then compares that list with the AI’s choices, and anyone selected by the human but not by the AI gets a secondary review before being removed from consideration, so every person not selected did, at some point, have a person look over their application. While this makes the initial review more labor intensive, it may cut down the number of applicants who receive interviews and the final candidate count. Further, a glimpse at whom the AI prefers for the position gives a second, fairly objective standard against which to gauge the human choices.
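The hybrid workflow just described reduces to simple set operations. This is a minimal sketch with invented names, not a prescribed implementation:

```python
def hybrid_screen(ai_picks, human_picks):
    """Combine AI and human screening so no one is rejected without
    human eyes on their application.

    ai_picks / human_picks: sets of applicant IDs each reviewer advanced.
    Returns (advance, secondary_review): applicants both reviewers chose
    move forward; applicants the human chose but the AI did not are
    flagged for a second look before any removal from consideration.
    """
    advance = ai_picks & human_picks           # both agree: move forward
    secondary_review = human_picks - ai_picks  # human-only picks: re-check
    return advance, secondary_review
```

The design choice worth noting is the asymmetry: disagreements resolve toward more human review, never toward silent rejection by the machine.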
With all that said, basic AI is still being used by many companies to eliminate the majority of applicants, so both employers and job seekers need to know how to navigate this environment. The most important hurdle is getting past the AI so that a human can review the application, so the best advice is to include words related to the job as much as possible. A human can easily see how skills transfer from a relatively similar position, but a machine often cannot, so whatever keywords appear in the job description should, while remaining honest, also appear in the resumé. For employers, this means a savvy applicant who might not normally survive the initial screen may get through with clever word choice.
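To see why keyword overlap matters, here is a rough sketch of the kind of matching a very basic screener might do; real systems are more sophisticated, and the function and stop-word list are assumptions made for illustration.

```python
import re


def keyword_coverage(resume_text, job_description):
    """Estimate how well a resumé covers a job description's vocabulary.
    Returns (coverage_ratio, missing_words) so an applicant can see
    which job-description terms their resumé never mentions.
    """
    stop = {"the", "a", "an", "and", "or", "to", "of", "in", "for", "with"}

    def tokenize(text):
        # lowercase alphabetic words, minus common stop words
        return {w for w in re.findall(r"[a-z]+", text.lower()) if w not in stop}

    jd_words = tokenize(job_description)
    resume_words = tokenize(resume_text)
    matched = jd_words & resume_words
    return len(matched) / max(len(jd_words), 1), sorted(jd_words - resume_words)
```

An applicant whose coverage ratio is low would, under this crude model, be filtered out regardless of how well their skills actually transfer, which is exactly the failure mode described above.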
All said, the main takeaway is the need for better, smarter development of AI for job hiring. The technology, while promising, is not yet an adequate replacement for putting human eyes on applications. Beyond the legal risks, the incredible nuance involved in hiring means AI needs more time to catch up to humans before it can safely eliminate a candidate. Rather than rushing out an underdeveloped product, it would be best to integrate AI as a secondary tool rather than the final decider.