Stories about artificial intelligence (AI) stealing our jobs and robots going rogue have been in our collective consciousness for years. Elon Musk has also sounded the alarm bells, calling AI the “biggest risk we face as a civilization”. While he may know a few things I don’t, I can’t say that I agree. Always one to embrace technology, I think AI has great potential to be used by businesses in the HR space, such as to make hiring practices more efficient and fairer.
Online dating sites such as OkCupid have been using AI for over a decade to help people find their love match, so why not apply that success to employers looking for the right candidate?
Picture a hiring manager faced with a thousand job applications to sort through – AI can help Human Resources sift through resumes and identify suitable candidates. AI-assisted applicant screening also has great potential to reduce the risk that candidates will be discounted because of implicit bias that human hiring managers may unconsciously hold. For example, studies have shown that those with anglicized names get more job interviews than those whose names suggest they are members of a minority group.
AI can act as a bias-free screening tool. AI hiring assistants do not know how old candidates are, what they look like or what sex they are. This levels the playing field, ensures diversity of candidates and helps businesses truly find the best talent.
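To make the idea of bias-free screening concrete, here is a minimal sketch of how a "blind" screening step might work: identifying fields are stripped from each application before any scoring happens, so the scorer never sees a candidate's name, age, or gender. The field names and the toy skills-based scorer are illustrative assumptions, not any vendor's actual system.

```python
# A minimal sketch of "blind" candidate screening. The field names and
# the keyword-overlap scorer below are invented for illustration.

IDENTIFYING_FIELDS = {"name", "age", "gender", "photo", "school"}

def redact(application: dict) -> dict:
    """Return a copy of the application with identifying fields removed."""
    return {k: v for k, v in application.items() if k not in IDENTIFYING_FIELDS}

def score(application: dict, required_skills: set) -> float:
    """Toy scorer: fraction of the required skills the candidate lists."""
    skills = set(application.get("skills", []))
    return len(skills & required_skills) / len(required_skills)

applicant = {
    "name": "Jane Doe",
    "age": 34,
    "gender": "F",
    "skills": ["python", "sql", "reporting"],
}

# The scorer only ever sees the redacted record.
blind = redact(applicant)
print(score(blind, {"python", "sql"}))  # → 1.0
```

The point of the design is simple: because redaction happens before scoring, the downstream model has no access to the attributes that trigger implicit bias in the first place.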
Some businesses are doing away with resumes entirely, amid suggestions that they reveal too much information that could trigger potential bias – name, gender, schooling – and that these attributes have very little to do with whether or not the candidate will be a good fit.
One new technology works with employers to film candidates answering questions. AI then measures things like micro-muscle movements in the person’s face to make judgments about their communication skills, level of enthusiasm and so on. This practice shortlists candidates based on applicable skills in a way that is free of human bias.
Another iteration of AI hiring technology, currently being used by some large organizations, uses OkCupid-like questions to find candidates jobs that would be a good match for them. By searching the entire pool of openings, the system directs candidates to jobs that they would not necessarily have applied for, but that may be a good fit. Similar technology is being used to ensure that current employees are in positions that fit well with their skills.
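Under the hood, this kind of matching can be as simple as comparing two vectors of questionnaire answers. The sketch below ranks every open position for a candidate by cosine similarity; the questions, the 1–5 answer scale, and the job profiles are all invented for illustration and do not reflect any particular product.

```python
# A hedged sketch of questionnaire-based job matching: candidate and job
# are each a vector of answers, and cosine similarity ranks openings.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length answer vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

# Answers on a 1-5 scale to hypothetical questions such as
# "prefer teamwork?", "comfort with public speaking?", "interest in data?"
candidate = [5, 2, 4]

openings = {
    "data analyst": [4, 1, 5],
    "sales rep":    [3, 5, 1],
    "researcher":   [2, 1, 5],
}

# Rank every opening, not just the ones the candidate applied to.
ranked = sorted(openings.items(),
                key=lambda kv: cosine(candidate, kv[1]),
                reverse=True)
for title, vec in ranked:
    print(title, round(cosine(candidate, vec), 2))
```

This is what lets the system surface jobs the candidate never applied for: every opening in the pool is scored, so a strong match can rise to the top even if the candidate had not considered it.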
Giving all these jobs to robots and algorithms raises interesting ethical questions. Is it an invasion of a candidate’s privacy to measure the quiver of their lip during a video interview? Moreover, any AI system is only as good as the data fed into it. How do we ensure there are no baked-in biases in the data, or in the way the data is prioritized, and that human-directed data is not somehow tainted with bias, leading to further systemic discrimination?
If something goes wrong, who will be held responsible? Computers do not act with intention and they cannot be punished. How will the law navigate these questions of liability? Corporations, for example, are legally recognized as their own entities. Could the law evolve in the same way with respect to AI?
The expansion of AI in the workplace will continue to raise big questions, and likely trigger the need for policy changes as well as new government regulation. Either way, HR is not being outsourced anytime soon. Rather, technology like AI will serve as another tool to deal with the high volume of work facing every HR department.