
AI-driven Recruitment: An Ethical Evaluation


Imagine a robotic voice saying, “Congratulations! You have been selected for an interview for position X at company Y.” You don't need to imagine too hard, since this is gradually becoming a reality: a number of real-life recruiters now use AI-driven job interviews to assess candidates before a human recruiter contacts them. The odds are that if you apply for a job that attracts many candidates, you will be interviewed by an AI agent. Depending on its implementation, an AI-driven video interview might help reduce the unconscious bias that human interviewers hold against certain candidates. At the same time, advances in AI with respect to sentiment analysis, facial expression recognition, tone analysis and the like have become very accurate; knowing this may cause stress among candidates during interviews, and every AI framework comes with its own ethical implications. AI may also fail to take a candidate's circumstances into account the way a human would. Through this paper, we aim to do a comparative study of the ethical issues posed by a human-driven and an AI-driven recruitment process. We examine ethical implications such as discrimination on the grounds of ethnicity, algorithmic decision-making, opacity and privacy. Finally, we propose a system that can minimise these issues from both the data and the algorithm-design perspective.

In this day and age, running a business efficiently requires keeping the organisation dynamic. One of the long, drawn-out processes that almost every business has to go through is recruiting new people. This has led to the rise of AI recruitment technology. Recruiters can set up interviews with custom questions that require video responses. These videos are then processed for detailed analysis and review, employing technologies such as facial and audio recognition, sentiment analysis and natural language processing. Naturally, this process causes new ethical concerns to materialise, ones that might not have been encountered previously.

Artificial Intelligence is a powerful tool that stands to benefit our society and our quality of life considerably, but the intended use of the technology and its ethics need to be established before implementing it. Despite this fact, throughout the history of mankind, reckless technological expansion has taken place; “just because we could doesn't mean we should” has always been an afterthought. If we do not consider ethics while building technology, we run the risk of incriminating innocent people and causing discrimination, which is unethical. In this article, we study the Artificial Intelligence-driven recruitment process and perform an ethical evaluation at each stage where an ethical concern could emerge. We do a comparative study of the human-centred recruitment process versus the AI-driven recruitment process, bringing to light the reasons why the AI-driven process is on the rise as well as the ethical concerns arising in both processes. Next, we take the reader through the AI-driven recruitment process as a participant. On the way, we arrive at various points where certain ethical values are challenged, and at those points we attempt to show possible ways to mitigate the causes of such concerns. Finally, we conclude with our overall thoughts on the ethical evaluation of this problem and provide mitigation strategies for making this system more ethically sound in the future.

THE GOOD, BAD AND UGLY OF AI

It is often said that luck plays a huge role in clearing job interviews. The outcome depends on what kind of questions you are asked. On top of that, even if you are lucky enough to be asked the right set of questions, you are still subject to the interviewer's unconscious bias. Sometimes it works in your favour (affinity bias) and other times it does not (horns effect) [1]. There is also massive inter-recruiter variability, making the system quite unfair. If we try to solve the problem by having a single interviewer, interviewer fatigue still haunts you [2]: the last candidate is unlikely to get as much attention as the first. This whole process of human recruitment is unfair not only to the candidates but also to the company, which might miss out on better-fitting candidates.

One solution proposed to eliminate the unfairness of such a system is AI recruitment. It is claimed that since unconscious bias is an ingrained human trait, an AI system can eliminate it. An AI system would also mitigate interviewer fatigue and inter-recruiter variability, and by taking load off human recruiters it would allow more candidates to be given a chance to be interviewed. But it is not all sunshine and roses. Although AI systems claim to mitigate unconscious human bias, in reality such biases are perpetuated into the system: it inherits human bias while missing the essential human element. An AI system is not considerate of the different circumstances of humans; it reduces them to a mere set of descriptors, making it hard to judge their future potential. Moreover, the system is trained on a finite amount of data, which might have a disproportionate demographic distribution, resulting in penalisation of candidates whose characteristics differ from those in the dataset. Such a system ends up amplifying biases. Say a dataset for hiring engineers contains a majority of male engineers, reflecting the prevailing gender gap in STEM fields. The AI system would, in turn, hire fewer women than men, reinforcing existing social biases and further widening the gender gap.

The AI recruitment system comes with its pros and cons. But once the ethical implications of the system are identified, they become much easier to address than those of a human recruiter; an audit of the system's selection rates, as sketched below, can surface disparities that would be hard to measure across many human interviewers. In the following sections, we will go through the AI recruitment process from top to bottom, covering automated resume parsing and video interviewing.
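To make the selection-rate disparity concrete, here is a minimal Python sketch of such an audit using the adverse-impact (four-fifths) ratio, a common heuristic for flagging disparate hiring rates between groups. The numbers are synthetic and purely illustrative, not drawn from any cited study:

```python
# A minimal audit sketch: the adverse-impact (four-fifths) ratio
# compares selection rates across groups. All numbers are synthetic.
from collections import Counter

def selection_rates(decisions):
    """decisions: iterable of (group, hired) pairs -> rate per group."""
    totals, hires = Counter(), Counter()
    for group, hired in decisions:
        totals[group] += 1
        hires[group] += int(hired)
    return {g: hires[g] / totals[g] for g in totals}

def adverse_impact_ratio(rates, protected, reference):
    """A ratio below ~0.8 is a common red flag for disparate impact."""
    return rates[protected] / rates[reference]

# Synthetic outcomes mirroring the engineer-hiring example above.
decisions = ([("male", True)] * 40 + [("male", False)] * 60
             + [("female", True)] * 15 + [("female", False)] * 85)

rates = selection_rates(decisions)
print(rates)  # {'male': 0.4, 'female': 0.15}
print(adverse_impact_ratio(rates, "female", "male"))  # 0.375 -> flagged
```

A human hiring pipeline rarely logs enough comparable decisions to compute such a ratio per interviewer, which is precisely why an identified algorithmic bias is easier to measure and address.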

AUTOMATED RESUME PARSER

The first step an interviewee takes in a recruitment process is to apply for a job position. This involves filling out details about previous work experience, education and achievements, along with personal details. The number of applications that can be processed by an automated resume parser in a given amount of time is unmatched by a human recruiter. The two main AI techniques utilised here are text extraction followed by information extraction; Optical Character Recognition, Named Entity Recognition and deep Natural Language Processing models are leveraged for the task. Let us analyse a case study to understand this procedure better and dig out the ethical implications associated with it.

Amazon Inc. had been developing AI-based automated resume parsing software since 2014 [3]. Engineers working on the project claimed that they wanted to build a program which could take in a hundred resumes and “spit out the top five candidates” to hire [3]. But by 2015, the company realised that the software was biased against women [3]: it did not rate candidates for technical posts, such as software developer roles, in a gender-neutral way. This bias was propagated into the software by the data used to train the AI model, which learned the task by observing the patterns in resumes submitted to Amazon over ten years. As the data spanned such an extended period, the male dominance of the tech industry was reflected in it.

As mentioned in [4], not only did Amazon violate the Employment Equality Act, 1998, Section 8 by discriminating against prospective employees based on gender, but it also deployed the software without evaluating the model's performance at an ethical level. Moreover, the applicants' data was utilised in the software development phase, whereas personal data collection should always be done with the consent of the providers, who must be made aware of the period for which their data is to be retained. By not abiding by this rule, Amazon also violated the General Data Protection Regulation [5]. The reason Amazon got away with its actions is the Statute of Limitations, 1957, Section 72, which worked in the company's favour as it was experimenting with a technology whose behaviour is not 100% predictable and it was unaware of the bias in the training data. The software employed at this stage of recruitment is not disclosed, and hence concerns revolving around opacity surface. Transparency from the recruiting end is not desirable to companies, as it might lead to the evolution of techniques that can fool resume parsing systems; after all, the automated resume parser merely penalises resumes dissimilar to the bunch of resumes it has already seen in its data.
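For concreteness, the information-extraction step described above can be sketched with an off-the-shelf named-entity recogniser. This is a hedged, illustrative example: the spaCy model and the resume snippet are placeholders, and a production parser would add OCR for scanned documents plus resume-specific entity types such as skills, job titles and date ranges:

```python
# A hedged sketch of the information-extraction step: general-purpose
# NER over plain resume text. Requires the pretrained model, installed
# via: python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

# Illustrative resume snippet; real input would come from text
# extraction / OCR over the submitted document.
resume_text = (
    "Jane Doe worked at Acme Corp from 2016 to 2021 as a software "
    "engineer and holds an MSc from the University of Edinburgh."
)

doc = nlp(resume_text)
for ent in doc.ents:
    # Prints entities such as: Jane Doe PERSON, Acme Corp ORG, ...
    print(ent.text, ent.label_)
```

Note that nothing in this step is gender-neutral by construction: whatever ranking model consumes these extracted entities inherits the biases of its training resumes, which is exactly what happened in the Amazon case.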

VIDEO INTERVIEW

Let's say you apply for a sales position at a multinational corporation ABC, to get first-hand experience and extensively evaluate the ethical concerns that might arise during the process. ABC has set up an AI-driven interview framework that utilises the latest model in a long line of AI models, HireNet: a Hierarchical Attention Model for the Automatic Analysis of Asynchronous Video Job Interviews [6]. You have tried your best to learn about this model and its evaluation criteria; unfortunately, your background being sales undercuts your ability to understand its nuances. So, you decide to appear for this interview with your desultory knowledge of the AI that is going to interview you. Let's see how the AI-driven interview process works.

You would be assessed based on your audio and video feeds.

This is the first message that flashes up once you start recording your response for the video interview. It immediately makes you conscious of your tone and body language, as would naturally be the case with a human interviewer as well. But according to the specifics of the HireNet model that you read about beforehand, it extracts and evaluates audio features (such as pauses, silences and short utterances), visual features (such as head position, head rotation and gaze direction) and verbal content (such as the number of words). This makes you wonder what would happen if someone with a speech impediment were to go through this format of evaluation. The model might score the candidate poorly simply because it has not seen many such cases in its data. Similarly, people with facial deformities or medical conditions like cross-eyedness will be at a loss, whereas a human recruiter would have recognised these things and brought out the best in the candidate. This AI framework tends to “fit” people into the data it has been trained on; it disregards the “outliers”, which makes the model non-inclusive.

This nature of the model runs even deeper when we look at it through a sociological lens. The model implicitly assumes that the interviewee has a stable internet connection and a good-quality video camera. Pauses or silences in the audio feed and a low-quality video feed might be a consequence of poor network connectivity. Since the model pays attention to these details, it should explicitly handle these issues. It should also take into consideration that attributes like head position and gaze direction are difficult to track if the camera used for recording the interview is of relatively low resolution.

One way to make the model robust is to pre-process the audio and visual feeds before giving them as input to the model. During this pre-processing step, the audio feed could be modified at appropriate locations to remove undesirable stoppages, by monitoring the dynamics of network connectivity while the interviewee records the video; a minimal sketch of such a step follows below. Similarly, various software tools are available that can either refine low-resolution images or help extract the important features present in them. Other issues can be tackled by advanced attention layers which can identify social signals using contextual information. But the most crucial part of technological development must be a collaboration between human sciences and technology research.
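As a concrete illustration of that pre-processing idea, the sketch below drops long silent stretches from an audio answer before it reaches the model, so that network stalls are not scored as hesitation. The file name and the 30 dB threshold are assumptions for illustration; the librosa calls themselves are standard:

```python
# A minimal sketch of audio pre-processing: remove long silences that
# may reflect connectivity stalls rather than candidate hesitation.
import librosa
import numpy as np

# Hypothetical recorded answer; 16 kHz is a common speech sample rate.
y, sr = librosa.load("interview_answer.wav", sr=16000)

# Keep only intervals louder than 30 dB below the peak; the rest is
# treated as silence and cut out.
intervals = librosa.effects.split(y, top_db=30)
y_clean = np.concatenate([y[start:end] for start, end in intervals])

print(f"kept {len(y_clean) / len(y):.0%} of the original samples")
```

A real system would combine this with the connectivity monitoring mentioned above, so that only stoppages correlated with network drops are removed and deliberate pauses are preserved.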

What is your proficiency level in French?

After some fairly straightforward questions arrives this particular question. Fortunately, you are proficient enough in French to apply for this job; in fact, the job requirement asked for applicants fluent in French. But in your pre-interview research about HireNet, you noticed a peculiar thing: the model was trained on 7000+ French speakers. This fact clouds your trust in the model. Non-native French speakers would be disadvantaged: different people, owing to their mother tongues, have different French accents, which means that certain accented speakers would face unfair treatment by the AI model. Also, since the job applicant could be from anywhere around the world, it is not likely that the data was collected for a global demographic, so there is a chance of ethnic discrimination in such a case.

With the potential to automate the interviewing process and eliminate human bias, this AI framework looks like a win-win for efficiency for companies. But needless to say, good intent does not always generate good outcomes. These companies have set up these frameworks because they want to be more objective in their decision-making, but the problem lies with the framework itself: it becomes a mere instrument of efficiency and not an instrument of equity. After all, the algorithms that build up these frameworks are built on their data and are only as equitable as the data they are fed. If these algorithms are built using biased data, the framework will only perpetuate the prejudice. So the issues arising with such data are the non-inclusive nature of the framework and the perpetuation of human bias [7] that might already exist in the data. A possible mitigation is to mask candidates' faces and modulate their voices, removing certain types of racial or ethnic cues that might otherwise creep into the model; a sketch of this idea follows.
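A hedged sketch of that mitigation: blur detected faces in video frames and pitch-shift the voice so the downstream model sees fewer cues tied to ethnicity or accent. The file names are placeholders, the Haar cascade is a simple stand-in for a production face detector, and a two-semitone shift is an arbitrary illustrative choice:

```python
# Illustrative face masking and voice modulation for bias mitigation.
import cv2
import librosa
import soundfile as sf

# --- Mask faces in a single extracted video frame ---
frame = cv2.imread("frame.png")  # hypothetical frame from the video feed
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
    # Blur each detected face region in place.
    frame[y:y + h, x:x + w] = cv2.GaussianBlur(
        frame[y:y + h, x:x + w], (51, 51), 0)
cv2.imwrite("frame_masked.png", frame)

# --- Modulate the voice: shift pitch up two semitones ---
audio, sr = librosa.load("answer.wav", sr=None)  # hypothetical audio file
shifted = librosa.effects.pitch_shift(audio, sr=sr, n_steps=2)
sf.write("answer_modulated.wav", shifted, sr)
```

Note the tension: blurring faces removes exactly the visual features (gaze, head pose) that HireNet scores, so in practice such masking would have to happen inside the feature pipeline rather than on the raw frames.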

Would it be okay if we use the information gathered from your interview to further refine our model?

This is the type of question which could make anyone sceptical about the interview. If this interview is going to be a personality test in any manner, then some personal questions might need to be answered, and you certainly would not want your personal information to be used in a context other than the interview evaluation. But even choosing not to agree to this question gives no guarantee that the information will not be used otherwise.

The obvious glaring concern with this question is privacy. Since most of the algorithms behind this framework use Machine Learning in some form or the other, the AI driving the process keeps evolving, and as it evolves it enhances its own ability to process personal information in ways that can encroach upon an individual's privacy interests. Individuals do need protection for their privacy against any adverse effects that might arise from the use of personal information in AI, but this needs to happen without unduly restricting AI development. The unruly use of personal data could violate an individual's privacy, and it could even have cascading social and political effects, such as disproportionately affecting minorities, or the failed Amazon hiring experiment discussed in the previous section. Rather than looking at this as a failure of the AI system, the difference between data issues specific to the AI and the broad spectrum of political and social problems should be understood and evaluated separately. And of course, consent of all parties involved in any AI-driven process, as well as transparency from the AI ownership, would help mitigate this issue to a certain extent [8]; a minimal sketch of consent-gated data retention follows.
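As a small illustration of what consent and disclosed retention could look like in code, here is a minimal sketch (our own construction, not drawn from any cited system) in which recordings are only eligible for model retraining if the candidate opted in, and are purged after a declared retention period:

```python
# A minimal sketch of consent-gated data retention; the 180-day period
# is an illustrative, disclosed-up-front value, not a legal standard.
from dataclasses import dataclass
from datetime import datetime, timedelta

RETENTION = timedelta(days=180)

@dataclass
class InterviewRecord:
    candidate_id: str
    recorded_at: datetime
    consented_to_training: bool  # the answer to the question above

def training_eligible(records, now=None):
    """Only opted-in, unexpired recordings may be used for retraining."""
    now = now or datetime.now()
    return [r for r in records
            if r.consented_to_training and now - r.recorded_at < RETENTION]

def purge_expired(records, now=None):
    """Drop everything past the disclosed retention period."""
    now = now or datetime.now()
    return [r for r in records if now - r.recorded_at < RETENTION]
```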

How confident are you with your chances of getting hired?

This question takes you back to the hours spent on interview preparation. You must have spent that time making yourself as hireable as possible, researching the company and the HireNet model it uses to interview candidates. But when an AI model decides your hireability, concerns arise about its decision-making. In the HireNet dataset, the 7000+ French speakers applying for a few hundred sales positions were classified as either hireable or non-hireable. The major inference to draw from this is that the model performs a binary classification: essentially, it reduces complex and nuanced human beings to a couple of descriptors. This happens to be a major limitation of AI-driven interviews in general and HireNet in particular. The decision to hire a candidate takes various factors into account: the history of the position, the pros and cons of the previous employee, and the expectations the business has of the new candidate. Another factor is the teamwork aspect of the job, which is tough for a human to judge in a candidate, let alone an AI. And most of all, the skill factor plays into deciding the hireability of a candidate.

For an AI model to judge a candidate on all these factors requires complex algorithm-building. Since this work could be cutting-edge research, the algorithms might not be available publicly, which leads to an opaque and non-explainable AI model: there would be no way to understand how the AI decides hireability. On the flip side, if the algorithms are public knowledge, candidates would have an incentive to fool the system instead of being honest in their effort. Furthermore, the binary annotations of hired or not-hired limit the expanse of the AI model. Sometimes a more nuanced decision needs to be taken with respect to a candidate, for example suggesting that they apply to a different department depending on their skills, or keeping the candidate on the radar because of their potential for future growth. This could be achieved by having fine-grained annotations instead of the binary ones currently used, which could be used to judge a candidate better across different skill sets; one possible schema is sketched below. AI models of the future need complex decision-making ability along with transparency about their algorithms to ensure that ethical values are preserved as much as possible [9].
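To make the fine-grained annotation idea concrete, here is one possible schema, sketched as a small data structure. The dimensions and weights are our own illustrative choices, not HireNet's actual label set:

```python
# An illustrative fine-grained assessment schema replacing the single
# hireable / non-hireable bit.
from dataclasses import dataclass
from typing import Optional

@dataclass
class CandidateAssessment:
    candidate_id: str
    communication: float       # each score in [0.0, 1.0]
    domain_skill: float
    teamwork: float
    growth_potential: float
    # Nuanced outcomes beyond hire/reject: route to another department,
    # or keep on the radar for future openings.
    suggested_department: Optional[str] = None
    keep_on_radar: bool = False

    def overall(self, weights=(0.3, 0.4, 0.2, 0.1)):
        """Weighted aggregate; weights would be role-specific in practice."""
        scores = (self.communication, self.domain_skill,
                  self.teamwork, self.growth_potential)
        return sum(w * s for w, s in zip(weights, scores))
```

Per-dimension scores also make the model's decisions easier to explain to a rejected candidate than a single opaque binary label.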

CONCLUSION

With the increase in the use of AI for various tasks, the automation of recruitment using AI is inevitable. We have seen multiple ethical implications of such systems, and at the same time we have presented solutions to mitigate them. Incorporating these solutions would minimise these issues, yet some of them might remain unforeseen. Therefore, any AI framework must be ethically evaluated before its deployment, and periodic ethical evaluation of such systems is essential, especially when the algorithm is updated. Another step in this direction is to make AI more interpretable and explainable. It has been argued that the trade-off between explainability and accuracy is a myth, and research focused on explainable AI has received immense attention in the past few years. Despite these efforts, AI-based recruitment systems are far from perfect.

An ideal AI-based recruitment system would enable fairness and increase operational efficiency. Until such a system is in place, it is vital that the agency of decision-making vest with humans. The role of AI in recruitment in the near future should be assistive, not disruptive; humans must not be thrown out of the equation. For a crucial process like recruitment, the human element plays a huge role, although discussions around empathetic and compassionate AI have been making the rounds as a counterpoint. The potential of new technology is fascinating, but we need to ensure we measure the impact of such systems objectively. No matter what the future holds for AI-driven recruitment systems, they must be scrutinised, audited and well tested before being brought into the real world.

REFERENCES

[1] Richard E. Nisbett and Timothy D. Wilson. The halo effect: Evidence for unconscious alteration of judgments. Journal of Personality and Social Psychology, 35(4):250, 1977.

[2] Nikoletta Bika. The most common recruiting challenges and how to overcome them.

[3] Jeffrey Dastin. Amazon scraps secret AI recruiting tool that showed bias against women, October 2018.

[4] Akhil Alfons Kodiyan. An overview of ethical issues in using AI systems in hiring with a case study of Amazon's AI based hiring tool. 2019.

[5] General Data Protection Regulation, Regulation (EU) 2016/679, 2016.

[6] Léo Hemamou, Ghazi Felhi, Vincent Vandenbussche, Jean-Claude Martin, and Chloé Clavel. HireNet: A hierarchical attention model for the automatic analysis of asynchronous video job interviews. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 573–581, 2019.

[7] Jake Silberg and James Manyika. Tackling bias in artificial intelligence (and in humans), June 2019.

[8] Cameron F. Kerry. Protecting privacy in an AI-driven world, February 2020.

[9] Trevor Breininger. 5 important factors to consider when making a hiring decision, 2020.

Co-authors: Swasti Shreya Mishra, Gurleen Kaur