Biases in Hiring and How to Eliminate Them
Hiring the right people, with the right skill set, at minimal cost has always been a challenge in every industry. Bringing in new people requires extensive pre-planning, interviews, back-and-forth negotiations with candidates, and finally selecting the person who fits the role, the company mindset, and the work culture. Altogether the process can easily take a couple of months, consuming the firm's time and resources. To address this, 60% of companies have already started using AI for talent management and over 80% plan to increase their use of AI over the next five years, according to a 2021 study reported by Human Resource Executive.
The use cases that AI brings to hiring are almost endless:
- Resume filtering: Selecting the best resumes based on the information provided.
- Social media scanning: Assessing a candidate's social media presence to gauge their interests and skills.
- Skills checking: Selecting candidates whose skill sets best match the role.
- Interview scheduling: Scheduling interviews around the availability of both the interviewer and the interviewee.
- Video screening: Assessing a candidate's behavior and presentation skills during video interviews.
- Chatbots: Answering candidates' questions and doubts at any time, within minutes.
The challenges that come with automating everything
AI systems require enormous amounts of training data, and if that data was generated or sourced through biased methods, or reproduces a real-world bias, the AI trained on it learns the same bias and produces flawed results. In conventional, non-AI hiring too, the conscious or unconscious biases of employers can be projected onto candidates based on their gender, ethnicity, religion, or many smaller attributes, which ultimately leads to the loss of good candidates and a poorly diverse work culture. But if those same biases are unintentionally learned by a supposedly neutral AI system deployed for hiring, the discrimination is scaled up to such a degree that it can cause corporate damage in the form of lawsuits, regulatory fines, shareholder and customer dissatisfaction, and reputational harm, even if the original motive was simply to save some cost and time while hiring.
A report from Harvard Business School described a company whose AI hiring model started rejecting candidates who were missing just one of the skills required for the job. In a resignation-prone economy, such small mistakes can cause companies to miss out on genuinely good candidates who simply lacked one skill, and can also create a word-of-mouth impression that the company's hiring process is broken and that candidates are better off applying elsewhere.
One example of this comes from one of the biggest companies in the eCommerce sector, Amazon. While introducing AI-based solutions into its hiring, Amazon tried to automate the candidate selection process, but the system began selecting only applicants who fit the mold of characteristics possessed by the people already in those roles. This led to a biased framework that weighted an applicant's ethnicity and gender more heavily than their actual talent during selection. Once Amazon realized this, it reverted to its conventional hiring system. Of course, Amazon did not purposely build such bias into its models; the bias found its way in through historical data and affected the entire process.
Another example, again involving Amazon, came to light in 2015, when the company discovered that female candidates were being discarded during automated filtering. The reason, it found, was that the model had been trained mostly on a male-centric data set, so any resume containing the word "women" (such as "women's hockey team") was removed because it did not match the predefined ideal resumes the model had been trained on.
Can keeping such biases get you in trouble?
Absolutely, yes. Producing biased hiring results is as demeaning as it is unethical. The Federal Trade Commission (FTC) has issued guidelines on the use of AI in decisions that affect people's lives, and a company found to be implementing such biased systems may face a heavy penalty or an inquiry.
The guidelines provided by the FTC:
– Be transparent.
– Explain your decision to the customer.
– Ensure that your decisions are fair.
– Ensure that your data and models are robust and empirically sound.
– Hold yourself accountable for compliance, ethics, fairness, and nondiscrimination.
A full blog post on these guidelines can be read here.
How to eliminate such biases and save yourself the extra trouble?
Here come XAI-based techniques to the rescue. XAI, or Explainable AI, is a category of ML techniques and mathematical tools that helps a model's creators understand the decision-making process behind it. It lets stakeholders dive deeper into the model's workings, peek behind the curtain, and see how the model arrives at its outcomes. This helps the parties directly affected by the model's output understand and trust its decisions, and helps create an AI environment that is accountable, responsible, and fair.
XAI methods help eliminate bias in a model by explaining how each feature affects its predictions; if features such as gender or age show disproportionate prominence in the decision-making process, the model is demonstrably biased. The model can then be retrained on a new data set that does not carry that bias, bringing it much closer to being bias-free and trustworthy for its users.
Which XAI methods exactly?
XAI now offers a wide variety of methods, but for hiring some are more useful than others. The most relevant ones are given below:
- Local Surrogate Methods (LIME): Local surrogate methods, best known through LIME, are a class of XAI methods that explain a model's behavior on a local scale using simpler, interpretable models. A simpler model is trained to mimic the main model's decision-making around a single prediction, and that local surrogate provides the explanation for how the black-box model behaves for that instance.
LIME probes the predictions of a complicated black-box model by perturbing feature values of the data point, or instance, being explained. It starts by generating a small data set of perturbed samples and passing them through the black-box model. It then uses this new data set and its predictions (from the black-box model) to train an interpretable model, such as a linear model, weighted by how close each perturbed sample is to the instance under consideration. The resulting surrogate should approximate the black-box model's predictions well locally; it does not need to be a good global approximation, because local fidelity is all the job requires. The surrogate can then be used to explain the strategy behind the black-box model's prediction for that instance.
LIME has clear advantages when used to investigate bias in AI-based hiring systems. The most important is that it is simple to use and understand and produces very human-friendly explanations. The explanations are short and quick to read, so anyone can grasp the model's behavior at a glance. This lets companies review why a certain prediction was made and, when needed, explain it easily to anyone concerned. LIME also works on tabular, text, and image data, which makes it well suited to this use case.
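To make this concrete, here is a minimal sketch of LIME applied to a hypothetical tabular hiring model. The data set, feature names, labels, and classifier are all invented for illustration, and it assumes the open-source `lime` and `scikit-learn` packages are installed.

```python
# A minimal sketch, assuming the `lime` and `scikit-learn` packages are available.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

# Hypothetical hiring data: each row is a candidate, label 1 = hired.
rng = np.random.default_rng(0)
feature_names = ["years_experience", "num_skills_matched", "gpa", "gender_encoded"]
X_train = rng.random((500, 4)) * np.array([15, 10, 4, 1])
y_train = (X_train[:, 1] > 5).astype(int)  # toy labelling rule for demonstration

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Fit a local, interpretable surrogate around one candidate's prediction.
explainer = LimeTabularExplainer(
    X_train,
    feature_names=feature_names,
    class_names=["rejected", "hired"],
    mode="classification",
)
candidate = X_train[0]
explanation = explainer.explain_instance(candidate, model.predict_proba, num_features=4)

# Each tuple is (feature condition, local weight); a large weight on
# gender_encoded would be a red flag for bias.
print(explanation.as_list())
```

If the locally fitted weights consistently place heavy importance on a sensitive attribute such as gender, that is a strong signal to audit the training data before the model goes anywhere near real candidates.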
- Shapley values
The Shapley value is another way to dive deeper into a model's predictions and inner workings. Shapley values come from a concept in game theory in which each contributor receives a fair share of the reward or compensation. Here, the contributors are the features of a data set, and each feature's contribution to a particular prediction is measured.
Shapley values distribute the contribution to a predicted output fairly across all the features, whereas LIME offers no such guarantee; as a result, they have been shown to give complete explanations. Whenever explanations must be accurate and provable, such as in legal situations, the Shapley value may be the only legally compliant method: it rests on a solid axiomatic foundation (efficiency, symmetry, dummy, additivity) that gives the explanations a rigorous backing and ensures the effects are distributed fairly among the features.
Shapley values also allow comparisons against a single data point or a subset of the data set, giving rise to contrastive explanations. Local methods such as LIME do not offer this kind of contrastiveness in their explanations.
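The idea can be illustrated with a small, self-contained sketch that computes exact Shapley values by brute force over feature coalitions for a toy scoring function. The scorer, features, and baseline values are invented for demonstration; in practice a library such as SHAP approximates these values far more efficiently for real models.

```python
# A self-contained sketch: exact Shapley values by brute force for a toy scorer.
from itertools import combinations
from math import factorial

# Hypothetical candidate and baseline (average) feature values.
candidate = {"years_experience": 6, "num_skills_matched": 4, "gender_encoded": 1}
baseline = {"years_experience": 3, "num_skills_matched": 2, "gender_encoded": 0}

def score(x):
    # Toy stand-in for a model; note it (unfairly) rewards gender_encoded.
    return 0.5 * x["years_experience"] + 1.0 * x["num_skills_matched"] + 2.0 * x["gender_encoded"]

def coalition_value(present):
    # Features in the coalition keep the candidate's values; the rest use the baseline.
    return score({f: (candidate[f] if f in present else baseline[f]) for f in candidate})

def shapley_values():
    names = list(candidate)
    n = len(names)
    phi = {}
    for f in names:
        others = [g for g in names if g != f]
        total = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (coalition_value(set(subset) | {f}) - coalition_value(set(subset)))
        phi[f] = total
    return phi

# A large attribution on gender_encoded means the "model" leans on a protected attribute.
print(shapley_values())
```

Because the toy scorer is linear, each Shapley value reduces to the coefficient times the feature's deviation from the baseline, and the attributions sum exactly to the difference between the candidate's score and the baseline score, which is the efficiency axiom in action.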
- Disparate Impact Analysis
Disparate Impact Analysis (DIA) is used primarily to remove bias from a system and increase fairness. Bias can enter a system through many routes, such as how the data was sourced, which values were dropped, or how the sample was selected, and it should be removed before it causes any harm.
DIA typically compares aggregated values for an unprivileged group against a privileged group. For example, the proportion of the unprivileged group that receives a potentially harmful outcome is divided by the proportion of the privileged group that receives the same outcome; the resulting ratio is then used to determine whether the model is biased.
The common industry standard is the four-fifths rule: if the unprivileged group's rate of positive outcomes is less than 80% of the privileged group's rate, this is considered a disparate impact violation. The threshold may vary by industry and by the users affected by the decisions. Hiring can benefit tremendously from such comparisons, removing disparities that might arise from features such as gender or race.
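As a quick worked example of the four-fifths rule, the sketch below computes the disparate impact ratio from hypothetical selection counts; the groups, counts, and the 0.8 threshold are placeholders that would come from your own hiring data and policy.

```python
# Hypothetical hiring outcomes per group; counts are placeholders.
selected = {"privileged": 60, "unprivileged": 25}
applied = {"privileged": 100, "unprivileged": 80}

# Selection (positive-outcome) rate for each group.
rates = {group: selected[group] / applied[group] for group in selected}

# Disparate impact ratio: unprivileged rate divided by privileged rate.
di_ratio = rates["unprivileged"] / rates["privileged"]
print(f"Selection rates: {rates}")
print(f"Disparate impact ratio: {di_ratio:.2f}")

# Four-fifths rule: a ratio below 0.8 is commonly treated as a violation.
if di_ratio < 0.8:
    print("Potential disparate impact violation")
else:
    print("Passes the four-fifths check")
```

With these placeholder numbers the unprivileged group's selection rate (about 31%) is roughly half the privileged group's (60%), so the ratio of about 0.52 falls well below the 0.8 threshold and would flag the process for review.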
- Counterfactual Explanations
Counterfactual explanations (CEs) are a method for finding out which inputs would have to change to produce a different result from an ML model. They have proven quite helpful in hiring as well as in credit lending. If a person is rejected by a model, she can ask which factors led to her rejection, and with the help of CEs she can understand what could be improved or changed to gain acceptance.
Sanity should obviously prevail when making such suggestions to a user. For instance, a person cannot change their age, gender, or ethnicity just to gain acceptance. In hiring, CEs can instead point applicants to a missing skill or educational qualification, such as an MBA, which can genuinely help candidates re-evaluate themselves and work on those requirements where possible.
CEs are easy to use and understand, with no dependency on the underlying data or model. They can even be applied to systems that involve no machine learning at all, only inputs and an output. In hiring, they have a lot to offer employees as well as employers.
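Below is a minimal, hand-rolled sketch of the counterfactual idea: starting from a hypothetical rejected candidate, it searches only over mutable features such as skills matched or an MBA, leaves attributes like age untouched, and reports the smallest single change that flips the decision. The decision rule is a stand-in for a trained model, and dedicated counterfactual libraries would perform a far more principled search.

```python
# A hand-rolled counterfactual search over mutable features only.
candidate = {"years_experience": 4, "num_skills_matched": 3, "has_mba": 0, "age": 29}

def model_decision(x):
    # Stand-in for a trained hiring model (or any black-box decision system):
    # accept when enough skills match or the candidate holds an MBA.
    return int(x["num_skills_matched"] >= 6 or x["has_mba"] == 1)

# Only these features may be perturbed; age (and, in a real system, gender or
# ethnicity) is deliberately excluded from the search space.
search_space = {
    "num_skills_matched": range(3, 11),
    "has_mba": [0, 1],
}

counterfactuals = []
if model_decision(candidate) == 0:
    for feature, values in search_space.items():
        for value in values:
            proposal = dict(candidate, **{feature: value})
            if model_decision(proposal) == 1:
                # Naive distance: absolute change in the single perturbed feature.
                counterfactuals.append((feature, value, abs(value - candidate[feature])))

counterfactuals.sort(key=lambda c: c[2])
for feature, value, change in counterfactuals[:3]:
    print(f"Change {feature} from {candidate[feature]} to {value} (delta {change})")
```

With these toy numbers the suggestions read as "earn an MBA" or "match three more skills", which is exactly the kind of actionable feedback a rejected applicant could use.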
Final words
Although AI has shown a lot of promise in every sector, the problems that arise from improper training are quite troublesome. When using AI in hiring, extra care has to be taken to deploy models that have been tested and shown to be bias-free. This blog discussed some of the problems that occur when biased models are used in hiring, producing unfair outcomes that can damage the people affected in various ways.
We also looked at XAI solutions that may be able to find and eliminate the bias present within AI-based systems. Such solutions will help accelerate the growth and adoption of AI while also leading to a world of automated systems that saves us a great deal of time and cost, yet remains fair, reliable, and merit-based.
How does ANAI help with biases?
ANAI is an all-in-one ML-based solutions platform that manages the A to Z of AI, from data engineering to explainability, with a focus on a no-code approach. ANAI's AI engine is built to deliver strong performance for any AI-based system.
ANAI has been built with explainability and responsibility in mind. We offer more than 25 XAI models and techniques, including Shapley values, LIME, LOFO, and many more, that can meet your explanatory needs at the click of a button, delivering the explanations required by your industry, stakeholders, and customers.
Visit our website www.anai.io to know more or contact us at info@anai.io.
Connect with us for more details on the pricing offers and other solutions and recommendations.