The risks of using AI to improve diversity in corporations

Often hailed as a silver-bullet solution for cutting the time and costs involved in the hiring process, artificial intelligence continues to dominate discussions in executive search. Whether it’s chatbot technology to enhance the candidate experience or targeted screening through social media, the potential AI software offers HR in tackling the more mundane tasks is undeniable.

However, while increasingly intelligent bots can sift through CVs to find suitable candidates in a quick and seemingly unbiased way, there is a danger in treating AI as a quick fix for workplace diversity. After all, algorithms aren’t neutral by nature simply because they are built on numbers.

While over a quarter of Brits believe that artificial intelligence can deliver a fairer hiring process, the technology rests entirely on human input: it learns from the pre-existing, real-world data it is fed and develops its behaviour in line with that training. As a result, heavy reliance on machine learning software in the hiring process carries a number of risks.

The disparate impact of big data

Employers are often quick to assume that replacing human judgement with machine learning algorithms removes all bias from the process. In reality, candidate-sourcing algorithms that rely on big data can and often do favour certain factors at the expense of others, be it age, gender or race.

For instance, in a recent US study, researchers explored the example of an employer seeking the candidates most likely to commit to a role for the longest duration. If machine learning software assessed the historical data to source the most suitable candidates, it would likely discriminate against women on the basis that, historically, they have tended to leave jobs sooner than their male counterparts to make time for motherhood.
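
To make the mechanism concrete, here is a minimal Python sketch of that failure mode. It uses synthetic data only; the feature names, effect sizes and numbers are invented for illustration, not drawn from the study:

```python
# Minimal sketch with synthetic data: all names and numbers here are
# invented for illustration, not taken from the study.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1_000

# Historical records: label 1 means the employee stayed long-term.
# We bake in the pattern the article describes: women in the data
# left sooner, for reasons unrelated to ability.
is_female = rng.integers(0, 2, size=n)
skill = rng.normal(0, 1, size=n)
stayed_long = (skill - 0.8 * is_female + rng.normal(0, 1, size=n) > 0).astype(int)

X = np.column_stack([skill, is_female])
model = LogisticRegression().fit(X, stayed_long)

# The learned coefficient on is_female comes out negative: two otherwise
# identical candidates now receive different scores by gender alone.
print(dict(zip(["skill", "is_female"], model.coef_[0].round(2))))
```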

Similarly, employers eager to find candidates living close to the company premises may rely on AI algorithms to rank prospective employees by their distance from the office. In doing so, however, the employer risks discriminating against candidates of a particular ethnicity, age or social background, since the demographic profile of a neighbourhood can be markedly homogeneous. Restricting a search to a certain location will inevitably exclude a wide range of individuals from the results.
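
The proxy effect is easy to reproduce. The hypothetical sketch below (the office location, candidate homes and 10 km cut-off are all made up) filters candidates by straight-line distance from the office; because whole neighbourhoods fall inside or outside the radius together, any demographic skew in those neighbourhoods carries straight through to the shortlist:

```python
# Hypothetical distance filter: coordinates and the 10 km radius are
# invented for illustration.
from math import radians, sin, cos, asin, sqrt

def km_between(lat1, lon1, lat2, lon2):
    """Haversine great-circle distance in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

OFFICE = (51.5074, -0.1278)  # central London, for the sake of example

candidates = [
    {"name": "A", "home": (51.51, -0.12)},  # inner-city neighbourhood
    {"name": "B", "home": (51.65, -0.55)},  # outer suburb
]

# Neighbourhoods pass or fail the cut-off as a block, so the shortlist
# inherits whatever demographic profile those neighbourhoods have.
shortlist = [c for c in candidates if km_between(*OFFICE, *c["home"]) <= 10]
print([c["name"] for c in shortlist])
```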

AI needs diversity, not the other way around

Put simply, an AI algorithm is only as good as the data it works with. Instead of using their own company culture as the model for machine learning algorithms to base their search on, employers should actively seek out data that is inclusive by design. Feeding an AI data sets embedded with patterns of pre-existing bias and expecting the results to be fair and representative of a diverse population is unrealistic.

If AI is to help build a diverse workforce within corporations, AI solutions must be adjusted to correct for the bias that exists within data sets and algorithms. One approach is to introduce constraints that require the software to select candidates from every ethnicity, age group and gender in comparable numbers; however, this can become counterproductive when taken to the extreme.
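
As a rough illustration of what such a constraint might look like, and of where it breaks down, the sketch below (invented pool, scores and group labels) enforces a hard per-group quota; when a group is small, the quota simply cannot be met, which is the counterproductive extreme described above:

```python
# Illustrative hard quota: pick the top-k candidates from every group.
# The pool, scores and group labels are invented for illustration.
from collections import defaultdict

def quota_select(pool, k_per_group):
    """Return the k highest-scoring candidates from each group."""
    by_group = defaultdict(list)
    for candidate in pool:
        by_group[candidate["group"]].append(candidate)
    picked = []
    for members in by_group.values():
        members.sort(key=lambda c: c["score"], reverse=True)
        picked.extend(members[:k_per_group])
    return picked

pool = [
    {"name": "A", "group": "x", "score": 0.9},
    {"name": "B", "group": "x", "score": 0.8},
    {"name": "C", "group": "y", "score": 0.4},
]

# Group y has only one member, so a quota of two per group is
# unsatisfiable -- rigid quotas degrade as group sizes shrink.
print([c["name"] for c in quota_select(pool, k_per_group=2)])
```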

If employers want to improve diversity, it’s clear that AI-powered technology cannot bear the responsibility alone. With further research and development, machine learning could one day help to eliminate bias in hiring; until then, relying too heavily on big data and AI may only exacerbate the problem.