Rapid digital innovation is transforming how people look for jobs, how companies recruit, and, as a result, the competition for jobs itself. Digital tools give job seekers new ways of describing themselves and employers new sources of data on candidates, in real time and at low cost. But are there hidden perils?
We spend a lot of time online. Over 90% of the population in many developed countries has access to the internet. In the USA, 18- to 35-year-olds spend the equivalent of a full working day a week on social media alone. When it comes to looking for a job, Adecco estimates that candidates spend, on average, three-quarters of their job search time online, on websites and social media (platforms include LinkedIn, Facebook Jobs, Google Careers, and Glassdoor, amongst many others).
If companies want to attract talent, they increasingly need to recruit using digital tools. Many companies have digitalized vacancy advertisements and make use of job search sites, social media, and specialized professional platforms. But they have also automated other parts of the recruitment process. As part of the growing HR Tech market, employers have turned to AI and machine learning software to review and rank CVs in the quest for greater accuracy and consistency and to reduce processing costs. AI-powered psychometric testing and video interview analysis are further manifestations of the digitalization of the labour market. Some companies see little alternative to digitalizing hiring: Google reportedly receives several million applications per year, and Unilever 250,000 applications for 800 new graduate positions. Innovations in digital hiring also promise to make the process fairer by making it more objective, accessible, and transparent.
The rise of the digital labour market also presents challenges that need to be addressed. A central issue relates to the effectiveness of these new tools and their implications for fairness in recruitment. Claims that AI-enabled tools are more effective than humans in discriminating between good and bad candidates are confronted with arguments that point to various sources of bias and lack of transparency.
Facilitating human biases
By analysing social media activity, employers gain access to sensitive personal data, including old photographs, evidence of political views, relationship status, or sexual preferences. Demographic information found online, such as sex, age, primary language, and religion, can also create hiring biases. The same information can be viewed very differently depending on where it appears: adding a photograph to a CV, or requiring one, can be perceived as improper or even be against the law in some countries, whereas not including a photograph in a professional social media profile can be portrayed by employers as naïve and unprofessional.
Replicating human biases
Existing biases may be replicated, in spite of a veneer of objectivity, as algorithms learn human biases. One source is 'confirmation bias': algorithms are fed data on current employees' characteristics and performance in order to predict an applicant's performance. If a company tends to promote men as a result of gender biases, the algorithm may identify being male as a marker for achievement. More generally, recruitment may become more homogeneous, and the selection of 'non-traditional' profiles less likely, when it is based on replicating the existing profiles of high performers. Concerns about bias are compounded by the rather homogeneous profile of the algo-makers: much of the recruitment software is created by men, and in specific geographical regions, especially the United States.
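The mechanism can be sketched in a few lines. The data below is entirely fabricated for illustration: in this invented promotion history, past promotions favoured men, so the 'was promoted' labels track gender rather than skill, and any model trained on them would inherit the bias.

```python
# Fabricated illustration of confirmation bias in training data.
# Tuples are (gender, skill_score, was_promoted); promotions in this
# invented history depended on gender bias, not on skill.
history = [
    ("M", 7, True), ("M", 5, True), ("M", 6, True), ("M", 4, False),
    ("F", 8, True), ("F", 7, False), ("F", 6, False), ("F", 5, False),
]

def promotion_rate(gender: str) -> float:
    """Share of employees of the given gender who were promoted."""
    rows = [r for r in history if r[0] == gender]
    return sum(r[2] for r in rows) / len(rows)

# Women in this data are at least as skilled, yet the labels say
# otherwise, so 'being male' looks like a predictor of achievement
# to any model fitted on this history.
print(promotion_rate("M"))  # 0.75
print(promotion_rate("F"))  # 0.25
```

The point of the sketch is that the bias enters through the labels themselves: no amount of model tuning fixes training data in which the outcome variable already encodes discriminatory decisions.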
Flipping the bias
‘Flipping the bias’ is a different form of bias, one that may favour minorities: if biases against a minority mean that only ‘star applicants’ with great potential are recruited from that minority (because they need to be so much better than other applicants to overcome the discrimination they face), the algorithm will come to associate the discriminated-against attribute with high performance.
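A minimal simulation makes the effect concrete. All numbers here are invented: candidate ability is drawn uniformly from [0, 1], and the higher hiring threshold for minority candidates stands in for discrimination in the process.

```python
# Hypothetical sketch of 'flipping the bias'. The thresholds and the
# uniform ability distribution are assumptions made for illustration.
import random

random.seed(0)  # deterministic for illustration

def hire(minority: bool, ability: float) -> bool:
    # Biased process: minority candidates must clear a higher bar.
    threshold = 0.8 if minority else 0.5
    return ability > threshold

applicants = [(m, random.random()) for m in (True, False) for _ in range(5000)]
hired = [(m, a) for m, a in applicants if hire(m, a)]

def mean_ability(minority: bool) -> float:
    vals = [a for m, a in hired if m == minority]
    return sum(vals) / len(vals)

# Among those hired, the minority group's average ability is higher,
# so a model trained only on hires would read the minority marker
# as a signal of high performance.
print(mean_ability(True) > mean_ability(False))  # True
```

The flipped association is an artifact of selection: because the biased filter admits only exceptional minority candidates, conditioning on being hired reverses the sign of the correlation.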
Lack of transparency
Despite claims to the contrary, the digital recruitment process also lacks transparency: how performance in a digital game, or signals such as facial expressions and speech speed, are judged and weighted by algorithms to predict performance may not be easy for candidates, or even recruiters, to understand. Employers may lose sight of how exactly their recruitment is operating.
Struggles with acceptability
The use of AI for recruitment is, at the moment at least, far from universally accepted amongst job seekers and recruiters. Human decisions are often seen as fairer than those taken by algorithms in recruitment, because algorithms are seen to lack human judgment and to struggle with qualities that are difficult to quantify. Moreover, most work on the effectiveness of AI recruitment tools relates to their predictive capacity regarding personality or job interview performance; findings on their validity in predicting actual job performance are scarce.
What is being done about it?
Recruiters and developers can make decisions on whether and how to use new technological possibilities in more acceptable ways. Vendors are responding to these concerns by evaluating their models and by providing software that removes personal information (dates, photos, names, clues to sexual orientation or religion) from applicants' profiles before human review. Confirmation biases, such as those that led Amazon to scrap an AI recruiting tool that discriminated against women for software development jobs, are being addressed through 'weighting' of less represented populations to make up for under-representation. Some vendors specifically present the reduction of biases as a selling point in their public information, although it is still difficult to explain how algorithms make particular decisions or to evaluate whether they offer a fairer basis for recruitment.
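The pre-review redaction step that vendors describe can be sketched very simply. This is a toy, assuming a flat dictionary profile with invented field names; real vendor software handles free text, images, and far subtler cues.

```python
# Toy sketch of pre-review redaction. The field names and the profile
# structure are assumptions made for illustration, not a vendor API.
SENSITIVE_FIELDS = {"name", "photo", "date_of_birth", "religion",
                    "marital_status"}

def redact(profile: dict) -> dict:
    """Return a copy of the profile with sensitive fields removed."""
    return {k: v for k, v in profile.items() if k not in SENSITIVE_FIELDS}

applicant = {
    "name": "Jane Doe",
    "date_of_birth": "1990-04-01",
    "photo": "jane.jpg",
    "skills": ["Python", "SQL"],
    "experience_years": 6,
}

print(redact(applicant))  # {'skills': ['Python', 'SQL'], 'experience_years': 6}
```

Even a sketch like this shows the limitation the text goes on to note: stripping labelled fields is easy, but proxies for the same attributes (names of clubs, gaps in employment, writing style) can survive redaction, which is one reason fairness remains hard to verify.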
There are, thus, important questions of both fairness and efficiency in the use of digital tools for recruitment.