Algorithms are analysing people’s expressions and tone of voice to check for traits such as “confidence” and “happiness” during video interviews.
The robotic video assessment software is then used to hire candidates — customer service operators and assistant vice presidents alike — though the process comes with its own set of problems.
Axis Bank used algorithm-based video interviews — along with aptitude tests — to hire around 2,000 customer service officers from a pool of more than 40,000 applicants this year, said Rajkamal Vempati, HR head of the private sector bank, adding it could standardise and scale up the process of hiring.
HR managers stepped in only to hand out offer letters, Vempati said.
Nirmal Singh, CEO of Wheebox, a division of PeopleStrong that carried out the hiring, said it trained the face-indexing software, sourced from Microsoft, on around 50,000 candidates who had applied to Axis Bank in 2017. The software picked up emotional states such as “nervousness” and “happiness” from eye movements, expressions and tone of voice and scored the candidates on them, Singh said. The scores of shortlisted candidates were then used to set the “cutoff” for these traits.
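Neither Axis Bank nor Wheebox has published the scoring rule, but the idea of deriving a trait “cutoff” from the scores of already-shortlisted candidates can be illustrated with a minimal sketch. Everything below, including the trait names, the score scale and the use of a median as the threshold, is an assumption for illustration rather than the vendor’s actual method.

```python
import statistics

# Hypothetical per-candidate trait scores from a video-analysis model.
# The trait names and the 0-100 scale are assumptions, not Wheebox's output format.
shortlisted_scores = [
    {"confidence": 72, "happiness": 65},
    {"confidence": 80, "happiness": 70},
    {"confidence": 68, "happiness": 60},
]

def derive_cutoffs(scores, traits):
    """Take the median score of already-shortlisted candidates as the cutoff
    for each trait; the rule the vendor actually uses is not public."""
    return {
        trait: statistics.median(candidate[trait] for candidate in scores)
        for trait in traits
    }

def passes(candidate, cutoffs):
    """A new applicant clears the screen if every trait meets its cutoff."""
    return all(candidate[trait] >= cutoff for trait, cutoff in cutoffs.items())

cutoffs = derive_cutoffs(shortlisted_scores, ["confidence", "happiness"])
print(cutoffs)                                               # {'confidence': 72, 'happiness': 65}
print(passes({"confidence": 75, "happiness": 66}, cutoffs))  # True
```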
Insurance provider Bajaj Allianz has hired more than 1,600 people, including underwriters and assistant vice presidents, with the help of robotic video assessments that analysed behaviour, said Vikramjeet Singh, the company’s chief HR officer, adding that the assessments could help reduce human bias.
CONCERNS OVER SOFTWARE’S BIASES
Talview, a company headquartered in Palo Alto with operations in Singapore, provided the assessments for the insurer.
The software, sourced from Microsoft and IBM, can analyse states such as “anger” and “happiness” from expressions, “confidence” from voice tone, and traits like “ability to work in a team” and “decisiveness” from text analysis, according to Rajeev Menon, chief product officer at Talview.
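Talview has not disclosed how the three signal streams are combined, so the sketch below only shows the general shape of such a multimodal assessment. The AssessmentResult fields, the weights and the blending formula are assumptions made for illustration; the Microsoft and IBM services the company actually calls are not modelled here.

```python
from dataclasses import dataclass

@dataclass
class AssessmentResult:
    emotions: dict     # e.g. {"anger": 0.1, "happiness": 0.7} from facial analysis
    confidence: float  # 0..1, inferred from voice tone
    traits: dict       # e.g. {"teamwork": 0.6, "decisiveness": 0.8} from text analysis

def composite_score(result: AssessmentResult, weights=(0.4, 0.3, 0.3)) -> float:
    """Blend the three modalities into one 0..1 score. The weights and the
    averaging rule are illustrative assumptions, not the vendor's formula."""
    w_face, w_voice, w_text = weights
    face = result.emotions.get("happiness", 0.0) - result.emotions.get("anger", 0.0)
    face = max(0.0, min(1.0, face))                 # clamp to the 0..1 range
    text = sum(result.traits.values()) / max(len(result.traits), 1)
    return w_face * face + w_voice * result.confidence + w_text * text

example = AssessmentResult(
    emotions={"anger": 0.05, "happiness": 0.75},
    confidence=0.8,
    traits={"teamwork": 0.7, "decisiveness": 0.6},
)
print(round(composite_score(example), 3))  # 0.715
```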
Candidates may be able to beat questionnaires by giving expected answers to questions like “Can you work in a team?”, but video assessments pick up on subtleties in expression and vocabulary, and cannot be gamed, Menon said.
Even so, Amazon.com scrapped its artificial intelligence-based recruiting system after finding it was biased against women, according to an October 2018 Reuters report. The system had learned from the company’s past hiring data, in which far more men than women had made it into the company.
“If you can fool a human, you can fool a computer,” said Sunil Abraham, executive director of the Centre for Internet and Society.
Recruitment algorithms could “homogenise the emotional economy” by forcing people to act a certain way, he said.
Since the software is based on expressions and tone of voice, it could disadvantage less expressive people, like those who are autistic, said Wheebox’s Singh.
Facial recognition software from companies such as IBM, Microsoft and Amazon misclassified the gender of darker-skinned women in 20-35% of cases, as often as one error in three, a 2018 study by MIT researcher Joy Buolamwini found. For lighter-skinned men, the error rate was at most 0.8%.
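The disparity the study reported comes from disaggregated evaluation, that is, measuring the error rate separately for each demographic group instead of quoting one overall figure. A minimal sketch of that calculation follows; the sample records are invented placeholders, not data from the study.

```python
from collections import defaultdict

def error_rates_by_group(records):
    """Compute the gender-classification error rate per demographic group.
    Each record is (group, true_gender, predicted_gender)."""
    totals, errors = defaultdict(int), defaultdict(int)
    for group, truth, prediction in records:
        totals[group] += 1
        if truth != prediction:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

# Invented sample predictions, only to show how per-group rates expose disparity.
sample = [
    ("darker-skinned female", "female", "male"),
    ("darker-skinned female", "female", "female"),
    ("darker-skinned female", "female", "male"),
    ("lighter-skinned male", "male", "male"),
    ("lighter-skinned male", "male", "male"),
]
print(error_rates_by_group(sample))
# {'darker-skinned female': 0.666..., 'lighter-skinned male': 0.0}
```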
This article was published in The Economic Times.