Abstract
Assuming that potential biases of Artificial Intelligence (AI)-based systems can be identified and controlled for (e.g., by providing high-quality training data), employing such systems to augment human resource (HR) decision makers in candidate selection provides an opportunity to make selection processes more objective. However, as the final hiring decision is likely to remain with humans, prevalent human biases could still cause discrimination. This work investigates the impact of an AI-based system’s candidate recommendations on humans’ hiring decisions and how this relation could be moderated by an Explainable AI (XAI) approach. We used a self-developed platform and conducted an online experiment with 194 participants. Our quantitative and qualitative findings suggest that the recommendations of an AI-based system can reduce discrimination against older and female candidates but appear to cause fewer selections of foreign-race candidates. Contrary to our expectations, the same XAI approach moderated these effects differently depending on the context.
Applying XAI to an AI-based system for candidate management to mitigate bias and discrimination in hiring
Electronic Markets; 32(4)
2022
Journal article
Electronic resource
English
“I think it, therefore it’s true”: Effects of self-perceived objectivity on hiring discrimination
Online Contents | 2007
People of Color in the Academy: Patterns of Discrimination in Faculty Hiring and Retention
British Library Conference Proceedings | 2000
American Airlines' Pilot Hiring Criteria
SAE Technical Papers | 1989