Journal article

Reducing the incidence of biased algorithmic decisions through feature importance transparency: an empirical study

Abstract

As firms move towards data-driven decision-making using algorithmic systems, concerns are raised regarding the lack of transparency of these systems, which could have ramifications for users’ trust and the potential to provoke discriminatory decisions. Although previous research has developed methods to improve algorithmic transparency, little empirical evidence exists regarding the effectiveness of these approaches. Drawing upon Rest’s theory of ethical decision-making and the literature on algorithmic transparency and bias, we investigate the effectiveness of feature importance (FI), a common transparency-enhancing approach that illustrates the nature and weights of the features utilised by an algorithm. Through an online experiment employing a fictitious tool that provided recommendations for selecting employees for a promotion-related training programme, we find that FI is effective when biased recommendations involve direct discrimination (i.e. when individuals are treated less favourably on protected grounds such as gender), but is of little assistance when discrimination is indirect (i.e. when an apparently neutral criterion or practice disadvantages a group of individuals belonging to a protected class). Additionally, we propose a new transparency approach, using aggregated demographic information, to accompany FI in indirect discrimination circumstances and report the results of testing its effects.

Authors

Ebrahimi S; Abdelhalim E; Hassanein K; Head M

Journal

European Journal of Information Systems, Vol. 34, No. 4, pp. 636–664

Publisher

Taylor & Francis

Publication Date

July 4, 2025

DOI

10.1080/0960085X.2024.2395531

ISSN

0960-085X
