Fair Voice Biometrics: Impact of Demographic Imbalance on Group Fairness in Speaker Recognition
Gianni Fenu (Università di Cagliari, Italy), Mirko Marras (EPFL, Switzerland), Giacomo Medda (Università di Cagliari, Italy), Giacomo Meloni (Università di Cagliari, Italy)
Speaker recognition systems play a key role in modern online applications. Although the susceptibility of these systems to discrimination has recently been studied through group fairness metrics, assessment has mainly focused on the difference in equal error rate across groups, without accounting for other fairness criteria, important in anti-discrimination policies, that are defined for demographic groups characterized by sensitive attributes. In this paper, we therefore study how existing group fairness metrics relate to the balancing settings of the training data set in speaker recognition. We conduct this analysis by operationalizing several definitions of fairness and monitoring them under varied data balancing settings. Experiments on three deep neural architectures, evaluated on a data set including gender- and age-based groups, show that balancing group representation positively impacts fairness, and that the friction across security, usability, and fairness depends on the fairness metric and the recognition threshold.
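To make the per-group assessment concrete, the sketch below computes the equal error rate (EER) separately for two demographic groups and reports their absolute gap, a simple group fairness measure of the kind the abstract contrasts with richer criteria. The trial scores, group names, and score distributions are synthetic illustrations, not the paper's data or method.

```python
import numpy as np

def eer(scores, labels):
    """Equal error rate: the operating point where the false accept
    rate (impostor trials accepted) equals the false reject rate
    (genuine trials rejected)."""
    thresholds = np.sort(scores)
    far = np.array([np.mean(scores[labels == 0] >= t) for t in thresholds])
    frr = np.array([np.mean(scores[labels == 1] < t) for t in thresholds])
    i = np.argmin(np.abs(far - frr))  # closest FAR/FRR crossing
    return (far[i] + frr[i]) / 2

# Hypothetical verification scores for two groups; the "male" group is
# given better genuine/impostor separation to mimic a biased system.
rng = np.random.default_rng(0)
groups = {}
for name, shift in [("female", 0.0), ("male", 0.3)]:
    genuine = rng.normal(1.0 + shift, 0.5, 500)   # same-speaker trial scores
    impostor = rng.normal(0.0, 0.5, 500)          # different-speaker trial scores
    scores = np.concatenate([genuine, impostor])
    labels = np.concatenate([np.ones(500), np.zeros(500)])
    groups[name] = eer(scores, labels)

# Absolute EER difference across groups: the fairness notion most
# prior work has focused on, per the abstract.
gap = abs(groups["female"] - groups["male"])
```

A single scalar gap like this hides where along the threshold axis the groups diverge, which is one reason the study monitors several fairness definitions at varied operating points.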