Most modern speaker verification systems produce uncalibrated scores at their output. That is, while these scores contain valuable information for separating same-speaker from different-speaker trials, they cannot be interpreted in absolute terms, only relative to their distribution. A calibration stage is usually applied to the output of these systems to convert the scores into useful absolute measures that can be interpreted and reliably thresholded to make decisions. In this keynote, we will review the definition of calibration, present ways to measure it, discuss when and why we should care about it, and show different methods that can be used to fix calibration when necessary.

Luciana Ferrer is a researcher at the Computer Science Institute (ICC, for its acronym in Spanish), affiliated with the University of Buenos Aires (UBA) and the National Scientific and Technical Research Council (CONICET), Argentina. Luciana received her Ph.D. degree in Electronic Engineering from Stanford University, USA, in 2009, and her Electronic Engineering degree from the University of Buenos Aires, Argentina, in 2001. Her primary research focus is machine learning applied to speech processing tasks.
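To make the idea of a calibration stage concrete, below is a minimal sketch of one common approach: affine (logistic-regression) calibration, which maps a raw score s to a*s + b so that the result can be read as a log-likelihood ratio and thresholded directly. This is only an illustration under assumed toy data, not necessarily the specific methods covered in the talk; the function name and hyperparameters are hypothetical.

```python
import math
import random

def calibrate_affine(scores, labels, lr=0.05, epochs=3000):
    """Fit an affine map s -> a*s + b by minimizing binary cross-entropy
    (logistic regression). The calibrated score a*s + b can then be
    interpreted as a log-likelihood ratio for the same-speaker hypothesis.
    This is a plain gradient-descent sketch, not a production trainer."""
    a, b = 1.0, 0.0
    n = len(scores)
    for _ in range(epochs):
        ga = gb = 0.0
        for s, y in zip(scores, labels):
            p = 1.0 / (1.0 + math.exp(-(a * s + b)))  # sigmoid of calibrated score
            ga += (p - y) * s / n                      # gradient w.r.t. scale a
            gb += (p - y) / n                          # gradient w.r.t. offset b
        a -= lr * ga
        b -= lr * gb
    return a, b

# Toy uncalibrated scores: same-speaker trials score higher on average.
random.seed(0)
target = [random.gauss(2.0, 1.0) for _ in range(200)]   # same-speaker trials
nontarget = [random.gauss(-1.0, 1.0) for _ in range(200)]  # different-speaker trials
scores = target + nontarget
labels = [1] * 200 + [0] * 200

a, b = calibrate_affine(scores, labels)
calibrated = [a * s + b for s in scores]
```

After calibration, a fixed threshold of 0 on the calibrated score corresponds to the Bayes decision under equal priors and costs, which is exactly the kind of absolute, interpretable operating point that raw scores do not provide.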