InterSpeech 2021

Pairing Weak with Strong: Twin Models for Defending against Adversarial Attack on Speaker Verification

Zhiyuan Peng (CUHK, China), Xu Li (CUHK, China), Tan Lee (CUHK, China)
The vulnerability of speaker verification (SV) systems to adversarial attack has received wide attention recently, yet simple and effective countermeasures against such attacks remain to be developed. This paper formulates the task of adversarial defense as a problem of attack detection. The detection is made possible by the verification scores from a pair of purposely selected SV models. The twin-model design pairs a fragile model with a relatively robust one; the two models show prominent score inconsistency under adversarial attack. To detect this score inconsistency, a simple one-class classifier is adopted. The classifier is trained on normal speech samples only, which not only bypasses the need to craft adversarial samples but also prevents over-fitting to the crafted samples, and hence makes the detection robust to unseen attacks. Compared to single-model systems, the proposed system shows consistent and significant performance improvement against different attack strategies. The false acceptance rate (FAR) is reduced from over 63.54% to 2.26% under the strongest attack. The approach has practical benefits: there is no need to modify a well-deployed SV model, even if it is publicly known and fully accessible to the adversary. Moreover, it can be combined with existing single-model countermeasures for even stronger defense.
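The abstract's core mechanism can be illustrated with a minimal sketch. Assuming two pre-trained SV scoring functions, `score_fragile` and `score_robust` (hypothetical names standing in for the paper's fragile and robust models), each mapping an (enrollment, test) utterance pair to a similarity score, the detector below fits a one-class boundary on score pairs from normal trials and flags trials that fall outside it. The paper specifies only "a simple one-class classifier"; the use of scikit-learn's OneClassSVM here is an assumption.

```python
# Sketch of twin-model adversarial-attack detection for SV.
# score_fragile / score_robust are hypothetical callables supplied
# by the user; they stand in for the paper's two SV models.
import numpy as np
from sklearn.svm import OneClassSVM

def score_pair(enroll, test, score_fragile, score_robust):
    """2-D feature: verification scores from the fragile and robust models."""
    return np.array([score_fragile(enroll, test),
                     score_robust(enroll, test)])

def fit_detector(normal_trials, score_fragile, score_robust):
    """Train on normal (non-adversarial) trials only, so no adversarial
    samples need to be crafted."""
    X = np.stack([score_pair(e, t, score_fragile, score_robust)
                  for e, t in normal_trials])
    # nu and gamma are assumed hyperparameters, not taken from the paper.
    detector = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale")
    detector.fit(X)
    return detector

def is_adversarial(detector, enroll, test, score_fragile, score_robust):
    """Flag a trial whose (fragile, robust) score pair lies outside the
    region learned from normal speech; -1 marks an outlier in OneClassSVM."""
    x = score_pair(enroll, test, score_fragile, score_robust).reshape(1, -1)
    return detector.predict(x)[0] == -1
```

The design choice behind the 2-D feature is the one stated in the abstract: adversarial perturbations move the fragile model's score sharply while the robust model's score changes little, so adversarial trials land far from the normal-trial score distribution and a one-class boundary suffices to separate them.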