Please use this identifier to cite or link to this item: https://repository.usc.edu.co/handle/20.500.12421/2744
Title: Fusion of auditory inspired amplitude modulation spectrum and cepstral features for whispered and normal speech speaker verification
Authors: Sarria Paja, Milton
Falk, Tiago H.
Keywords: Whispered speech;Speaker verification;Modulation spectrum;Mutual information;System fusion
Issue Date: 25-Mar-2017
Publisher: Academic Press
Abstract: Whispered speech is a natural speaking style that, despite its reduced perceptibility, still conveys relevant information about the intended message (i.e., intelligibility) as well as the speaker's identity and gender. Given the acoustic differences between whispered and normally-phonated speech, however, speech applications trained on the latter but tested with the former exhibit unacceptable performance levels. Within an automated speaker verification task, previous research has shown that i) conventional features (e.g., mel-frequency cepstral coefficients, MFCCs) do not convey sufficient speaker discrimination cues across the two vocal efforts, and ii) multi-condition training, while improving performance for whispered speech, tends to degrade performance for normal speech. In this paper, we aim to tackle both shortcomings by proposing three innovative features which, when fused at the score level, are shown to yield reliable performance for both normal and whispered speech. Overall, relative improvements of 66% and 63% are obtained for whispered and normal speech, respectively, over a baseline system based on MFCCs and multi-condition training.
URI: https://repository.usc.edu.co/handle/20.500.12421/2744
ISSN: 0885-2308
Appears in Collections:Artículos Científicos

Files in This Item:
File: Fusion of auditory inspired amplitude modulation spectrum and cepstral.jpg (139,97 kB, JPEG)


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.