Classification of nonverbal human produced audio events: A pilot study
Bouserhal, Rachel E.
Sarria Paja, Milton
The accurate classification of nonverbal human-produced audio events opens the door to numerous applications beyond health monitoring. Voluntary events, such as tongue clicking and teeth chattering, may lead to a novel means of silent interface command. Involuntary events, such as coughing and clearing the throat, may advance the current state of the art in hearing health research. The challenge for such applications is the balance between the processing capabilities of a small intra-aural device and the accuracy of classification. In this pilot study, 10 nonverbal audio events are captured inside the ear canal blocked by an intra-aural device. The performance of three classifiers is investigated: Gaussian Mixture Model (GMM), Support Vector Machine (SVM), and Multi-Layer Perceptron (MLP). Each classifier is trained using three different feature vector structures constructed from the mel-frequency cepstral coefficients (MFCCs) and their derivatives. Fusion of the MFCCs with the auditory-inspired amplitude modulation features (AAMF) is also investigated. Classification is compared between binaural and monaural training sets, as well as between noisy and clean conditions. The highest accuracy, 75.45%, is achieved using the GMM classifier with the binaural MFCC+AAMF clean training set. An accuracy of 73.47% is achieved by training and testing the classifier on the combined binaural clean and noisy dataset.
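To make the feature and classifier pipeline concrete, the sketch below illustrates one plausible form of the GMM branch described above: MFCCs with first- and second-order derivatives as frame-level features, and one GMM per event class scored by average frame log-likelihood. This is a minimal sketch, not the authors' implementation; the use of librosa and scikit-learn, the 13-coefficient setting, and the mixture count are assumptions, and the AAMF fusion, SVM, and MLP branches are not shown.

```python
# Minimal sketch of MFCC(+delta) feature extraction and per-class GMM scoring.
# Assumptions (not taken from the paper): librosa/scikit-learn, n_mfcc=13,
# and 8 diagonal-covariance Gaussian components per class.
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture


def mfcc_features(signal, sr, n_mfcc=13):
    """Return a (frames, 3 * n_mfcc) matrix of MFCCs with their first and second derivatives."""
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)
    d1 = librosa.feature.delta(mfcc)            # first derivative (delta)
    d2 = librosa.feature.delta(mfcc, order=2)   # second derivative (delta-delta)
    return np.vstack([mfcc, d1, d2]).T


def train_gmms(features_by_class, n_components=8):
    """Fit one diagonal-covariance GMM per nonverbal audio-event class."""
    models = {}
    for label, clips in features_by_class.items():
        gmm = GaussianMixture(n_components=n_components,
                              covariance_type="diag", random_state=0)
        gmm.fit(np.vstack(clips))  # pool all frames from this class's training clips
        models[label] = gmm
    return models


def classify(models, features):
    """Assign the class whose GMM yields the highest average frame log-likelihood."""
    scores = {label: gmm.score_samples(features).mean() for label, gmm in models.items()}
    return max(scores, key=scores.get)
```

In such a setup, binaural training would simply pool (or concatenate) features captured from both ear canals, whereas monaural training would use one channel only; the paper's comparison of these conditions is independent of the particular toolkit sketched here.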