Multi-channel Convolutional Neural Networks for Automatic Detection of Speech Deficits in Cochlear Implant Users

Abstract

This paper proposes a methodology for the automatic detection of speech disorders in cochlear implant users by implementing a multi-channel Convolutional Neural Network. The model is fed with a 2-channel input consisting of two spectrograms computed from the speech signals using Mel-scaled and Gammatone filter banks. Speech recordings of 107 cochlear implant users (aged between 18 and 89 years) and 94 healthy controls (aged between 20 and 64 years) are considered for the tests. According to the results, using 2-channel spectrograms improves the performance of the classifier for the automatic detection of speech impairments in cochlear implant users.
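
As a rough illustration of the two-channel input described above, the sketch below stacks a Mel-scaled spectrogram (computed with librosa) and a Gammatone-based spectrogram into a 2-channel array that feeds a small CNN. The gammatone_spectrogram helper, the filter-bank settings, and the network layout are illustrative assumptions, not the configuration used in the paper.

# Sketch of a 2-channel spectrogram input for a multi-channel CNN.
# All hyperparameters and the helper below are illustrative placeholders,
# not the paper's actual configuration.
import numpy as np
import librosa
import torch
import torch.nn as nn


def gammatone_spectrogram(y, sr, n_bands=64, frame_len=0.025, hop=0.010):
    """Placeholder for a Gammatone filter-bank spectrogram.

    Any Gammatone implementation could be plugged in here; a Mel
    spectrogram is substituted only to keep the sketch self-contained
    and runnable.
    """
    return librosa.feature.melspectrogram(
        y=y, sr=sr, n_mels=n_bands,
        n_fft=int(frame_len * sr), hop_length=int(hop * sr))


def two_channel_input(y, sr, n_bands=64):
    """Stack Mel-scaled and Gammatone spectrograms into a 2-channel tensor."""
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_bands)
    gam = gammatone_spectrogram(y, sr, n_bands=n_bands)
    # Trim both representations to a common number of frames and take the log.
    frames = min(mel.shape[1], gam.shape[1])
    mel = librosa.power_to_db(mel[:, :frames])
    gam = librosa.power_to_db(gam[:, :frames])
    return torch.tensor(np.stack([mel, gam]), dtype=torch.float32)  # (2, bands, frames)


class TwoChannelCNN(nn.Module):
    """Minimal CNN with a 2-channel input for a binary (CI vs. control) decision."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1))
        self.classifier = nn.Linear(32, 2)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))


if __name__ == "__main__":
    y, sr = librosa.load(librosa.example("trumpet"))  # stand-in for a speech recording
    x = two_channel_input(y, sr).unsqueeze(0)         # add batch dimension
    logits = TwoChannelCNN()(x)
    print(logits.shape)  # torch.Size([1, 2])

Stacking the two filter-bank representations as input channels lets the first convolutional layer combine the Mel and Gammatone views of the same recording, which is the core idea behind the multi-channel input described in the abstract.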

Publication
Iberoamerican Congress on Pattern Recognition (CIARP)