Acoustic Scene Classification in Hearing aid using Deep Learning
V.S. Vivek, P. Madhanmohan
Published by the Institute of Electrical and Electronics Engineers (IEEE) Inc.
Pages: 695 - 699
Different audio environments require different settings in a hearing aid to deliver high-quality speech, and manually tuning those settings can be tedious for the wearer. Hearing aids can therefore be equipped with settings that are adjusted automatically based on the audio environment. In this paper we present a simple sound classification system that could be used to switch automatically between hearing aid algorithms according to the auditory scene. Features such as MFCCs, Mel-spectrogram, chroma, spectral contrast, and Tonnetz are extracted from several hours of audio spanning five classes: 'music,' 'noise,' 'speech with noise,' 'silence,' and 'clean speech' for training and testing the network. These features are then classified by a convolutional neural network. We show that our system achieves high precision with only three to five seconds of audio per scene. The algorithm is efficient and has a small memory footprint, making it feasible to implement in a digital hearing aid. © 2020 IEEE.