Title: Improving Automatic Speech Recognition Containing Additive Noise Using Deep Denoising Autoencoders of LSTM Networks
Type: conference paper
Date issued: 2016
Date added to repository: 2022-03-28
Language: English
DOI: 10.1007/978-3-319-43958-7_42
ISBN: 978-3-319-43958-7
Publisher link: https://link.springer.com/chapter/10.1007/978-3-319-43958-7_42
Handle: https://hdl.handle.net/10669/86306
Series: Part of the Lecture Notes in Computer Science book series (LNCS, volume 9811).
Keywords: Long short-term memory (LSTM); Deep learning; Denoising autoencoders

Abstract: Automatic speech recognition (ASR) systems suffer from performance degradation under noisy conditions. Recent work using deep neural networks to denoise spectral input features for robust ASR has proved successful. In particular, Long Short-Term Memory (LSTM) autoencoders have outperformed other state-of-the-art denoising systems when applied to the MFCCs of a speech signal. In this paper we also consider denoising LSTM autoencoders (DLSTMAs), but instead use three different DLSTMAs and apply them to the MFCCs, the fundamental frequency, and the energy features, respectively. Results are given for several kinds of additive noise at different intensity levels, and show how this collection of DLSTMAs improves ASR performance in comparison with a single LSTM autoencoder.
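
The abstract describes training one denoising LSTM autoencoder per feature stream (MFCCs, fundamental frequency, energy), each mapping noisy feature sequences to their clean counterparts. The sketch below is not the authors' implementation; it is a minimal illustration in Keras, and the feature dimensions, layer sizes, and variable names (build_dlstma, dlstma_mfcc, etc.) are assumptions chosen only to make the idea concrete.

```python
# Illustrative sketch (not the paper's code): a denoising LSTM autoencoder
# per feature stream, trained to reconstruct clean frames from noisy ones.
from tensorflow import keras
from tensorflow.keras import layers

def build_dlstma(feature_dim, hidden_units=64):
    """Sequence-to-sequence LSTM autoencoder for one feature stream.

    feature_dim and hidden_units are assumed values, not taken from the paper.
    """
    noisy_in = keras.Input(shape=(None, feature_dim))              # (time, features)
    encoded = layers.LSTM(hidden_units, return_sequences=True)(noisy_in)
    denoised = layers.TimeDistributed(layers.Dense(feature_dim))(encoded)
    model = keras.Model(noisy_in, denoised)
    model.compile(optimizer="adam", loss="mse")                    # target = clean features
    return model

# One autoencoder per stream, following the abstract's description:
# MFCCs, fundamental frequency (F0), and energy (dimensions are assumptions).
dlstma_mfcc = build_dlstma(feature_dim=13)
dlstma_f0 = build_dlstma(feature_dim=1)
dlstma_energy = build_dlstma(feature_dim=1)

# Each model would be trained on (noisy, clean) pairs for its stream, e.g.:
# dlstma_mfcc.fit(noisy_mfcc, clean_mfcc, epochs=20, batch_size=32)
```

The denoised outputs of the three autoencoders would then be recombined into the feature vector fed to the ASR system, which is the comparison the paper reports against a single LSTM autoencoder.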