Cross-corpus training strategy for speech emotion recognition using self-supervised representations
H2020 Funding / H2020 Funds
Abstract: Speech Emotion Recognition (SER) plays a crucial role in applications involving human-machine interaction. However, the scarcity of suitable emotional speech datasets presents a major challenge for building accurate SER systems. The Deep Neural Network (DNN)-based solutions currently in use require substantial labelled data for successful training. Previous studies have proposed strategies to expand the training set in this framework by leveraging available emotional speech corpora. This paper assesses the impact of a cross-corpus training extension for an SER system using self-supervised (SS) representations, namely HuBERT and WavLM. The feasibility of training systems with just a few minutes of in-domain audio is also analyzed. The experimental results demonstrate that augmenting the training set with the EmoDB (German), RAVDESS, and CREMA-D (English) datasets leads to improved SER accuracy on the IEMOCAP dataset. By combining the cross-corpus training extension with SS representations, state-of-the-art performance is achieved. These findings suggest that the cross-corpus strategy effectively addresses the scarcity of labelled data and enhances the performance of SER systems.
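
As a rough illustration of the approach summarized above (a sketch, not the authors' actual pipeline), the Python snippet below extracts frame-level representations from a pretrained WavLM encoder and mean-pools them into an utterance embedding fed to a small emotion classifier. The model identifier, the mean-pooling strategy, the head dimensions, and the four-class label set are assumptions introduced only for this example.

# Hypothetical sketch: utterance-level SER on top of a frozen, pretrained
# WavLM encoder (one of the SS representations named in the abstract).
# Model ID, pooling strategy, and 4-class head are illustrative assumptions.
import torch
import torch.nn as nn
from transformers import AutoFeatureExtractor, WavLMModel

extractor = AutoFeatureExtractor.from_pretrained("microsoft/wavlm-base-plus")
encoder = WavLMModel.from_pretrained("microsoft/wavlm-base-plus")
encoder.eval()  # keep the self-supervised encoder frozen in this sketch

class PooledEmotionClassifier(nn.Module):
    """Mean-pools WavLM hidden states and maps them to emotion logits."""
    def __init__(self, hidden_size: int = 768, num_emotions: int = 4):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(hidden_size, 256),
            nn.ReLU(),
            nn.Linear(256, num_emotions),
        )

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, frames, hidden) -> (batch, num_emotions)
        return self.head(hidden_states.mean(dim=1))

# One second of dummy 16 kHz audio stands in for an utterance drawn from
# any of the pooled corpora (IEMOCAP, EmoDB, RAVDESS, CREMA-D).
waveform = torch.zeros(16000)
inputs = extractor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    hidden = encoder(**inputs).last_hidden_state
logits = PooledEmotionClassifier()(hidden)

In a cross-corpus setting of this kind, the classifier head would simply be trained on utterances pooled from all corpora mapped to a shared label set, while the SS encoder supplies language- and corpus-agnostic features.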
Language: English
DOI: 10.3390/app13169062
Year: 2023
Published in: Applied Sciences (Switzerland) 13, 16 (2023), 9062 [15 pp.]
ISSN: 2076-3417

Funding: info:eu-repo/grantAgreement/ES/AEI/PDC2021-120846-C41
Funding: info:eu-repo/grantAgreement/ES/AEI/PID2021-126061OB-C44
Funding: info:eu-repo/grantAgreement/ES/DGA/T36-20R
Funding: info:eu-repo/grantAgreement/EC/H2020/101007666/EU/Exchanges for SPEech ReseArch aNd TechnOlogies/ESPERANTO
Type and form: Article (Published version)
Area (Department): Área Teoría Señal y Comunicac. (Dpto. Ingeniería Electrón.Com.)
Exported from SIDERAL (2023-10-23-11:05:41)



This article is in the following collections: articulos



Record created 2023-10-23, last modified 2023-10-23


Published version: PDF