000127930 001__ 127930
000127930 005__ 20231023120955.0
000127930 0247_ $$2doi$$a10.3390/app13169062
000127930 0248_ $$2sideral$$a135096
000127930 037__ $$aART-2023-135096
000127930 041__ $$aeng
000127930 100__ $$aPastor, Miguel A.
000127930 245__ $$aCross-corpus training strategy for speech emotion recognition using self-supervised representations
000127930 260__ $$c2023
000127930 5060_ $$aAccess copy available to the general public$$fUnrestricted
000127930 5203_ $$aSpeech Emotion Recognition (SER) plays a crucial role in applications involving human-machine interaction. However, the scarcity of suitable emotional speech datasets presents a major challenge for accurate SER systems. Deep Neural Network (DNN)-based solutions currently in use require substantial labelled data for successful training. Previous studies have proposed strategies to expand the training set in this framework by leveraging available emotion speech corpora. This paper assesses the impact of a cross-corpus training extension for an SER system using self-supervised (SS) representations, namely HuBERT and WavLM. The feasibility of training systems with just a few minutes of in-domain audio is also analyzed. The experimental results demonstrate that augmenting the training set with EmoDB (German), RAVDESS, and CREMA-D (English) datasets leads to improved SER accuracy on the IEMOCAP dataset. By combining a cross-corpus training extension and SS representations, state-of-the-art performance is achieved. These findings suggest that the cross-corpus strategy effectively addresses the scarcity of labelled data and enhances the performance of SER systems.
000127930 536__ $$9info:eu-repo/grantAgreement/ES/AEI/PDC2021-120846-C41$$9info:eu-repo/grantAgreement/ES/AEI/PID2021-126061OB-C44$$9info:eu-repo/grantAgreement/ES/DGA/T36-20R$$9info:eu-repo/grantAgreement/EC/H2020/101007666/EU/Exchanges for SPEech ReseArch aNd TechnOlogies/ESPERANTO$$9This project has received funding from the European Union’s Horizon 2020 research and innovation program under grant agreement No H2020 101007666-ESPERANTO
000127930 540__ $$9info:eu-repo/semantics/openAccess$$aby$$uhttp://creativecommons.org/licenses/by/3.0/es/
000127930 655_4 $$ainfo:eu-repo/semantics/article$$vinfo:eu-repo/semantics/publishedVersion
000127930 700__ $$aRibas, Dayana
000127930 700__ $$0(orcid)0000-0002-3886-7748$$aOrtega, Alfonso$$uUniversidad de Zaragoza
000127930 700__ $$0(orcid)0000-0001-5803-4316$$aMiguel, Antonio$$uUniversidad de Zaragoza
000127930 700__ $$0(orcid)0000-0001-9137-4013$$aLleida, Eduardo$$uUniversidad de Zaragoza
000127930 7102_ $$15008$$2800$$aUniversidad de Zaragoza$$bDpto. Ingeniería Electrón.Com.$$cÁrea Teoría Señal y Comunicac.
000127930 773__ $$g13, 16 (2023), 9062 [15 pp]$$pAppl. sci.$$tApplied Sciences (Switzerland)$$x2076-3417
000127930 8564_ $$s1093005$$uhttps://zaguan.unizar.es/record/127930/files/texto_completo.pdf$$yVersión publicada
000127930 8564_ $$s2814488$$uhttps://zaguan.unizar.es/record/127930/files/texto_completo.jpg?subformat=icon$$xicon$$yVersión publicada
000127930 909CO $$ooai:zaguan.unizar.es:127930$$particulos$$pdriver
000127930 951__ $$a2023-10-23-11:05:41
000127930 980__ $$aARTICLE