000135931 001__ 135931
000135931 005__ 20240627150549.0
000135931 0247_ $$2doi$$a10.1016/j.jvoice.2024.03.001
000135931 0248_ $$2sideral$$a138888
000135931 037__ $$aART-2024-138888
000135931 041__ $$aeng
000135931 100__ $$aVidal, Jazmin
000135931 245__ $$aAutomatic voice disorder detection from a practical perspective
000135931 260__ $$c2024
000135931 5060_ $$aAccess copy available to the general public$$fUnrestricted
000135931 5203_ $$aVoice disorders, such as dysphonia, are common among the general population. These pathologies often remain untreated until they reach a high level of severity. Assisting the detection of voice disorders could facilitate early diagnosis and subsequent treatment. In this study, we address the practical aspects of automatic voice disorder detection (AVDD). In real-world scenarios, data annotated for voice disorders is usually scarce due to the various challenges involved in the collection and annotation of such data. However, some relatively large datasets are available for a reduced number of domains. In this context, we propose the use of a combination of out-of-domain and in-domain data for training a deep neural network-based AVDD system, and offer guidance on the minimum amount of in-domain data required to achieve acceptable performance. Further, we propose the use of a cost-based metric, the normalized expected cost (EC), to evaluate the performance of AVDD systems in a way that closely reflects the needs of the application. As an added benefit, optimal decisions for the EC can be made in a principled way, as given by Bayes decision theory. Finally, we argue that for medical applications like AVDD, the categorical decisions need to be accompanied by interpretable scores that reflect the confidence of the system. Even very accurate models often produce scores that are not suited for interpretation. Here, we show that such models can be easily improved by adding a calibration stage trained with just a few minutes of in-domain data. The outputs of the resulting calibrated system can then better support practitioners in their decision-making process.
000135931 536__ $$9info:eu-repo/grantAgreement/ES/AEI/PID2021-126061OB-C44$$9info:eu-repo/grantAgreement/ES/DGA/T36-23R$$9info:eu-repo/grantAgreement/EC/H2020/101007666/EU/Exchanges for SPEech ReseArch aNd TechnOlogies/ESPERANTO$$9This project has received funding from the European Union’s Horizon 2020 research and innovation program under grant agreement No H2020 101007666-ESPERANTO
000135931 540__ $$9info:eu-repo/semantics/openAccess$$aAll rights reserved$$uhttp://www.europeana.eu/rights/rr-f/
000135931 655_4 $$ainfo:eu-repo/semantics/article$$vinfo:eu-repo/semantics/acceptedVersion
000135931 700__ $$0(orcid)0000-0003-3813-4998$$aRibas, Dayana
000135931 700__ $$aBonomi, Cyntia
000135931 700__ $$0(orcid)0000-0001-9137-4013$$aLleida, Eduardo$$uUniversidad de Zaragoza
000135931 700__ $$aFerrer, Luciana
000135931 700__ $$0(orcid)0000-0002-3886-7748$$aOrtega, Alfonso$$uUniversidad de Zaragoza
000135931 7102_ $$15008$$2800$$aUniversidad de Zaragoza$$bDpto. Ingeniería Electrón.Com.$$cÁrea Teoría Señal y Comunicac.
000135931 773__ $$g(2024), [16 pp.]$$pJ. voice$$tJOURNAL OF VOICE$$x0892-1997
000135931 8564_ $$s732864$$uhttps://zaguan.unizar.es/record/135931/files/texto_completo.pdf$$yPostprint
000135931 8564_ $$s2244007$$uhttps://zaguan.unizar.es/record/135931/files/texto_completo.jpg?subformat=icon$$xicon$$yPostprint
000135931 909CO $$ooai:zaguan.unizar.es:135931$$particulos$$pdriver
000135931 951__ $$a2024-06-27-13:20:52
000135931 980__ $$aARTICLE