000063150 001__ 63150
000063150 005__ 20190709135506.0
000063150 0247_ $$2doi$$a10.1186/s13636-017-0119-z
000063150 0248_ $$2sideral$$a101866
000063150 037__ $$aART-2017-101866
000063150 041__ $$aeng
000063150 100__ $$aTejedor, J.
000063150 245__ $$aALBAYZIN 2016 spoken term detection evaluation: an international open competitive evaluation in Spanish
000063150 260__ $$c2017
000063150 5060_ $$aAccess copy available to the general public$$fUnrestricted
000063150 5203_ $$aWithin search-on-speech, Spoken Term Detection (STD) aims to retrieve data from a speech repository given a textual representation of a search term. This paper presents an international open evaluation for search-on-speech based on STD in Spanish, along with an analysis of the results. The evaluation was designed carefully so that several analyses of the main results could be carried out. The evaluation consists of retrieving the speech files that contain the search terms, providing their start and end times, and a score that reflects the confidence assigned to each detection. Two different Spanish speech databases were employed in the evaluation: the MAVIR database, which comprises a set of talks from workshops, and the EPIC database, which comprises a set of European Parliament sessions in Spanish. We present the evaluation itself, both databases, the evaluation metric, the systems submitted to the evaluation, the results, and a detailed discussion. Five different research groups took part in the evaluation, and ten different systems were submitted in total. We compare the submitted systems and present an in-depth analysis based on several search term properties (term length, within-vocabulary/out-of-vocabulary terms, single-word/multi-word terms, and native (Spanish)/foreign terms).
000063150 536__ $$9info:eu-repo/grantAgreement/ES/MINECO/TEC2015-67163-C2-1-R$$9info:eu-repo/grantAgreement/ES/MINECO/TEC2015-68172-C2-1-P$$9info:eu-repo/grantAgreement/ES/MINECO/TIN2014-54288-C4-1-R
000063150 540__ $$9info:eu-repo/semantics/openAccess$$aby$$uhttp://creativecommons.org/licenses/by/3.0/es/
000063150 590__ $$a3.057$$b2017
000063150 591__ $$aENGINEERING, ELECTRICAL & ELECTRONIC$$b62 / 260 = 0.238$$c2017$$dQ1$$eT1
000063150 591__ $$aACOUSTICS$$b4 / 31 = 0.129$$c2017$$dQ1$$eT1
000063150 592__ $$a0.337$$b2017
000063150 593__ $$aElectrical and Electronic Engineering$$c2017$$dQ2
000063150 593__ $$aAcoustics and Ultrasonics$$c2017$$dQ2
000063150 655_4 $$ainfo:eu-repo/semantics/article$$vinfo:eu-repo/semantics/publishedVersion
000063150 700__ $$aToledano, D.T.
000063150 700__ $$aLopez-Otero, P.
000063150 700__ $$aDocio-Fernandez, L.
000063150 700__ $$aSerrano, L.
000063150 700__ $$aHernaez, I.
000063150 700__ $$aCoucheiro-Limeres, A.
000063150 700__ $$aFerreiros, J.
000063150 700__ $$0(orcid)0000-0002-0261-3877$$aOlcoz, J.$$uUniversidad de Zaragoza
000063150 700__ $$aLlombart, J.
000063150 7102_ $$15008$$2800$$aUniversidad de Zaragoza$$bDpto. Ingeniería Electrón.Com.$$cÁrea Teoría Señal y Comunicac.
000063150 773__ $$g2017, 1 (2017), [23 pp]$$pEURASIP j. audio, speech music. process.$$tEURASIP JOURNAL ON AUDIO SPEECH AND MUSIC PROCESSING$$x1687-4714
000063150 8564_ $$s2157837$$uhttps://zaguan.unizar.es/record/63150/files/texto_completo.pdf$$yPublished version
000063150 8564_ $$s107318$$uhttps://zaguan.unizar.es/record/63150/files/texto_completo.jpg?subformat=icon$$xicon$$yPublished version
000063150 909CO $$ooai:zaguan.unizar.es:63150$$particulos$$pdriver
000063150 951__ $$a2019-07-09-11:48:33
000063150 980__ $$aARTICLE