<?xml version="1.0" encoding="UTF-8"?>
<collection>
<dc:dc xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:invenio="http://invenio-software.org/elements/1.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/oai_dc/ http://www.openarchives.org/OAI/2.0/oai_dc.xsd"><dc:identifier>doi:10.1109/ICASSP.2013.6638988</dc:identifier><dc:language>eng</dc:language><dc:creator>Martinez, D.</dc:creator><dc:creator>Lleida, E.</dc:creator><dc:creator>Ortega, A.</dc:creator><dc:creator>Miguel, A.</dc:creator><dc:title>Prosodic features and formant modeling for an ivector-based language recognition system</dc:title><dc:identifier>ART-2013-84598</dc:identifier><dc:description>The prosody of a language is encoded in syllable length, loudness, and pitch. These attributes make humans perceive rhythm, stress, and intonation in speech. These speech properties vary from language to language, making language classification possible. Formants, on the other hand, are the resonance frequencies of the vocal tract; they depend heavily on the position adopted by the articulatory organs and are especially useful for disambiguating vowel sounds. In this paper, prosodic and formant information are combined to build a generative language identification system based on Gaussian models fed with iVectors. The system is evaluated on the NIST LRE09 database, and the inclusion of formant information gives about 50% relative improvement on the 30 s task over a prosodic system without it. The fusion with a state-of-the-art acoustic system based on shifted delta cepstral coefficients (SDC) shows the complementarity of both approaches.</dc:description><dc:date>2013</dc:date><dc:source>http://zaguan.unizar.es/record/162481</dc:source><dc:doi>10.1109/ICASSP.2013.6638988</dc:doi><dc:identifier>http://zaguan.unizar.es/record/162481</dc:identifier><dc:identifier>oai:zaguan.unizar.es:162481</dc:identifier><dc:relation>info:eu-repo/grantAgreement/ES/MINECO/TIN2011-28169-C05-02</dc:relation><dc:identifier.citation>Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing 2013 (2013), 6847-6851</dc:identifier.citation><dc:rights>All rights reserved</dc:rights><dc:rights>http://www.europeana.eu/rights/rr-f/</dc:rights><dc:rights>info:eu-repo/semantics/closedAccess</dc:rights></dc:dc>

</collection>