Abstract: In this paper, we explore an approach based on memory layers and multi-head attention mechanisms to efficiently improve the performance of text-dependent speaker verification (SV) systems. Most widely used SV systems based on Deep Neural Networks (DNN) extract the utterance embedding by average pooling over the temporal dimension after processing. Unlike previous works, we exploit the phonetic knowledge needed for text-dependent SV by combining the temporal attention of multiple parallel heads with phonetic embeddings extracted from a phonetic classification network; these embeddings guide the attention mechanism, playing the role of a positional embedding. The addition of a memory layer to a text-dependent SV system was tested on the RSR2015-part II and DeepMine-part I databases, where in both cases it outperformed the baseline result and a reference system based on the same transformer network without the memory layer.
Language: English
DOI: 10.1109/ICASSP39728.2021.9414859
Year: 2021
Published in: Proceedings - ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing 2021 (2021), 6154-6158
ISSN: 0736-7791
Funding: info:eu-repo/grantAgreement/ES/DGA/T36-20R
Funding: info:eu-repo/grantAgreement/ES/MINECO/TIN2017-85854-C4-1-R
Type and form: Article (postprint)
Area (Department): Área Teoría Señal y Comunicac. (Dpto. Ingeniería Electrón.Com.)
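The pooling scheme described in the abstract, where multiple attention heads summarize the frame sequence and phonetic embeddings act like a positional embedding, can be illustrated with a minimal NumPy sketch. This is an assumption-laden simplification, not the paper's exact architecture: the additive phonetic guidance, the single projection matrix `weights`, and the head count are all illustrative choices.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax along the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multihead_attentive_pooling(frames, phonetic, weights, num_heads=4):
    """Pool a (T, D) sequence of frame features into one utterance
    embedding using num_heads parallel attention heads over time.

    The phonetic embeddings are simply added to the frames before
    scoring, standing in for the positional-embedding role described
    in the abstract (a hypothetical simplification of the mechanism).
    frames:   (T, D) frame-level features
    phonetic: (T, D) phonetic embeddings, one per frame
    weights:  (D, num_heads) scoring projection, one column per head
    returns:  (num_heads * D,) concatenated head summaries
    """
    guided = frames + phonetic              # phonetic guidance
    scores = guided @ weights               # (T, num_heads)
    attn = softmax(scores, axis=0)          # per-head distribution over time
    # each head yields a (D,) weighted sum of frames; concatenate all heads
    return np.concatenate([attn[:, h] @ frames for h in range(num_heads)])
```

Replacing the conventional average pooling with this attentive pooling lets each head focus on different phonetic regions of the fixed passphrase, which is what makes the approach attractive for text-dependent verification.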