000136201 001__ 136201
000136201 005__ 20260217205546.0
000136201 0247_ $$2doi$$a10.1016/j.cag.2024.103983
000136201 0248_ $$2sideral$$a139083
000136201 037__ $$aART-2024-139083
000136201 041__ $$aeng
000136201 100__ $$0(orcid)0000-0002-0073-6398$$aMartín, Daniel$$uUniversidad de Zaragoza
000136201 245__ $$atSPM-Net: A probabilistic spatio-temporal approach for scanpath prediction
000136201 260__ $$c2024
000136201 5060_ $$aAccess copy available to the general public$$fUnrestricted
000136201 5203_ $$aPredicting the path followed by a viewer’s eyes when observing an image (a scanpath) is a challenging problem, particularly due to inter- and intra-observer variability and the spatio-temporal dependencies of the visual attention process. Most existing approaches have focused on progressively optimizing the prediction of a gaze point given the previous ones. In this work we instead propose a probabilistic approach, which we call tSPM-Net. We build our method to account for observer variability by resorting to Bayesian deep learning and a probabilistic formulation. In addition, we optimize our model to jointly consider both the spatial and temporal dimensions of scanpaths, using a novel spatio-temporal loss function based on a combination of Kullback–Leibler divergence and dynamic time warping. Our tSPM-Net yields results that outperform those of current state-of-the-art approaches and are closer to the human baseline, suggesting that our model is able to generate scanpaths whose behavior closely resembles that of real ones.
000136201 536__ $$9info:eu-repo/grantAgreement/ES/AEI/PID2022-141539NB-I00$$9info:eu-repo/grantAgreement/ES/DGA/T34-20R$$9info:eu-repo/grantAgreement/EC/H2020/956585/EU/Predictive Rendering In Manufacture and Engineering/PRIME$$9This project has received funding from the European Union’s Horizon 2020 research and innovation program under grant agreement No H2020 956585-PRIME
000136201 540__ $$9info:eu-repo/semantics/openAccess$$aby-nc$$uhttps://creativecommons.org/licenses/by-nc/4.0/deed.es
000136201 590__ $$a2.8$$b2024
000136201 592__ $$a0.569$$b2024
000136201 591__ $$aCOMPUTER SCIENCE, SOFTWARE ENGINEERING$$b53 / 129 = 0.411$$c2024$$dQ2$$eT2
000136201 593__ $$aComputer Graphics and Computer-Aided Design$$c2024$$dQ2
000136201 593__ $$aComputer Vision and Pattern Recognition$$c2024$$dQ2
000136201 593__ $$aSoftware$$c2024$$dQ2
000136201 593__ $$aEngineering (miscellaneous)$$c2024$$dQ2
000136201 593__ $$aSignal Processing$$c2024$$dQ2
000136201 593__ $$aHuman-Computer Interaction$$c2024$$dQ3
000136201 594__ $$a6.1$$b2024
000136201 655_4 $$ainfo:eu-repo/semantics/article$$vinfo:eu-repo/semantics/publishedVersion
000136201 700__ $$0(orcid)0000-0002-7503-7022$$aGutiérrez, Diego$$uUniversidad de Zaragoza
000136201 700__ $$0(orcid)0000-0003-0060-7278$$aMasia, Belén$$uUniversidad de Zaragoza
000136201 7102_ $$15007$$2570$$aUniversidad de Zaragoza$$bDpto. Informát.Ingenie.Sistms.$$cÁrea Lenguajes y Sistemas Inf.
000136201 773__ $$g122 (2024), 103983 [9 pp.]$$pComput. graph.$$tCOMPUTERS & GRAPHICS-UK$$x0097-8493
000136201 8564_ $$s3181353$$uhttps://zaguan.unizar.es/record/136201/files/texto_completo.pdf$$yVersión publicada
000136201 8564_ $$s2919727$$uhttps://zaguan.unizar.es/record/136201/files/texto_completo.jpg?subformat=icon$$xicon$$yVersión publicada
000136201 909CO $$ooai:zaguan.unizar.es:136201$$particulos$$pdriver
000136201 951__ $$a2026-02-17-20:38:09
000136201 980__ $$aARTICLE