<?xml version="1.0" encoding="UTF-8"?>
<collection>
<dc:dc xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:invenio="http://invenio-software.org/elements/1.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/oai_dc/ http://www.openarchives.org/OAI/2.0/oai_dc.xsd"><dc:identifier>doi:10.1016/j.cag.2024.103983</dc:identifier><dc:language>eng</dc:language><dc:creator>Martín, Daniel</dc:creator><dc:creator>Gutiérrez, Diego</dc:creator><dc:creator>Masia, Belén</dc:creator><dc:title>tSPM-Net: A probabilistic spatio-temporal approach for scanpath prediction</dc:title><dc:identifier>ART-2024-139083</dc:identifier><dc:description>Predicting the path followed by a viewer’s eyes when observing an image (a scanpath) is a challenging problem, particularly due to inter- and intra-observer variability and the spatio-temporal dependencies of the visual attention process. Most existing approaches have focused on progressively optimizing the prediction of a gaze point given the previous ones. In this work we instead propose a probabilistic approach, which we call tSPM-Net. We design our method to account for observer variability by means of Bayesian deep learning. In addition, we optimize our model to jointly consider both the spatial and temporal dimensions of scanpaths, using a novel spatio-temporal loss function based on a combination of Kullback–Leibler divergence and dynamic time warping.
Our tSPM-Net yields results that outperform those of current state-of-the-art approaches and are closer to the human baseline, suggesting that our model is able to generate scanpaths whose behavior closely resembles that of real ones.</dc:description><dc:date>2024</dc:date><dc:source>http://zaguan.unizar.es/record/136201</dc:source><dc:doi>10.1016/j.cag.2024.103983</dc:doi><dc:identifier>http://zaguan.unizar.es/record/136201</dc:identifier><dc:identifier>oai:zaguan.unizar.es:136201</dc:identifier><dc:relation>info:eu-repo/grantAgreement/ES/AEI/PID2022-141539NB-I00</dc:relation><dc:relation>info:eu-repo/grantAgreement/ES/DGA/T34-20R</dc:relation><dc:relation>info:eu-repo/grantAgreement/EC/H2020/956585/EU/Predictive Rendering In Manufacture and Engineering/PRIME</dc:relation><dc:relation>This project has received funding from the European Union’s Horizon 2020 research and innovation program under grant agreement No H2020 956585-PRIME</dc:relation><dc:identifier.citation>COMPUTERS &amp; GRAPHICS-UK 122 (2024), 103983 [9 pp.]</dc:identifier.citation><dc:rights>by-nc</dc:rights><dc:rights>https://creativecommons.org/licenses/by-nc/4.0/deed.es</dc:rights><dc:rights>info:eu-repo/semantics/openAccess</dc:rights></dc:dc>

</collection>