<?xml version="1.0" encoding="UTF-8"?>
<collection>
<dc:dc xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:invenio="http://invenio-software.org/elements/1.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/oai_dc/ http://www.openarchives.org/OAI/2.0/oai_dc.xsd"><dc:identifier>doi:10.1109/TVCG.2023.3320259</dc:identifier><dc:language>eng</dc:language><dc:creator>Malpica, Sandra</dc:creator><dc:creator>Martín, Daniel</dc:creator><dc:creator>Serrano, Ana</dc:creator><dc:creator>Gutiérrez, Diego</dc:creator><dc:creator>Masia, Belén</dc:creator><dc:title>Task-dependent visual behavior in immersive environments: a comparative study of free exploration, memory and visual search</dc:title><dc:identifier>ART-2023-135292</dc:identifier><dc:description>Visual behavior depends on both bottom-up mechanisms, where gaze is driven by the visual conspicuity of the stimuli, and top-down mechanisms, which guide attention towards areas relevant to the viewer's task or goal. Although this is well known, visual attention models often focus on bottom-up mechanisms. Existing works have analyzed the effect of high-level cognitive tasks such as memory or visual search on visual behavior; however, they have often done so with different stimuli, methodologies, metrics and participants, which makes drawing conclusions and comparisons between tasks particularly difficult. In this work we present a systematic study of how different cognitive tasks affect visual behavior, using a novel within-subjects design. Participants performed free exploration, memory and visual search tasks in three different scenes while their eye and head movements were recorded. We found significant, consistent differences between tasks in the distributions of fixations, saccades and head movements. Our findings can provide insights for practitioners and content creators designing task-oriented immersive applications.</dc:description><dc:date>2023</dc:date><dc:source>http://zaguan.unizar.es/record/129667</dc:source><dc:doi>10.1109/TVCG.2023.3320259</dc:doi><dc:identifier>http://zaguan.unizar.es/record/129667</dc:identifier><dc:identifier>oai:zaguan.unizar.es:129667</dc:identifier><dc:relation>info:eu-repo/grantAgreement/ES/AEI/PID2019-105004GB-I00</dc:relation><dc:relation>info:eu-repo/grantAgreement/ES/AEI/PID2022-141539NB-I00</dc:relation><dc:relation>info:eu-repo/grantAgreement/EC/H2020/682080/EU/Intuitive editing of visual appearance from real-world datasets/CHAMELEON</dc:relation><dc:relation>This project has received funding from the European Union’s Horizon 2020 research and innovation program under grant agreement No H2020 682080-CHAMELEON</dc:relation><dc:relation>info:eu-repo/grantAgreement/EC/H2020/956585/EU/Predictive Rendering In Manufacture and Engineering/PRIME</dc:relation><dc:relation>This project has received funding from the European Union’s Horizon 2020 research and innovation program under grant agreement No H2020 956585-PRIME</dc:relation><dc:identifier.citation>IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 29, 11 (2023), 4417-4425</dc:identifier.citation><dc:rights>All rights reserved</dc:rights><dc:rights>http://www.europeana.eu/rights/rr-f/</dc:rights><dc:rights>info:eu-repo/semantics/openAccess</dc:rights></dc:dc>

</collection>