000117683 001__ 117683
000117683 005__ 20240319081002.0
000117683 0247_ $$2doi$$a10.1109/TVCG.2022.3150502
000117683 0248_ $$2sideral$$a128178
000117683 037__ $$aART-2022-128178
000117683 041__ $$aeng
000117683 100__ $$0(orcid)0000-0002-0073-6398$$aMartin Serrano, D.$$uUniversidad de Zaragoza
000117683 245__ $$aScanGAN360: a generative model of realistic scanpaths for 360° images
000117683 260__ $$c2022
000117683 5060_ $$aAccess copy available to the general public$$fUnrestricted
000117683 5203_ $$aUnderstanding and modeling the dynamics of human gaze behavior in 360° environments is crucial for creating, improving, and developing emerging virtual reality applications. However, recruiting human observers and acquiring enough data to analyze their behavior when exploring virtual environments requires complex hardware and software setups, and can be time-consuming. Being able to generate virtual observers can help overcome this limitation, and thus stands as an open problem in this medium. Particularly, generative adversarial approaches could alleviate this challenge by generating a large number of scanpaths that reproduce human behavior when observing new scenes, essentially mimicking virtual observers. However, existing methods for scanpath generation do not adequately predict realistic scanpaths for 360° images. We present ScanGAN360, a new generative adversarial approach to address this problem. We propose a novel loss function based on dynamic time warping and tailor our network to the specifics of 360° images. The quality of our generated scanpaths outperforms competing approaches by a large margin, and is almost on par with the human baseline. ScanGAN360 allows fast simulation of large numbers of virtual observers, whose behavior mimics real users, enabling a better understanding of gaze behavior, facilitating experimentation, and aiding novel applications in virtual reality and beyond.
000117683 536__ $$9info:eu-repo/grantAgreement/EC/H2020/682080/EU/Intuitive editing of visual appearance from real-world datasets/CHAMELEON$$9This project has received funding from the European Union’s Horizon 2020 research and innovation program under grant agreement No H2020 682080-CHAMELEON
000117683 540__ $$9info:eu-repo/semantics/openAccess$$aby-nc-nd$$uhttp://creativecommons.org/licenses/by-nc-nd/3.0/es/
000117683 590__ $$a5.2$$b2022
000117683 592__ $$a1.515$$b2022
000117683 591__ $$aCOMPUTER SCIENCE, SOFTWARE ENGINEERING$$b15 / 108 = 0.139$$c2022$$dQ1$$eT1
000117683 593__ $$aComputer Graphics and Computer-Aided Design$$c2022$$dQ1
000117683 593__ $$aSoftware$$c2022$$dQ1
000117683 593__ $$aSignal Processing$$c2022$$dQ1
000117683 593__ $$aComputer Vision and Pattern Recognition$$c2022$$dQ1
000117683 594__ $$a10.5$$b2022
000117683 655_4 $$ainfo:eu-repo/semantics/article$$vinfo:eu-repo/semantics/acceptedVersion
000117683 700__ $$0(orcid)0000-0002-7796-3177$$aSerrano Pacheu, A.$$uUniversidad de Zaragoza
000117683 700__ $$aBergman, A. W.
000117683 700__ $$aWetzstein, G.
000117683 700__ $$0(orcid)0000-0003-0060-7278$$aMasia Corcoy, B.$$uUniversidad de Zaragoza
000117683 7102_ $$15007$$2570$$aUniversidad de Zaragoza$$bDpto. Informát.Ingenie.Sistms.$$cÁrea Lenguajes y Sistemas Inf.
000117683 773__ $$g28, 5 (2022), 2003-2013$$pIEEE trans. vis. comput. graph.$$tIEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS$$x1077-2626
000117683 8564_ $$s18425512$$uhttps://zaguan.unizar.es/record/117683/files/texto_completo.pdf$$yPostprint
000117683 8564_ $$s3357956$$uhttps://zaguan.unizar.es/record/117683/files/texto_completo.jpg?subformat=icon$$xicon$$yPostprint
000117683 909CO $$ooai:zaguan.unizar.es:117683$$particulos$$pdriver
000117683 951__ $$a2024-03-18-14:14:38
000117683 980__ $$aARTICLE
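Note: the abstract above (field 5203) mentions a loss function based on dynamic time warping (DTW). As an illustrative sketch only, and not the paper's actual (differentiable) loss or code, the classic DTW distance between two 2-D scanpaths can be computed with standard dynamic programming:

```python
import numpy as np

def dtw_distance(p, q):
    """Classic dynamic-time-warping distance between two scanpaths,
    each given as an (n, 2) array of gaze points.

    Illustrative sketch only; the paper uses a DTW-based loss tailored
    to 360° images, whose exact formulation is not reproduced here."""
    n, m = len(p), len(q)
    # D[i, j] holds the minimal accumulated cost of aligning p[:i] with q[:j].
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(p[i - 1] - q[j - 1])  # pointwise Euclidean cost
            # Extend the cheapest of: insertion, deletion, or match.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

Identical scanpaths yield a distance of zero, and the alignment tolerates local timing differences between trajectories, which is why DTW-style objectives suit scanpath comparison.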