Abstract: Understanding and modeling the dynamics of human gaze behavior in 360° environments is crucial for creating, improving, and developing emerging virtual reality applications. However, recruiting human observers and acquiring enough data to analyze their behavior when exploring virtual environments requires complex hardware and software setups, and can be time-consuming. Being able to generate virtual observers can help overcome this limitation, and thus stands as an open problem in this medium. In particular, generative adversarial approaches could alleviate this challenge by generating large numbers of scanpaths that reproduce human behavior when observing new scenes, essentially mimicking virtual observers. However, existing methods for scanpath generation do not adequately predict realistic scanpaths for 360° images. We present ScanGAN360, a new generative adversarial approach to address this problem. We propose a novel loss function based on dynamic time warping and tailor our network to the specifics of 360° images. The quality of our generated scanpaths outperforms competing approaches by a large margin, and is almost on par with the human baseline. ScanGAN360 allows fast simulation of large numbers of virtual observers, whose behavior mimics real users, enabling a better understanding of gaze behavior, facilitating experimentation, and aiding novel applications in virtual reality and beyond.
Language: English
DOI: 10.1109/TVCG.2022.3150502
Year: 2022
Published in: IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 28, 5 (2022), 2003-2013
ISSN: 1077-2626
JCR impact factor: 5.2 (2022)
JCR category: COMPUTER SCIENCE, SOFTWARE ENGINEERING, rank: 15 / 108 = 0.139 (2022) - Q1 - T1
CiteScore impact factor: 10.5 - Computer Science (Q1)
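
Note: the abstract mentions a loss function based on dynamic time warping (DTW) for comparing scanpaths on the sphere. The following Python snippet is only an illustrative sketch of that general idea, not the authors' implementation: it computes a classical DTW alignment cost between two 360° scanpaths using great-circle distances between gaze points. All function names and the random example data are hypothetical, and training a generator with such a loss would in practice require a differentiable (soft) DTW variant.

import numpy as np

def spherical_distance(p, q):
    """Great-circle distance between two gaze points (lat, lon) in radians (haversine formula)."""
    lat1, lon1 = p
    lat2, lon2 = q
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * np.arcsin(np.sqrt(np.clip(a, 0.0, 1.0)))

def dtw_scanpath_distance(path_a, path_b):
    """DTW alignment cost between two scanpaths, each an array of shape [T, 2] of (lat, lon)."""
    n, m = len(path_a), len(path_b)
    acc = np.full((n + 1, m + 1), np.inf)   # accumulated-cost matrix
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = spherical_distance(path_a[i - 1], path_b[j - 1])
            acc[i, j] = cost + min(acc[i - 1, j],      # skip a point in path_b
                                   acc[i, j - 1],      # skip a point in path_a
                                   acc[i - 1, j - 1])  # align the two points
    return acc[n, m]

# Hypothetical usage: compare a generated scanpath against a recorded one.
rng = np.random.default_rng(0)
generated = rng.uniform(low=[-np.pi / 2, -np.pi], high=[np.pi / 2, np.pi], size=(30, 2))
recorded = rng.uniform(low=[-np.pi / 2, -np.pi], high=[np.pi / 2, np.pi], size=(25, 2))
print(dtw_scanpath_distance(generated, recorded))

Because DTW aligns sequences of different lengths and tolerates temporal shifts, a loss of this kind penalizes a generated scanpath for deviating from human trajectories without requiring point-by-point correspondence, which is one plausible reason the abstract highlights it for scanpath generation.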