000132229 001__ 132229 000132229 005__ 20260217205456.0 000132229 0247_ $$2doi$$a10.1007/s00371-023-03206-0 000132229 0248_ $$2sideral$$a137480 000132229 037__ $$aART-2024-137480 000132229 041__ $$aeng 000132229 100__ $$0(orcid)0000-0002-0073-6398$$aMartin, Daniel$$uUniversidad de Zaragoza 000132229 245__ $$aSAL3D: a model for saliency prediction in 3D meshes 000132229 260__ $$c2024 000132229 5060_ $$aAccess copy available to the general public$$fUnrestricted 000132229 5203_ $$aAdvances in virtual and augmented reality have increased the demand for immersive and engaging 3D experiences. To create such experiences, it is crucial to understand visual attention in 3D environments, which is typically modeled by means of saliency maps. While attention in 2D images and traditional media has been widely studied, there is still much to explore in 3D settings. In this work, we propose a deep learning-based model for predicting saliency when viewing 3D objects, which is a first step toward understanding and predicting attention in 3D environments. Previous approaches rely solely on low-level geometric cues or unnatural viewing conditions; in contrast, our model is trained on a dataset of real viewing data that we have manually captured, and which therefore reflects actual human viewing behavior. Our approach outperforms existing state-of-the-art methods and closely approximates the ground-truth data. Our results demonstrate the effectiveness of our approach in predicting attention in 3D objects, which can pave the way for creating more immersive and engaging 3D experiences. 
000132229 536__ $$9info:eu-repo/grantAgreement/ES/AEI/PID2019-105004GB-I00$$9info:eu-repo/grantAgreement/ES/DGA/T34-20R$$9info:eu-repo/grantAgreement/EC/H2020/682080/EU/Intuitive editing of visual appearance from real-world datasets/CHAMELEON$$9This project has received funding from the European Union’s Horizon 2020 research and innovation program under grant agreement No H2020 682080-CHAMELEON$$9info:eu-repo/grantAgreement/EC/H2020/956585/EU/Predictive Rendering In Manufacture and Engineering/PRIME$$9This project has received funding from the European Union’s Horizon 2020 research and innovation program under grant agreement No H2020 956585-PRIME$$9info:eu-repo/grantAgreement/ES/MICINN/PID2022-141766OB-I00 000132229 540__ $$9info:eu-repo/semantics/openAccess$$aby$$uhttps://creativecommons.org/licenses/by/4.0/deed.es 000132229 590__ $$a2.9$$b2024 000132229 592__ $$a0.637$$b2024 000132229 591__ $$aCOMPUTER SCIENCE, SOFTWARE ENGINEERING$$b49 / 129 = 0.38$$c2024$$dQ2$$eT2 000132229 593__ $$aComputer Graphics and Computer-Aided Design$$c2024$$dQ2 000132229 593__ $$aSoftware$$c2024$$dQ2 000132229 593__ $$aComputer Vision and Pattern Recognition$$c2024$$dQ2 000132229 594__ $$a6.0$$b2024 000132229 655_4 $$ainfo:eu-repo/semantics/article$$vinfo:eu-repo/semantics/publishedVersion 000132229 700__ $$aFandos, Andres 000132229 700__ $$0(orcid)0000-0003-0060-7278$$aMasia, Belen$$uUniversidad de Zaragoza 000132229 700__ $$0(orcid)0000-0002-7796-3177$$aSerrano, Ana$$uUniversidad de Zaragoza 000132229 7102_ $$15007$$2570$$aUniversidad de Zaragoza$$bDpto. Informát.Ingenie.Sistms.$$cÁrea Lenguajes y Sistemas Inf. 000132229 773__ $$g40 (2024), 7761–7771$$pVis. Comput.$$tVISUAL COMPUTER$$x0178-2789 000132229 8564_ $$s3524887$$uhttps://zaguan.unizar.es/record/132229/files/texto_completo.pdf$$yVersión publicada 000132229 8564_ $$s2350100$$uhttps://zaguan.unizar.es/record/132229/files/texto_completo.jpg?subformat=icon$$xicon$$yVersión publicada 000132229 909CO $$ooai:zaguan.unizar.es:132229$$particulos$$pdriver 000132229 951__ $$a2026-02-17-20:20:35 000132229 980__ $$aARTICLE