000131266 001__ 131266
000131266 005__ 20240207154753.0
000131266 0247_ $$2doi$$a10.1109/ICRA48891.2023.10161142
000131266 0248_ $$2sideral$$a136755
000131266 037__ $$aART-2023-136755
000131266 041__ $$aspa
000131266 100__ $$0(orcid)0000-0003-2674-4844$$aBerenguel-Baeta, Bruno$$uUniversidad de Zaragoza
000131266 245__ $$aFreDSNet: joint monocular depth and semantic segmentation with fast fourier convolutions from single panoramas
000131266 260__ $$c2023
000131266 5060_ $$aAccess copy available to the general public$$fUnrestricted
000131266 5203_ $$aIn this work we present FreDSNet, a deep learning solution that obtains semantic 3D understanding of indoor environments from single panoramas. Omnidirectional images offer task-specific advantages for scene understanding problems because they provide 360-degree contextual information about the entire environment. However, the inherent characteristics of omnidirectional images introduce additional difficulties for accurate object detection and segmentation, as well as for depth estimation. To overcome these problems, we exploit convolutions in the frequency domain, obtaining a wider receptive field in each convolutional layer. These convolutions allow us to leverage the whole contextual information of omnidirectional images. FreDSNet is the first network that jointly provides monocular depth estimation and semantic segmentation from a single panoramic image by exploiting fast Fourier convolutions. Our experiments show that FreDSNet performs slightly better than the only state-of-the-art method that obtains both semantic segmentation and depth estimation from panoramas. The FreDSNet code is publicly available at https://github.com/Sbrunoberenguel/FreDSNet
000131266 536__ $$9info:eu-repo/grantAgreement/ES/MICINN-AEI/PID2021-125209OB-I00$$9info:eu-repo/grantAgreement/EUR/MICINN/TED2021-129410B-I00$$9info:eu-repo/grantAgreement/ES/UZ/JIUZ-2021-TEC-01
000131266 540__ $$9info:eu-repo/semantics/openAccess$$aAll rights reserved$$uhttp://www.europeana.eu/rights/rr-f/
000131266 655_4 $$ainfo:eu-repo/semantics/conferenceObject$$vinfo:eu-repo/semantics/acceptedVersion
000131266 700__ $$0(orcid)0000-0002-8479-1748$$aBermúdez-Cameo, Jesús$$uUniversidad de Zaragoza
000131266 700__ $$0(orcid)0000-0001-5209-2267$$aGuerrero, José J.$$uUniversidad de Zaragoza
000131266 7102_ $$15007$$2520$$aUniversidad de Zaragoza$$bDpto. Informát.Ingenie.Sistms.$$cÁrea Ingen.Sistemas y Automát.
000131266 773__ $$g(2023), 2152-4092$$pIEEE Int. conf. robot. autom.$$tIEEE International Conference on Robotics and Automation$$x2152-4092
000131266 8564_ $$s5183536$$uhttps://zaguan.unizar.es/record/131266/files/texto_completo.pdf$$yPostprint
000131266 8564_ $$s3167885$$uhttps://zaguan.unizar.es/record/131266/files/texto_completo.jpg?subformat=icon$$xicon$$yPostprint
000131266 909CO $$ooai:zaguan.unizar.es:131266$$particulos$$pdriver
000131266 951__ $$a2024-02-07-14:35:52
000131266 980__ $$aARTICLE