000149148 001__ 149148
000149148 005__ 20250126114555.0
000149148 0247_ $$2doi$$a10.1109/LRA.2020.2967274
000149148 0248_ $$2sideral$$a117191
000149148 037__ $$aART-2020-117191
000149148 041__ $$aeng
000149148 100__ $$0(orcid)0000-0002-3355-6780$$aFernandez-Labrador, Clara
000149148 245__ $$aCorners for Layout: End-to-End Layout Recovery from 360 Images
000149148 260__ $$c2020
000149148 5060_ $$aAccess copy available to the general public$$fUnrestricted
000149148 5203_ $$aThe problem of 3D layout recovery in indoor scenes has been a core research topic for over a decade. However, several major challenges remain unsolved. Among the most relevant ones, most state-of-the-art methods make implicit or explicit assumptions about the scene, e.g. box-shaped or Manhattan layouts. Also, current methods are computationally expensive and not suitable for real-time applications such as robot navigation and AR/VR. In this work we present CFL (Corners for Layout), the first end-to-end model that predicts layout corners for 3D layout recovery on 360° images. Our experimental results show that we outperform the state of the art, making fewer assumptions about the scene than other works and at lower cost. We also show that our model generalizes better to camera position variations than conventional approaches by using EquiConvs, a convolution applied directly on the spherical projection and hence invariant to equirectangular distortions.
000149148 536__ $$9info:eu-repo/grantAgreement/ES/AEI-FEDER/RTI2018-096903-B-I00$$9info:eu-repo/grantAgreement/ES/DGA/T45-17R$$9info:eu-repo/grantAgreement/ES/MCIU-AEI-FEDER/PGC2018-096367-B-I00
000149148 540__ $$9info:eu-repo/semantics/openAccess$$aAll rights reserved$$uhttp://www.europeana.eu/rights/rr-f/
000149148 590__ $$a3.741$$b2020
000149148 591__ $$aROBOTICS$$b9 / 28 = 0.321$$c2020$$dQ2$$eT1
000149148 592__ $$a1.123$$b2020
000149148 593__ $$aArtificial Intelligence$$c2020$$dQ1
000149148 593__ $$aBiomedical Engineering$$c2020$$dQ1
000149148 593__ $$aComputer Science Applications$$c2020$$dQ1
000149148 593__ $$aMechanical Engineering$$c2020$$dQ1
000149148 593__ $$aControl and Optimization$$c2020$$dQ1
000149148 593__ $$aControl and Systems Engineering$$c2020$$dQ1
000149148 593__ $$aHuman-Computer Interaction$$c2020$$dQ1
000149148 593__ $$aComputer Vision and Pattern Recognition$$c2020$$dQ1
000149148 655_4 $$ainfo:eu-repo/semantics/article$$vinfo:eu-repo/semantics/submittedVersion
000149148 700__ $$aFacil, José M.$$uUniversidad de Zaragoza
000149148 700__ $$0(orcid)0000-0002-8949-2632$$aPerez-Yus, Alejandro$$uUniversidad de Zaragoza
000149148 700__ $$aDemonceaux, Cédric
000149148 700__ $$0(orcid)0000-0003-1368-1151$$aCivera, Javier$$uUniversidad de Zaragoza
000149148 700__ $$0(orcid)0000-0001-5209-2267$$aGuerrero, José J.$$uUniversidad de Zaragoza
000149148 7102_ $$15007$$2520$$aUniversidad de Zaragoza$$bDpto. Informát.Ingenie.Sistms.$$cÁrea Ingen.Sistemas y Automát.
000149148 773__ $$g5, 2 (2020), 1255-1262$$pIEEE Robot. autom. let.$$tIEEE Robotics and Automation Letters$$x2377-3766
000149148 8564_ $$s1070789$$uhttps://zaguan.unizar.es/record/149148/files/texto_completo.pdf$$yPreprint
000149148 8564_ $$s3127822$$uhttps://zaguan.unizar.es/record/149148/files/texto_completo.jpg?subformat=icon$$xicon$$yPreprint
000149148 909CO $$ooai:zaguan.unizar.es:149148$$particulos$$pdriver
000149148 951__ $$a2025-01-26-10:47:29
000149148 980__ $$aARTICLE