000149147 001__ 149147
000149147 005__ 20251017144550.0
000149147 0247_ $$2doi$$a10.1016/j.imavis.2019.04.008
000149147 0248_ $$2sideral$$a113202
000149147 037__ $$aART-2019-113202
000149147 041__ $$aeng
000149147 100__ $$0(orcid)0000-0002-8949-2632$$aPerez-Yus, Alejandro$$uUniversidad de Zaragoza
000149147 245__ $$aScaled layout recovery with wide field of view RGB-D
000149147 260__ $$c2019
000149147 5060_ $$aAccess copy available to the general public$$fUnrestricted
000149147 5203_ $$aIn this work, we propose a method that integrates depth and fisheye cameras to obtain a wide 3D scene reconstruction with scale in one single shot. The motivation for such integration is to overcome the narrow field of view of consumer RGB-D cameras and the lack of depth and scale information in fisheye cameras. The hybrid camera system we use is easy to build and calibrate, and consumer devices with a similar configuration are already available on the market. With this system, a portion of the scene lies within the shared field of view and provides both color and depth simultaneously. In the rest of the color image, we estimate the depth by recovering the structural information of the scene. Our method finds and ranks corners in the scene by combining line extraction in the color image with the depth information. These corners are used to generate plausible layout hypotheses, which have real-world scale due to the use of depth. The wide-angle camera captures more information from the environment (e.g. the ceiling), which helps to overcome severe occlusions. After an automatic evaluation of the hypotheses, we obtain a scaled 3D model that expands the original depth information with the wide scene reconstruction. Our experiments with real images from both home-made and commercial systems show that our method achieves a high success ratio in different scenarios and that our hybrid camera system outperforms the single color camera set-up while additionally providing scale in one single shot. (C) 2019 Elsevier B.V. All rights reserved.
000149147 536__ $$9info:eu-repo/grantAgreement/ES/MINECO/BES-2013-065834$$9info:eu-repo/grantAgreement/ES/MINECO/DPI2014-61792-EXP
000149147 540__ $$9info:eu-repo/semantics/openAccess$$aAll rights reserved$$uhttp://www.europeana.eu/rights/rr-f/
000149147 590__ $$a3.103$$b2019
000149147 591__ $$aCOMPUTER SCIENCE, THEORY & METHODS$$b23 / 108 = 0.213$$c2019$$dQ1$$eT1
000149147 591__ $$aOPTICS$$b23 / 97 = 0.237$$c2019$$dQ1$$eT1
000149147 591__ $$aCOMPUTER SCIENCE, SOFTWARE ENGINEERING$$b21 / 107 = 0.196$$c2019$$dQ1$$eT1
000149147 591__ $$aENGINEERING, ELECTRICAL & ELECTRONIC$$b87 / 265 = 0.328$$c2019$$dQ2$$eT1
000149147 591__ $$aCOMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE$$b48 / 136 = 0.353$$c2019$$dQ2$$eT2
000149147 592__ $$a1.032$$b2019
000149147 593__ $$aComputer Vision and Pattern Recognition$$c2019$$dQ1
000149147 593__ $$aSignal Processing$$c2019$$dQ1
000149147 593__ $$aElectrical and Electronic Engineering$$c2019$$dQ1
000149147 655_4 $$ainfo:eu-repo/semantics/article$$vinfo:eu-repo/semantics/acceptedVersion
000149147 700__ $$0(orcid)0000-0001-9347-5969$$aLópez-Nicolas, Gonzalo$$uUniversidad de Zaragoza
000149147 700__ $$0(orcid)0000-0001-5209-2267$$aGuerrero, José J.$$uUniversidad de Zaragoza
000149147 7102_ $$15007$$2520$$aUniversidad de Zaragoza$$bDpto. Informát.Ingenie.Sistms.$$cÁrea Ingen.Sistemas y Automát.
000149147 773__ $$g87 (2019), 76-96$$pImage vis. comput.$$tIMAGE AND VISION COMPUTING$$x0262-8856
000149147 8564_ $$s1385708$$uhttps://zaguan.unizar.es/record/149147/files/texto_completo.pdf$$yPostprint
000149147 8564_ $$s2415371$$uhttps://zaguan.unizar.es/record/149147/files/texto_completo.jpg?subformat=icon$$xicon$$yPostprint
000149147 909CO $$ooai:zaguan.unizar.es:149147$$particulos$$pdriver
000149147 951__ $$a2025-10-17-14:11:31
000149147 980__ $$aARTICLE