000128135 001__ 128135
000128135 005__ 20241125101145.0
000128135 0247_ $$2doi$$a10.1109/LRA.2023.3290512
000128135 0248_ $$2sideral$$a135271
000128135 037__ $$aART-2023-135271
000128135 041__ $$aeng
000128135 100__ $$aBavle, Hriday
000128135 245__ $$aS-Graphs+: Real-Time Localization and Mapping Leveraging Hierarchical Representations
000128135 260__ $$c2023
000128135 5060_ $$aAccess copy available to the general public$$fUnrestricted
000128135 5203_ $$aIn this letter, we present an evolved version of Situational Graphs, which jointly models in a single optimizable factor graph (1) a pose graph, as a set of robot keyframes comprising associated measurements and robot poses, and (2) a 3D scene graph, as a high-level representation of the environment that encodes its different geometric elements with semantic attributes and the relational information between them. Specifically, our S-Graphs+ is a novel four-layered factor graph that includes: (1) a keyframes layer with robot pose estimates, (2) a walls layer representing wall surfaces, (3) a rooms layer encompassing sets of wall planes, and (4) a floors layer gathering the rooms within a given floor level. The above graph is optimized in real-time to obtain a robust and accurate estimate of the robot's pose and its map, simultaneously constructing and leveraging high-level information of the environment. To extract this high-level information, we present novel room and floor segmentation algorithms utilizing the mapped wall planes and free-space clusters. We tested S-Graphs+ on multiple datasets, including simulated and real data of indoor environments from varying construction sites, and on a real public dataset of several indoor office areas. On average over our datasets, S-Graphs+ outperforms the accuracy of the second-best method by a margin of 10.67%, while extending the robot's situational awareness with a richer scene model. Moreover, we make the software available as a Docker file.
000128135 536__ $$9info:eu-repo/grantAgreement/ES/DGA/T45-17R$$9info:eu-repo/grantAgreement/ES/MICINN/PID2021-127685NB-I00
000128135 540__ $$9info:eu-repo/semantics/openAccess$$aby$$uhttp://creativecommons.org/licenses/by/3.0/es/
000128135 590__ $$a4.6$$b2023
000128135 592__ $$a2.119$$b2023
000128135 591__ $$aROBOTICS$$b12 / 46 = 0.261$$c2023$$dQ2$$eT1
000128135 593__ $$aArtificial Intelligence$$c2023$$dQ1
000128135 593__ $$aBiomedical Engineering$$c2023$$dQ1
000128135 593__ $$aComputer Science Applications$$c2023$$dQ1
000128135 593__ $$aMechanical Engineering$$c2023$$dQ1
000128135 593__ $$aControl and Optimization$$c2023$$dQ1
000128135 593__ $$aControl and Systems Engineering$$c2023$$dQ1
000128135 593__ $$aHuman-Computer Interaction$$c2023$$dQ1
000128135 593__ $$aComputer Vision and Pattern Recognition$$c2023$$dQ1
000128135 594__ $$a9.6$$b2023
000128135 655_4 $$ainfo:eu-repo/semantics/article$$vinfo:eu-repo/semantics/publishedVersion
000128135 700__ $$aSanchez-Lopez, Jose Luis
000128135 700__ $$aShaheer, Muhammad
000128135 700__ $$0(orcid)0000-0003-1368-1151$$aCivera, Javier$$uUniversidad de Zaragoza
000128135 700__ $$aVoos, Holger
000128135 7102_ $$15007$$2520$$aUniversidad de Zaragoza$$bDpto. Informát.Ingenie.Sistms.$$cÁrea Ingen.Sistemas y Automát.
000128135 773__ $$g8, 8 (2023), 4927-4934$$pIEEE Robot. Autom. Lett.$$tIEEE Robotics and Automation Letters$$x2377-3766
000128135 8564_ $$s2457265$$uhttps://zaguan.unizar.es/record/128135/files/texto_completo.pdf$$yPublished version
000128135 8564_ $$s3718939$$uhttps://zaguan.unizar.es/record/128135/files/texto_completo.jpg?subformat=icon$$xicon$$yPublished version
000128135 909CO $$ooai:zaguan.unizar.es:128135$$particulos$$pdriver
000128135 951__ $$a2024-11-22-12:04:13
000128135 980__ $$aARTICLE