000146018 001__ 146018
000146018 005__ 20241122150934.0
000146018 037__ $$aTAZ-TFM-2023-343
000146018 041__ $$aeng
000146018 1001_ $$aGarcía Hernández, Alberto
000146018 24200 $$aMulti-modal Place Recognition in Aliased and Low-Texture Environments.
000146018 24500 $$aReconocimiento multimodal de lugares en entornos sin textura.
000146018 260__ $$aZaragoza$$bUniversidad de Zaragoza$$c2023
000146018 506__ $$aby-nc-sa$$bCreative Commons$$c3.0$$uhttp://creativecommons.org/licenses/by-nc-sa/3.0/
000146018 520__ $$aIn planetary environments with extreme visual aliasing, traditional place recognition systems for robots encounter difficulties in unstructured and aliased environments. Effective place recognition is essential for robust localization and mapping, which, in turn, significantly impacts the performance of Simultaneous Localization and Mapping (SLAM) systems. This research aims to enhance existing place recognition systems by utilizing both LiDAR and visual information, improving performance in extreme environments. The use of LiDAR is crucial, as it provides valuable geometric data that complements visual data, resulting in more expressive and robust 3D grounded global features. We evaluated our methods using the Mt. Etna dataset and a synthetic dataset generated with the OAISYS tool. Our comprehensive review of state-of-the-art place recognition systems led to the development of a novel UMF (Unifying Local and Global Multimodal Features with Transformers) model, specifically designed for place recognition in environments with extreme aliasing. The UMF model integrates elements from the most advanced methods, enhancing performance in challenging environments by capturing intricate relationships between local and global context in both LiDAR and visual data. Two variants of the UMF model were explored, offering alternative ways of processing and utilizing fine local features. Our UMF model outperforms other state-of-the-art methods in place recognition tasks, demonstrating the project’s success. The improved place recognition capabilities offered by the UMF model can contribute to more accurate and robust SLAM systems, enabling robots to better navigate and explore unstructured and aliased environments. This research highlights the importance of multi-modal fusion, particularly the integration of LiDAR and visual data, in addressing the challenges of place recognition in aliased and low-texture environments. It also opens an exciting line of research focus in unified fusion multimodal approaches for robotics, computer vision, and machine learning applications, with a direct impact on SLAM and other related fields.
000146018 521__ $$aMáster Universitario en Robótica, Gráficos y Visión por Computador
000146018 540__ $$aDerechos regulados por licencia Creative Commons
000146018 700__ $$aStrobl, Klaus H.$$edir.
000146018 700__ $$aGiubilato, Riccardo$$edir.
000146018 7102_ $$aUniversidad de Zaragoza$$bInformática e Ingeniería de Sistemas$$cIngeniería de Sistemas y Automática
000146018 7202_ $$aCivera Sancho, Javier$$eponente
000146018 8560_ $$f741363@unizar.es
000146018 8564_ $$s114306$$uhttps://zaguan.unizar.es/record/146018/files/TAZ-TFM-2023-343_ANE.pdf$$yAnexos (eng)
000146018 8564_ $$s15451373$$uhttps://zaguan.unizar.es/record/146018/files/TAZ-TFM-2023-343.pdf$$yMemoria (eng)
000146018 909CO $$ooai:zaguan.unizar.es:146018$$pdriver$$ptrabajos-fin-master
000146018 950__ $$a
000146018 951__ $$adeposita:2024-11-22
000146018 980__ $$aTAZ$$bTFM$$cEINA
000146018 999__ $$a20230607234415.CREATION_DATE