000132333 001__ 132333
000132333 005__ 20240731103414.0
000132333 0247_ $$2doi$$a10.1109/IROS55552.2023.10342253
000132333 0248_ $$2sideral$$a137286
000132333 037__ $$aART-2023-137286
000132333 041__ $$aeng
000132333 100__ $$aMorilla-Cabello, David$$uUniversidad de Zaragoza
000132333 245__ $$aRobust fusion for Bayesian semantic mapping
000132333 260__ $$c2023
000132333 5203_ $$aThe integration of semantic information in a map allows robots to better understand their environment and make high-level decisions. In the last few years, neural networks have shown enormous progress in their perception capabilities. However, when fusing multiple observations from a neural network in a semantic map, its inherent overconfidence on unknown data gives too much weight to outliers and decreases robustness. To mitigate this issue, we propose a novel robust fusion method to combine multiple Bayesian semantic predictions. Our method uses the uncertainty estimation provided by a Bayesian neural network to calibrate the way in which the measurements are fused. This is done by regularizing the observations to mitigate the problem of overconfident outlier predictions and by using the epistemic uncertainty to weigh their influence in the fusion, resulting in a different formulation of the probability distributions. We validate our robust fusion strategy by performing experiments on photo-realistic simulated environments and real scenes. In both cases, we use a network trained on different data to expose the model to varying data distributions. The results show that considering the model's uncertainty and regularizing the probability distribution of the observations yields better semantic segmentation performance and more robustness to outliers than other methods. Video - https://youtu.be/5xVGm7z9c-0
000132333 536__ $$9info:eu-repo/grantAgreement/ES/MCIU/FPU20-06563$$9info:eu-repo/grantAgreement/ES/MICINN-AEI/PID2021-125209OB-I00$$9info:eu-repo/grantAgreement/ES/MICINN/PID2021-125514NB-I00$$9info:eu-repo/grantAgreement/EUR/MICINN/TED2021-129410B-I00$$9info:eu-repo/grantAgreement/EUR/MICINN/TED2021-131150B-I00
000132333 540__ $$9info:eu-repo/semantics/openAccess$$aAll rights reserved$$uhttp://www.europeana.eu/rights/rr-f/
000132333 592__ $$a1.094$$b2023
000132333 593__ $$aComputer Science Applications$$c2023
000132333 593__ $$aSoftware$$c2023
000132333 593__ $$aControl and Systems Engineering$$c2023
000132333 593__ $$aComputer Vision and Pattern Recognition$$c2023
000132333 594__ $$a4.4$$b2023
000132333 655_4 $$ainfo:eu-repo/semantics/conferenceObject$$vinfo:eu-repo/semantics/acceptedVersion
000132333 700__ $$aMur-Labadia, Lorenzo$$uUniversidad de Zaragoza
000132333 700__ $$0(orcid)0000-0002-6741-844X$$aMartinez-Cantin, Ruben$$uUniversidad de Zaragoza
000132333 700__ $$0(orcid)0000-0002-5176-3767$$aMontijano, Eduardo$$uUniversidad de Zaragoza
000132333 7102_ $$15007$$2520$$aUniversidad de Zaragoza$$bDpto. Informát.Ingenie.Sistms.$$cÁrea Ingen.Sistemas y Automát.
000132333 773__ $$g2023 (2023), 76-81$$pProc. IEEE/RSJ Int. Conf. Intell. Rob. Syst.$$tProceedings of the ... IEEE/RSJ International Conference on Intelligent Robots and Systems$$x2153-0858
000132333 8564_ $$s2381846$$uhttps://zaguan.unizar.es/record/132333/files/texto_completo.pdf$$yPostprint
000132333 8564_ $$s3295198$$uhttps://zaguan.unizar.es/record/132333/files/texto_completo.jpg?subformat=icon$$xicon$$yPostprint
000132333 909CO $$ooai:zaguan.unizar.es:132333$$particulos$$pdriver
000132333 951__ $$a2024-07-31-10:05:51
000132333 980__ $$aARTICLE
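Note on the abstract (5203_): the paper describes fusing multiple Bayesian semantic predictions by regularizing each observation and weighting its influence by the network's epistemic uncertainty. Below is a minimal Python sketch of that general idea, not the authors' code or exact formulation; all function and variable names (regularize, fuse, epistemic_unc, eps) are hypothetical, and the specific regularizer and weighting scheme are assumptions for illustration.

    import numpy as np

    def regularize(p, eps=0.05):
        # One possible regularizer (assumption): blend the softmax prediction
        # with a uniform distribution to damp overconfident outlier observations.
        k = p.shape[-1]
        return (1.0 - eps) * p + eps / k

    def fuse(prior, obs, epistemic_unc):
        # Recursive Bayesian update of per-class probabilities; each observation's
        # log-likelihood contribution is weighted by its (low) epistemic uncertainty.
        w = 1.0 - np.clip(epistemic_unc, 0.0, 1.0)   # confident observations weigh more
        log_post = np.log(prior) + w * np.log(regularize(obs))
        post = np.exp(log_post - log_post.max())     # subtract max for numerical stability
        return post / post.sum()

    # Usage: fuse two predictions for one map cell (3 classes), starting from
    # an uninformative prior.
    belief = np.full(3, 1.0 / 3.0)
    for p, u in [(np.array([0.70, 0.20, 0.10]), 0.1),   # confident observation
                 (np.array([0.05, 0.90, 0.05]), 0.8)]:  # uncertain, overconfident outlier
        belief = fuse(belief, p, u)
    print(belief)  # the outlier's vote is largely discounted

The design point the sketch illustrates is that down-weighting high-epistemic-uncertainty observations and flattening each prediction keeps a single overconfident outlier from dominating the fused per-cell distribution, which is the robustness property the abstract claims.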