000132128 001__ 132128
000132128 005__ 20240731103339.0
000132128 0247_ $$2doi$$a10.1109/ICRA48891.2023.10160606
000132128 0248_ $$2sideral$$a137285
000132128 037__ $$aART-2023-137285
000132128 041__ $$aeng
000132128 100__ $$aMur-Labadia, Lorenzo$$uUniversidad de Zaragoza
000132128 245__ $$aBayesian deep learning for affordance segmentation in images
000132128 260__ $$c2023
000132128 5060_ $$aAccess copy available to the general public$$fUnrestricted
000132128 5203_ $$aAffordances are a fundamental concept in robotics, since they relate the actions available to an agent to its sensory-motor capabilities and the environment. We present a novel Bayesian deep network that detects affordances in images while simultaneously quantifying the spatial distribution of aleatoric and epistemic variance. We adapt the Mask-RCNN architecture to learn a probabilistic representation using Monte Carlo dropout. Our results outperform state-of-the-art deterministic networks. We attribute this improvement to a better probabilistic feature-space representation in the encoder and to the Bayesian variability induced during mask generation, which adapts better to object contours. We also introduce a new Probability-based Mask Quality measure that reveals the semantic and spatial differences in a probabilistic instance segmentation model. We modify the existing Probabilistic Detection Quality metric by comparing binary masks rather than predicted bounding boxes, achieving a finer-grained evaluation of the probabilistic segmentation. We find aleatoric variance in object contours due to camera noise, while epistemic variance appears in visually challenging pixels.
000132128 536__ $$9info:eu-repo/grantAgreement/ES/MICINN-AEI/PID2021-125209OB-I00$$9info:eu-repo/grantAgreement/EUR/MICINN/TED2021-129410B-I00
000132128 540__ $$9info:eu-repo/semantics/openAccess$$aAll rights reserved$$uhttp://www.europeana.eu/rights/rr-f/
000132128 592__ $$a1.62$$b2023
000132128 593__ $$aArtificial Intelligence$$c2023
000132128 593__ $$aSoftware$$c2023
000132128 593__ $$aElectrical and Electronic Engineering$$c2023
000132128 593__ $$aControl and Systems Engineering$$c2023
000132128 594__ $$a6.8$$b2023
000132128 655_4 $$ainfo:eu-repo/semantics/conferenceObject$$vinfo:eu-repo/semantics/acceptedVersion
000132128 700__ $$0(orcid)0000-0002-6741-844X$$aMartinez-Cantin, Rubén$$uUniversidad de Zaragoza
000132128 700__ $$0(orcid)0000-0001-5209-2267$$aGuerrero, José J.$$uUniversidad de Zaragoza
000132128 7102_ $$15007$$2520$$aUniversidad de Zaragoza$$bDpto. Informát.Ingenie.Sistms.$$cÁrea Ingen.Sistemas y Automát.
000132128 773__ $$g2023 (2023), 6981-6987$$pProc. - IEEE Int. Conf. Robot. Autom.$$tProceedings - IEEE International Conference on Robotics and Automation$$x1050-4729
000132128 8564_ $$s1425642$$uhttps://zaguan.unizar.es/record/132128/files/texto_completo.pdf$$yPostprint$$zinfo:eu-repo/semantics/openAccess
000132128 8564_ $$s3128617$$uhttps://zaguan.unizar.es/record/132128/files/texto_completo.jpg?subformat=icon$$xicon$$yPostprint$$zinfo:eu-repo/semantics/openAccess
000132128 909CO $$ooai:zaguan.unizar.es:132128$$particulos$$pdriver
000132128 951__ $$a2024-07-31-09:49:35
000132128 980__ $$aARTICLE