000131604 001__ 131604
000131604 005__ 20240212142201.0
000131604 0247_ $$2doi$$a10.1109/ACCESS.2024.3357400
000131604 0248_ $$2sideral$$a136927
000131604 037__ $$aART-2024-136927
000131604 041__ $$aeng
000131604 100__ $$0(orcid)0000-0002-8949-2632$$aPérez-Yus, Alejandro$$uUniversidad de Zaragoza
000131604 245__ $$aRASPV: A robotics framework for augmented simulated prosthetic vision
000131604 260__ $$c2024
000131604 5060_ $$aAccess copy available to the general public$$fUnrestricted
000131604 5203_ $$aOne of the main challenges of visual prostheses is to augment the perceived information to improve the experience of their wearers. Given the limited access to implanted patients, new techniques are often evaluated via Simulated Prosthetic Vision (SPV) with sighted people. In this work, we introduce a novel SPV framework and implementation that presents major advantages over previous approaches. First, it is integrated into a robotics framework, which allows us to benefit from a wide range of methods and algorithms from the field (e.g. object recognition, obstacle avoidance, autonomous navigation, deep learning). Second, we go beyond traditional image processing by processing 3D point clouds from an RGB-D camera, which allows us to robustly detect the floor, obstacles and the structure of the scene. Third, it works either with a real camera or in a virtual environment, which gives us endless possibilities for immersive experimentation through a head-mounted display. Fourth, we incorporate a validated temporal phosphene model that replicates time effects in the generation of visual stimuli. Finally, we have proposed, developed and tested several applications within this framework, such as avoiding moving obstacles, providing a general understanding of the scene, detecting staircases, helping the subject navigate an unfamiliar space, and detecting objects and people. We provide experimental results in real and virtual environments. The code is publicly available at https://www.github.com/aperezyus/RASPV
000131604 536__ $$9info:eu-repo/grantAgreement/ES/MICINN-AEI/PID2021-125209OB-I00$$9info:eu-repo/grantAgreement/ES/UZ/JIUZ-2022-IAR-05
000131604 540__ $$9info:eu-repo/semantics/openAccess$$aby-nc-nd$$uhttp://creativecommons.org/licenses/by-nc-nd/3.0/es/
000131604 655_4 $$ainfo:eu-repo/semantics/article$$vinfo:eu-repo/semantics/publishedVersion
000131604 700__ $$aSantos-Villafranca, María$$uUniversidad de Zaragoza
000131604 700__ $$aTomás-Barba, Julia
000131604 700__ $$0(orcid)0000-0002-8479-1748$$aBermúdez-Cameo, Jesús$$uUniversidad de Zaragoza
000131604 700__ $$aMontano-Oliván, Lorenzo
000131604 700__ $$0(orcid)0000-0001-9347-5969$$aLópez-Nicolás, Gonzalo$$uUniversidad de Zaragoza
000131604 700__ $$0(orcid)0000-0001-5209-2267$$aGuerrero, José J.$$uUniversidad de Zaragoza
000131604 7102_ $$15007$$2520$$aUniversidad de Zaragoza$$bDpto. Informát.Ingenie.Sistms.$$cÁrea Ingen.Sistemas y Automát.
000131604 773__ $$g12 (2024), 15251-15267$$pIEEE Access$$tIEEE Access$$x2169-3536
000131604 8564_ $$s9754975$$uhttps://zaguan.unizar.es/record/131604/files/texto_completo.pdf$$yPublished version
000131604 8564_ $$s2673904$$uhttps://zaguan.unizar.es/record/131604/files/texto_completo.jpg?subformat=icon$$xicon$$yPublished version
000131604 909CO $$ooai:zaguan.unizar.es:131604$$particulos$$pdriver
000131604 951__ $$a2024-02-12-13:57:38
000131604 980__ $$aARTICLE