000132335 001__ 132335
000132335 005__ 20240319081032.0
000132335 0247_ $$2doi$$a10.1109/ICCV51070.2023.01945
000132335 0248_ $$2sideral$$a137284
000132335 037__ $$aART-2024-137284
000132335 041__ $$aeng
000132335 100__ $$aRodríguez-Puigvert, Javier$$uUniversidad de Zaragoza
000132335 245__ $$aLightDepth: Single-view depth self-supervision from illumination decline
000132335 260__ $$c2024
000132335 5060_ $$aAccess copy available to the general public$$fUnrestricted
000132335 5203_ $$aSingle-view depth estimation can be remarkably effective if there is enough ground-truth depth data for supervised training. However, there are scenarios, especially in medicine in the case of endoscopies, where such data cannot be obtained. In such cases, multi-view self-supervision and synthetic-to-real transfer serve as alternative approaches, although with a considerable performance reduction in comparison to the supervised case. Instead, we propose a single-view self-supervised method that achieves a performance similar to the supervised case. In some medical devices, such as endoscopes, the camera and light sources are co-located at a small distance from the target surfaces. Thus, we can exploit that, for any given albedo and surface orientation, pixel brightness is inversely proportional to the square of the distance to the surface, providing a strong single-view self-supervisory signal. In our experiments, our self-supervised models deliver accuracies comparable to those of fully supervised ones, while being applicable without depth ground-truth data.
000132335 536__ $$9info:eu-repo/grantAgreement/ES/DGA/T45-23R$$9info:eu-repo/grantAgreement/EC/H2020/863146/EU/EndoMapper: Real-time mapping from endoscopic video/EndoMapper$$9This project has received funding from the European Union’s Horizon 2020 research and innovation program under grant agreement No H2020 863146-EndoMapper$$9info:eu-repo/grantAgreement/ES/MCIU-AEI-FEDER/PGC2018-096367-B-I00$$9info:eu-repo/grantAgreement/ES/MCIU/FPU20-0678$$9info:eu-repo/grantAgreement/ES/MICINN-AEI/PID2021-125209OB-I00$$9info:eu-repo/grantAgreement/ES/MICINN/PID2021-127685NB-I00$$9info:eu-repo/grantAgreement/EUR/MICINN/TED2021-131150B-I00
000132335 540__ $$9info:eu-repo/semantics/openAccess$$aAll rights reserved$$uhttp://www.europeana.eu/rights/rr-f/
000132335 655_4 $$ainfo:eu-repo/semantics/conferenceObject$$vinfo:eu-repo/semantics/acceptedVersion
000132335 700__ $$0(orcid)0000-0002-6837-934X$$aBatlle, Víctor M.$$uUniversidad de Zaragoza
000132335 700__ $$0(orcid)0000-0002-3627-7306$$aMartínez Montiel, J.M.$$uUniversidad de Zaragoza
000132335 700__ $$0(orcid)0000-0002-6741-844X$$aMartinez-Cantin, Ruben$$uUniversidad de Zaragoza
000132335 700__ $$aFua, Pascal
000132335 700__ $$0(orcid)0000-0002-4518-5876$$aTardós, Juan D.$$uUniversidad de Zaragoza
000132335 700__ $$0(orcid)0000-0003-1368-1151$$aCivera, Javier$$uUniversidad de Zaragoza
000132335 7102_ $$15007$$2520$$aUniversidad de Zaragoza$$bDpto. Informát.Ingenie.Sistms.$$cÁrea Ingen.Sistemas y Automát.
000132335 773__ $$g2023 (2024), 21216-21226$$pProceedings (IEEE International Conference on Computer Vision)$$tProceedings (IEEE International Conference on Computer Vision)$$x1550-5499
000132335 8564_ $$s2029649$$uhttps://zaguan.unizar.es/record/132335/files/texto_completo.pdf$$yPostprint$$zinfo:eu-repo/semantics/openAccess
000132335 8564_ $$s2509136$$uhttps://zaguan.unizar.es/record/132335/files/texto_completo.jpg?subformat=icon$$xicon$$yPostprint$$zinfo:eu-repo/semantics/openAccess
000132335 909CO $$ooai:zaguan.unizar.es:132335$$particulos$$pdriver
000132335 951__ $$a2024-03-18-17:10:55
000132335 980__ $$aARTICLE