000162691 001__ 162691
000162691 005__ 20251017144558.0
000162691 0247_ $$2doi$$a10.1007/s11548-025-03502-1
000162691 0248_ $$2sideral$$a145346
000162691 037__ $$aART-2025-145346
000162691 041__ $$aeng
000162691 100__ $$0(orcid)0000-0001-8191-6261$$aBarbed, O. León$$uUniversidad de Zaragoza
000162691 245__ $$aAutomated overview of complete endoscopies with unsupervised learned descriptors
000162691 260__ $$c2025
000162691 5060_ $$aAccess copy available to the general public$$fUnrestricted
000162691 5203_ $$aPurpose
We aim to automate the initial analysis of complete endoscopy videos, identifying the sparse relevant content. This facilitates understanding of long procedure recordings, reduces clinicians’ review time, and supports downstream tasks such as video summarization, event detection, and 3D reconstruction.

Methods
Our approach extracts endoscopic video frame representations with a learned embedding model. These descriptors are clustered to find visual patterns in the procedure, identifying key scene types (surgery, clear visibility frames, etc.) and enabling segmentation into informative and non-informative video parts.

Results
Evaluation on complete colonoscopy videos shows good performance in identifying surgery segments and different visibility conditions. The method produces structured overviews that separate useful segments from irrelevant ones. We illustrate its suitability and benefits as a preprocessing step for downstream tasks such as 3D reconstruction and video summarization.

Conclusion
Our approach enables automated endoscopy overview generation, helping clinicians focus on the relevant video content, such as good-visibility sections and surgical actions. The presented work facilitates faster review of recordings for clinicians and effective video preprocessing for downstream tasks.
000162691 536__ $$9info:eu-repo/grantAgreement/ES/DGA/T45-23R$$9info:eu-repo/grantAgreement/EC/H2020/863146/EU/EndoMapper: Real-time mapping from endoscopic video/EndoMapper$$9This project has received funding from the European Union’s Horizon 2020 research and innovation program under grant agreement No H2020 863146-EndoMapper$$9info:eu-repo/grantAgreement/ES/MICINN/PID2021-125514NB-I00
000162691 540__ $$9info:eu-repo/semantics/openAccess$$aby$$uhttps://creativecommons.org/licenses/by/4.0/deed.es
000162691 655_4 $$ainfo:eu-repo/semantics/article$$vinfo:eu-repo/semantics/publishedVersion
000162691 700__ $$0(orcid)0000-0002-3567-3294$$aAzagra, Pablo$$uUniversidad de Zaragoza
000162691 700__ $$aPlo, Juan
000162691 700__ $$0(orcid)0000-0002-7580-9037$$aMurillo, Ana C.$$uUniversidad de Zaragoza
000162691 7102_ $$15007$$2520$$aUniversidad de Zaragoza$$bDpto. Informát.Ingenie.Sistms.$$cÁrea Ingen.Sistemas y Automát.
000162691 7102_ $$15007$$2570$$aUniversidad de Zaragoza$$bDpto. Informát.Ingenie.Sistms.$$cÁrea Lenguajes y Sistemas Inf.
000162691 773__ $$g(2025), [8 p.]$$tInternational Journal of Computer Assisted Radiology and Surgery$$x1861-6429
000162691 8564_ $$s2010924$$uhttps://zaguan.unizar.es/record/162691/files/texto_completo.pdf$$yVersión publicada
000162691 8564_ $$s2336502$$uhttps://zaguan.unizar.es/record/162691/files/texto_completo.jpg?subformat=icon$$xicon$$yVersión publicada
000162691 909CO $$ooai:zaguan.unizar.es:162691$$particulos$$pdriver
000162691 951__ $$a2025-10-17-14:13:42
000162691 980__ $$aARTICLE