<?xml version="1.0" encoding="UTF-8"?>
<collection>
<dc:dc xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:invenio="http://invenio-software.org/elements/1.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/oai_dc/ http://www.openarchives.org/OAI/2.0/oai_dc.xsd"><dc:identifier>doi:10.1007/s11548-025-03502-1</dc:identifier><dc:language>eng</dc:language><dc:creator>Barbed, O. León</dc:creator><dc:creator>Azagra, Pablo</dc:creator><dc:creator>Plo, Juan</dc:creator><dc:creator>Murillo, Ana C.</dc:creator><dc:title>Automated overview of complete endoscopies with unsupervised learned descriptors</dc:title><dc:identifier>ART-2025-145346</dc:identifier><dc:description>Purpose
We aim to automate the initial analysis of complete endoscopy videos, identifying the sparse relevant content. This facilitates understanding of long procedure recordings, reduces clinicians’ review time, and supports downstream tasks such as video summarization, event detection, and 3D reconstruction.

Methods
Our approach extracts endoscopic video frame representations with a learned embedding model. These descriptors are clustered to uncover visual patterns across the procedure, identifying key scene types (surgery, clear-visibility frames, etc.) and enabling segmentation of the video into informative and non-informative parts.

Results
Evaluation on complete colonoscopy videos shows good performance in identifying surgery segments and different visibility conditions. The method produces structured overviews that separate useful segments from irrelevant ones. We illustrate its suitability and benefits as preprocessing for downstream tasks such as 3D reconstruction and video summarization.

Conclusion
Our approach enables automated endoscopy overview generation, helping clinicians focus on relevant video content such as good-visibility sections and surgical actions. The presented work enables faster review of recordings for clinicians and effective video preprocessing for downstream tasks.</dc:description><dc:date>2025</dc:date><dc:source>http://zaguan.unizar.es/record/162691</dc:source><dc:doi>10.1007/s11548-025-03502-1</dc:doi><dc:identifier>http://zaguan.unizar.es/record/162691</dc:identifier><dc:identifier>oai:zaguan.unizar.es:162691</dc:identifier><dc:relation>info:eu-repo/grantAgreement/ES/DGA/T45-23R</dc:relation><dc:relation>info:eu-repo/grantAgreement/EC/H2020/863146/EU/EndoMapper: Real-time mapping from endoscopic video/EndoMapper</dc:relation><dc:relation>This project has received funding from the European Union’s Horizon 2020 research and innovation program under grant agreement No H2020 863146-EndoMapper</dc:relation><dc:relation>info:eu-repo/grantAgreement/ES/MICINN/PID2021-125514NB-I00</dc:relation><dc:identifier.citation>International Journal of Computer Assisted Radiology and Surgery (2025), [8 p.]</dc:identifier.citation><dc:rights>by</dc:rights><dc:rights>https://creativecommons.org/licenses/by/4.0/deed.es</dc:rights><dc:rights>info:eu-repo/semantics/openAccess</dc:rights></dc:dc>

</collection>