<?xml version="1.0" encoding="UTF-8"?>
<collection>
<dc:dc xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:invenio="http://invenio-software.org/elements/1.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/oai_dc/ http://www.openarchives.org/OAI/2.0/oai_dc.xsd"><dc:identifier>doi:10.1016/j.media.2024.103195</dc:identifier><dc:language>eng</dc:language><dc:creator>Rau, Anita</dc:creator><dc:creator>Bano, Sophia</dc:creator><dc:creator>Jin, Yueming</dc:creator><dc:creator>Azagra, Pablo</dc:creator><dc:creator>Morlana, Javier</dc:creator><dc:creator>Kader, Rawen</dc:creator><dc:creator>Sanderson, Edward</dc:creator><dc:creator>Matuszewski, Bogdan J.</dc:creator><dc:creator>Lee, Jae Young</dc:creator><dc:creator>Lee, Dong-Jae</dc:creator><dc:creator>Posner, Erez</dc:creator><dc:creator>Frank, Netanel</dc:creator><dc:creator>Elangovan, Varshini</dc:creator><dc:creator>Raviteja, Sista</dc:creator><dc:creator>Li, Zhengwen</dc:creator><dc:creator>Liu, Jiquan</dc:creator><dc:creator>Lalithkumar, Seenivasan</dc:creator><dc:creator>Islam, Mobarakol</dc:creator><dc:creator>Ren, Hongliang</dc:creator><dc:creator>Lovat, Laurence B.</dc:creator><dc:creator>Montiel, José M. M.</dc:creator><dc:creator>Stoyanov, Danail</dc:creator><dc:title>SimCol3D — 3D reconstruction during colonoscopy challenge</dc:title><dc:identifier>ART-2024-138764</dc:identifier><dc:description>Colorectal cancer is one of the most common cancers in the world. While colonoscopy is an effective screening technique, navigating an endoscope through the colon to detect polyps is challenging. A 3D map of the observed surfaces could enhance the identification of unscreened colon tissue and serve as a training platform. However, reconstructing the colon from video footage remains difficult. Learning-based approaches hold promise as robust alternatives, but necessitate extensive datasets. 
By establishing a benchmark dataset, the 2022 EndoVis sub-challenge SimCol3D aimed to facilitate data-driven depth and pose prediction during colonoscopy. The challenge was hosted as part of MICCAI 2022 in Singapore. Six teams from around the world, representing both academia and industry, participated in the three sub-challenges: synthetic depth prediction, synthetic pose prediction, and real pose prediction. This paper describes the challenge, the submitted methods, and their results. We show that depth prediction from synthetic colonoscopy images is robustly solvable, while pose estimation remains an open research question.</dc:description><dc:date>2024</dc:date><dc:source>http://zaguan.unizar.es/record/135746</dc:source><dc:doi>10.1016/j.media.2024.103195</dc:doi><dc:identifier>http://zaguan.unizar.es/record/135746</dc:identifier><dc:identifier>oai:zaguan.unizar.es:135746</dc:identifier><dc:relation>info:eu-repo/grantAgreement/EC/H2020/863146/EU/EndoMapper: Real-time mapping from endoscopic video/EndoMapper</dc:relation><dc:relation>This project has received funding from the European Union’s Horizon 2020 research and innovation program under grant agreement No H2020 863146-EndoMapper</dc:relation><dc:identifier.citation>MEDICAL IMAGE ANALYSIS 96 (2024), 103195 [16 pp.]</dc:identifier.citation><dc:rights>by</dc:rights><dc:rights>https://creativecommons.org/licenses/by/4.0/deed.es</dc:rights><dc:rights>info:eu-repo/semantics/openAccess</dc:rights></dc:dc>

</collection>