<?xml version="1.0" encoding="UTF-8"?>
<collection>
<dc:dc xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:invenio="http://invenio-software.org/elements/1.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/oai_dc/ http://www.openarchives.org/OAI/2.0/oai_dc.xsd"><dc:identifier>doi:10.1364/OL.465316</dc:identifier><dc:language>eng</dc:language><dc:creator>Royo, Diego</dc:creator><dc:creator>Huang, Zesheng</dc:creator><dc:creator>Liang, Yun</dc:creator><dc:creator>Song, Boyan</dc:creator><dc:creator>Muñoz, Adolfo</dc:creator><dc:creator>Gutierrez, Diego</dc:creator><dc:creator>Marco, Julio</dc:creator><dc:title>Structure-aware parametric representations for time-resolved light transport</dc:title><dc:identifier>ART-2022-131442</dc:identifier><dc:description>Time-resolved illumination provides rich spatiotemporal information for applications such as accurate depth sensing or hidden geometry reconstruction, becoming a useful asset for prototyping and as input for data-driven approaches. However, time-resolved illumination measurements are high-dimensional and have a low signal-to-noise ratio, hampering their applicability in real scenarios. We propose a novel method to compactly represent time-resolved illumination using mixtures of exponentially modified Gaussians that are robust to noise and preserve structural information. Our method yields representations two orders of magnitude smaller than discretized data, providing consistent results in applications such as hidden-scene reconstruction and depth estimation, and quantitative improvements over previous approaches.</dc:description><dc:date>2022</dc:date><dc:source>http://zaguan.unizar.es/record/127751</dc:source><dc:doi>10.1364/OL.465316</dc:doi><dc:identifier>http://zaguan.unizar.es/record/127751</dc:identifier><dc:identifier>oai:zaguan.unizar.es:127751</dc:identifier><dc:relation>info:eu-repo/grantAgreement/ES/AEI/PID2019-105004GB-I00</dc:relation><dc:relation>info:eu-repo/grantAgreement/ES/DGA/LMP30-21</dc:relation><dc:relation>info:eu-repo/grantAgreement/EC/H2020/682080/EU/Intuitive editing of visual appearance from real-world datasets/CHAMELEON</dc:relation><dc:relation>This project has received funding from the European Union’s Horizon 2020 research and innovation program under grant agreement No H2020 682080-CHAMELEON</dc:relation><dc:identifier.citation>Optics Letters 47, 19 (2022), 5212-5215</dc:identifier.citation><dc:rights>All rights reserved</dc:rights><dc:rights>http://www.europeana.eu/rights/rr-f/</dc:rights><dc:rights>info:eu-repo/semantics/openAccess</dc:rights></dc:dc>

</collection>