<?xml version="1.0" encoding="UTF-8"?>
<collection>
<dc:dc xmlns:dc="http://purl.org/dc/elements/1.1/"
       xmlns:invenio="http://invenio-software.org/elements/1.0"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/oai_dc/ http://www.openarchives.org/OAI/2.0/oai_dc.xsd">
  <dc:identifier>doi:10.1111/cgf.14446</dc:identifier>
  <dc:language>eng</dc:language>
  <dc:creator>Delanoy, J.</dc:creator>
  <dc:creator>Lagunas, M.</dc:creator>
  <dc:creator>Condor, J.</dc:creator>
  <dc:creator>Gutierrez, D.</dc:creator>
  <dc:creator>Masia, B.</dc:creator>
  <dc:title>A generative framework for image-based editing of material appearance using perceptual attributes</dc:title>
  <dc:identifier>ART-2022-131513</dc:identifier>
  <dc:description>Single-image appearance editing is a challenging task, traditionally requiring the estimation of additional scene properties such as geometry or illumination. Moreover, the exact interaction of light, shape and material reflectance that elicits a given perceptual impression is still not well understood. We present an image-based editing method that makes it possible to modify the material appearance of an object by increasing or decreasing high-level perceptual attributes, using a single image as input. Our framework relies on a two-step generative network, where the first step drives the change in appearance and the second produces an image with high-frequency details. For training, we augment an existing material appearance dataset with perceptual judgements of high-level attributes, collected through crowd-sourced experiments, and build upon training strategies that circumvent the cumbersome need for original-edited image pairs. We demonstrate the editing capabilities of our framework on a variety of inputs, both synthetic and real, using two common perceptual attributes (Glossy and Metallic), and validate the perception of appearance in our edited images through a user study.</dc:description>
  <dc:date>2022</dc:date>
  <dc:source>http://zaguan.unizar.es/record/126906</dc:source>
  <dc:doi>10.1111/cgf.14446</dc:doi>
  <dc:identifier>http://zaguan.unizar.es/record/126906</dc:identifier>
  <dc:identifier>oai:zaguan.unizar.es:126906</dc:identifier>
  <dc:relation>info:eu-repo/grantAgreement/EC/H2020/682080/EU/Intuitive editing of visual appearance from real-world datasets/CHAMELEON</dc:relation>
  <dc:relation>This project has received funding from the European Union’s Horizon 2020 research and innovation program under grant agreement No H2020 682080-CHAMELEON</dc:relation>
  <dc:relation>info:eu-repo/grantAgreement/EC/H2020/765121/EU/DyViTo: Dynamics in Vision and Touch - the look and feel of stuff/DyViTo</dc:relation>
  <dc:relation>This project has received funding from the European Union’s Horizon 2020 research and innovation program under grant agreement No H2020 765121-DyViTo</dc:relation>
  <dc:relation>info:eu-repo/grantAgreement/EC/H2020/956585/EU/Predictive Rendering In Manufacture and Engineering/PRIME</dc:relation>
  <dc:relation>This project has received funding from the European Union’s Horizon 2020 research and innovation program under grant agreement No H2020 956585-PRIME</dc:relation>
  <dc:identifier.citation>Computer Graphics Forum 41, 1 (2022), 453-464</dc:identifier.citation>
  <dc:rights>All rights reserved</dc:rights>
  <dc:rights>http://www.europeana.eu/rights/rr-f/</dc:rights>
  <dc:rights>info:eu-repo/semantics/openAccess</dc:rights>
</dc:dc>
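<!--
  A minimal sketch of how this record could be read with Python's standard
  library, kept here as a comment so the file remains well-formed XML. The
  filename "record.xml" and the grant_info() helper are illustrative
  assumptions; the element names and namespace follow the record above. Note
  that <dc:doi> and <dc:identifier.citation> are non-standard extensions of
  simple Dublin Core, so a strict oai_dc consumer may ignore them.

      import xml.etree.ElementTree as ET

      DC = "http://purl.org/dc/elements/1.1/"

      tree = ET.parse("record.xml")                  # placeholder filename
      record = tree.getroot().find(f"{{{DC}}}dc")    # the <dc:dc> record

      title = record.findtext(f"{{{DC}}}title")
      creators = [c.text for c in record.findall(f"{{{DC}}}creator")]

      # Several <dc:identifier> values coexist (repository id, DOI, URL,
      # OAI id); the DOI is the value prefixed with "doi:".
      doi = next(i.text for i in record.findall(f"{{{DC}}}identifier")
                 if i.text.startswith("doi:"))

      def grant_info(relation):
          # Split an OpenAIRE-style relation such as
          # info:eu-repo/grantAgreement/EC/H2020/682080/EU/<title>/<acronym>
          # into its fields. Illustrative helper, not part of any library;
          # it assumes the project title itself contains no "/".
          _, _, funder, program, number, jurisdiction, name, acronym = \
              relation.split("/", 7)
          return funder, program, number, name, acronym

      grants = [grant_info(r.text)
                for r in record.findall(f"{{{DC}}}relation")
                if r.text.startswith("info:eu-repo/grantAgreement")]

      print(title, creators, doi, grants, sep="\n")
-->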

</collection>