000126906 001__ 126906
000126906 005__ 20240319081014.0
000126906 0247_ $$2doi$$a10.1111/cgf.14446
000126906 0248_ $$2sideral$$a131513
000126906 037__ $$aART-2022-131513
000126906 041__ $$aeng
000126906 100__ $$aDelanoy, J.
000126906 245__ $$aA generative framework for image-based editing of material appearance using perceptual attributes
000126906 260__ $$c2022
000126906 5060_ $$aAccess copy available to the general public$$fUnrestricted
000126906 5203_ $$aSingle-image appearance editing is a challenging task, traditionally requiring the estimation of additional scene properties such as geometry or illumination. Moreover, the exact interaction of light, shape and material reflectance that elicits a given perceptual impression is still not well understood. We present an image-based editing method that allows modifying the material appearance of an object by increasing or decreasing high-level perceptual attributes, using a single image as input. Our framework relies on a two-step generative network, where the first step drives the change in appearance and the second produces an image with high-frequency details. For training, we augment an existing material appearance dataset with perceptual judgements of high-level attributes, collected through crowd-sourced experiments, and build upon training strategies that circumvent the cumbersome need for original-edited image pairs. We demonstrate the editing capabilities of our framework on a variety of inputs, both synthetic and real, using two common perceptual attributes (Glossy and Metallic), and validate the perception of appearance in our edited images through a user study.
000126906 536__ $$9info:eu-repo/grantAgreement/EC/H2020/682080/EU/Intuitive editing of visual appearance from real-world datasets/CHAMELEON$$9This project has received funding from the European Union’s Horizon 2020 research and innovation program under grant agreement No H2020 682080-CHAMELEON$$9info:eu-repo/grantAgreement/EC/H2020/765121/EU/DyViTo: Dynamics in Vision and Touch - the look and feel of stuff/DyViTo$$9This project has received funding from the European Union’s Horizon 2020 research and innovation program under grant agreement No H2020 765121-DyViTo$$9info:eu-repo/grantAgreement/EC/H2020/956585/EU/Predictive Rendering In Manufacture and Engineering/PRIME$$9This project has received funding from the European Union’s Horizon 2020 research and innovation program under grant agreement No H2020 956585-PRIME
000126906 540__ $$9info:eu-repo/semantics/openAccess$$aAll rights reserved$$uhttp://www.europeana.eu/rights/rr-f/
000126906 590__ $$a2.5$$b2022
000126906 591__ $$aCOMPUTER SCIENCE, SOFTWARE ENGINEERING$$b52 / 108 = 0.481$$c2022$$dQ2$$eT2
000126906 592__ $$a0.95$$b2022
000126906 593__ $$aComputer Networks and Communications$$c2022$$dQ1
000126906 593__ $$aComputer Graphics and Computer-Aided Design$$c2022$$dQ1
000126906 594__ $$a5.3$$b2022
000126906 655_4 $$ainfo:eu-repo/semantics/article$$vinfo:eu-repo/semantics/acceptedVersion
000126906 700__ $$aLagunas, M.
000126906 700__ $$aCondor, J.
000126906 700__ $$0(orcid)0000-0002-7503-7022$$aGutierrez, D.$$uUniversidad de Zaragoza
000126906 700__ $$0(orcid)0000-0003-0060-7278$$aMasia, B.$$uUniversidad de Zaragoza
000126906 7102_ $$15007$$2570$$aUniversidad de Zaragoza$$bDpto. Informát.Ingenie.Sistms.$$cÁrea Lenguajes y Sistemas Inf.
000126906 773__ $$g41, 1 (2022), 453-464$$pComput. graph. forum$$tComputer Graphics Forum$$x0167-7055
000126906 8564_ $$s2240857$$uhttps://zaguan.unizar.es/record/126906/files/texto_completo.pdf$$yPostprint
000126906 8564_ $$s2174449$$uhttps://zaguan.unizar.es/record/126906/files/texto_completo.jpg?subformat=icon$$xicon$$yPostprint
000126906 909CO $$ooai:zaguan.unizar.es:126906$$particulos$$pdriver
000126906 951__ $$a2024-03-18-15:28:44
000126906 980__ $$aARTICLE