Abstract: Single-image appearance editing is a challenging task, traditionally requiring the estimation of additional scene properties such as geometry or illumination. Moreover, the exact interaction of light, shape and material reflectance that elicits a given perceptual impression is still not well understood. We present an image-based editing method that makes it possible to modify the material appearance of an object by increasing or decreasing high-level perceptual attributes, using a single image as input. Our framework relies on a two-step generative network, where the first step drives the change in appearance and the second produces an image with high-frequency details. For training, we augment an existing material appearance dataset with perceptual judgements of high-level attributes, collected through crowd-sourced experiments, and build upon training strategies that circumvent the cumbersome need for original-edited image pairs. We demonstrate the editing capabilities of our framework on a variety of inputs, both synthetic and real, using two common perceptual attributes (Glossy and Metallic), and validate the perception of appearance in our edited images through a user study.
Language: English
DOI: 10.1111/cgf.14446
Year: 2022
Published in: Computer Graphics Forum 41, 1 (2022), 453-464
ISSN: 0167-7055
JCR impact factor: 2.5 (2022)
JCR category: COMPUTER SCIENCE, SOFTWARE ENGINEERING, rank 52 / 108 = 0.481 (2022) - Q2 - T2
CiteScore impact factor: 5.3 - Computer Science (Q2)