000130067 001__ 130067
000130067 005__ 20240122171020.0
000130067 0247_ $$2doi$$a10.1111/cgf.13813
000130067 0248_ $$2sideral$$a115903
000130067 037__ $$aART-2019-115903
000130067 041__ $$aeng
000130067 100__ $$aLiang, Y.
000130067 245__ $$aGeneric interactive pixel-level image editing
000130067 260__ $$c2019
000130067 5060_ $$aAccess copy available to the general public$$fUnrestricted
000130067 5203_ $$aSeveral image editing methods have been proposed over the past decades, achieving impressive results. The most sophisticated of them, however, require additional information per pixel. For instance, dehazing requires a transmittance value per pixel, and depth-of-field blurring requires a depth or disparity value per pixel. This additional per-pixel value is obtained either through elaborate heuristics or through additional control over the capture hardware, very often tailored to the specific editing application. In contrast, we propose a generic editing paradigm that can serve as the basis for several different applications. This paradigm generates both the needed per-pixel values and the resulting edit at interactive rates, with minimal user input that can be iteratively refined. Our key insight for obtaining per-pixel values at such speed is to cluster pixels into superpixels; however, instead of a constant value per superpixel (which causes accuracy problems), we fit a mathematical expression for the pixel values within each superpixel: in our case, an order-two multinomial per superpixel. This leads to a linear least-squares system, effectively enabling accurate per-pixel values at interactive speeds. We illustrate this approach in three applications: depth-of-field blurring (from depth values), dehazing (from transmittance values) and tone mapping (from local brightness and contrast values), and our approach proves both interactive and accurate in all three. Our technique also compares favorably against previous work on a common dataset.
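000130067 5203_ $$bThe per-superpixel fit described in the abstract lends itself to a compact sketch. The following is a minimal illustration, not the authors' implementation: it assumes the order-two multinomial is defined over pixel coordinates (x, y) and that sparse user scribbles supply the known per-pixel samples; the paper assembles a single linear least-squares system, while this sketch fits each superpixel independently for brevity. The names quadratic_basis and fit_superpixel_values are hypothetical.

    # Minimal sketch (not the authors' code): per-superpixel order-two
    # multinomial fit of sparse per-pixel values via linear least squares.
    # Assumption: the multinomial is over pixel coordinates (x, y); the
    # paper may use a different domain, so treat this as illustrative.
    import numpy as np

    def quadratic_basis(x, y):
        """Order-two multinomial basis in (x, y): six terms."""
        return np.stack([np.ones_like(x), x, y, x * x, x * y, y * y], axis=-1)

    def fit_superpixel_values(labels, sample_xy, sample_vals, sample_labels):
        """Fit one quadratic per superpixel from sparse user samples.

        labels        : (H, W) int array of superpixel ids
        sample_xy     : (N, 2) float coordinates of user-provided samples
        sample_vals   : (N,) values at those samples (e.g. depth, transmittance)
        sample_labels : (N,) superpixel id of each sample
        Returns a dense (H, W) per-pixel value map.
        """
        out = np.zeros(labels.shape, dtype=np.float64)
        for sp in np.unique(labels):
            mask = sample_labels == sp
            if mask.sum() < 6:
                # Too few samples to constrain 6 coefficients:
                # fall back to the mean of whatever samples exist.
                out[labels == sp] = sample_vals[mask].mean() if mask.any() else 0.0
                continue
            # Linear least-squares fit of the quadratic coefficients.
            A = quadratic_basis(sample_xy[mask, 0], sample_xy[mask, 1])
            coeffs, *_ = np.linalg.lstsq(A, sample_vals[mask], rcond=None)
            # Evaluate the fitted multinomial at every pixel of the superpixel.
            ys, xs = np.nonzero(labels == sp)
            out[ys, xs] = quadratic_basis(xs.astype(float), ys.astype(float)) @ coeffs
        return out

Because each fit is linear in the six coefficients, the whole map can be recomputed at interactive rates as the user refines the scribbles, which is the property the abstract emphasizes.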
000130067 536__ $$9info:eu-repo/grantAgreement/EC/H2020/682080/EU/Intuitive editing of visual appearance from real-world datasets/CHAMELEON$$9This project has received funding from the European Union’s Horizon 2020 research and innovation program under grant agreement No H2020 682080-CHAMELEON$$9info:eu-repo/grantAgreement/ES/MINECO/TIN2016-78753-P
000130067 540__ $$9info:eu-repo/semantics/openAccess$$aAll rights reserved$$uhttp://www.europeana.eu/rights/rr-f/
000130067 590__ $$a2.116$$b2019
000130067 591__ $$aCOMPUTER SCIENCE, SOFTWARE ENGINEERING$$b38 / 107 = 0.355$$c2019$$dQ2$$eT2
000130067 592__ $$a1.246$$b2019
000130067 593__ $$aComputer Networks and Communications$$c2019$$dQ1
000130067 593__ $$aComputer Graphics and Computer-Aided Design$$c2019$$dQ1
000130067 655_4 $$ainfo:eu-repo/semantics/article$$vinfo:eu-repo/semantics/acceptedVersion
000130067 700__ $$aGan, Y.
000130067 700__ $$aChen, M.
000130067 700__ $$0(orcid)0000-0002-7503-7022$$aGutierrez, D.$$uUniversidad de Zaragoza
000130067 700__ $$0(orcid)0000-0002-8160-7159$$aMuñoz, A.$$uUniversidad de Zaragoza
000130067 7102_ $$15007$$2570$$aUniversidad de Zaragoza$$bDpto. Informát.Ingenie.Sistms.$$cÁrea Lenguajes y Sistemas Inf.
000130067 773__ $$g38, 7 (2019), 23-34$$pComput. graph. forum$$tComputer Graphics Forum$$x0167-7055
000130067 8564_ $$s928039$$uhttps://zaguan.unizar.es/record/130067/files/texto_completo.pdf$$yPostprint
000130067 8564_ $$s1368435$$uhttps://zaguan.unizar.es/record/130067/files/texto_completo.jpg?subformat=icon$$xicon$$yPostprint
000130067 909CO $$ooai:zaguan.unizar.es:130067$$particulos$$pdriver
000130067 951__ $$a2024-01-22-15:24:45
000130067 980__ $$aARTICLE