000135565 001__ 135565
000135565 005__ 20240605121015.0
000135565 0247_ $$2doi$$a10.1111/cgf.15037
000135565 0248_ $$2sideral$$a138724
000135565 037__ $$aART-2024-138724
000135565 041__ $$aeng
000135565 100__ $$0(orcid)0000-0002-2077-683X$$aGuerrero-Viu, Julia$$uUniversidad de Zaragoza
000135565 245__ $$aPredicting Perceived Gloss: Do Weak Labels Suffice?
000135565 260__ $$c2024
000135565 5060_ $$aAccess copy available to the general public$$fUnrestricted
000135565 5203_ $$aEstimating perceptual attributes of materials directly from images is a challenging task due to their complex, not fully‐understood interactions with external factors, such as geometry and lighting. Supervised deep learning models have recently been shown to outperform traditional approaches, but rely on large datasets of human‐annotated images for accurate perception predictions. Obtaining reliable annotations is a costly endeavor, aggravated by the limited ability of these models to generalise to different aspects of appearance. In this work, we show how a much smaller set of human annotations (“strong labels”) can be effectively augmented with automatically derived “weak labels” in the context of learning a low‐dimensional image‐computable gloss metric. We evaluate three alternative weak labels for predicting human gloss perception from limited annotated data. Incorporating weak labels enhances our gloss prediction beyond the current state of the art. Moreover, it enables a substantial reduction in human annotation costs without sacrificing accuracy, whether working with rendered images or real photographs.
000135565 536__ $$9info:eu-repo/grantAgreement/ES/DGA-CUS/702-2022$$9info:eu-repo/grantAgreement/EC/HORIZON EUROPE/101098225/EU/Seeing Stuff: Perceiving Materials and their Properties/STUFF$$9info:eu-repo/grantAgreement/EC/H2020/956585/EU/Predictive Rendering In Manufacture and Engineering/PRIME$$9This project has received funding from the European Union’s Horizon 2020 research and innovation program under grant agreement No H2020 956585-PRIME$$9info:eu-repo/grantAgreement/ES/MCIU/FPU20-02340$$9info:eu-repo/grantAgreement/ES/MICINN/PID2022-141766OB-I00
000135565 540__ $$9info:eu-repo/semantics/openAccess$$aby-nc$$uhttp://creativecommons.org/licenses/by-nc/3.0/es/
000135565 655_4 $$ainfo:eu-repo/semantics/article$$vinfo:eu-repo/semantics/publishedVersion
000135565 700__ $$0(orcid)0000-0002-5480-7462$$aSubias, J. Daniel$$uUniversidad de Zaragoza
000135565 700__ $$0(orcid)0000-0002-7796-3177$$aSerrano, Ana$$uUniversidad de Zaragoza
000135565 700__ $$aStorrs, Katherine R.
000135565 700__ $$aFleming, Roland W.
000135565 700__ $$0(orcid)0000-0003-0060-7278$$aMasia, Belen$$uUniversidad de Zaragoza
000135565 700__ $$0(orcid)0000-0002-7503-7022$$aGutierrez, Diego$$uUniversidad de Zaragoza
000135565 7102_ $$15007$$2570$$aUniversidad de Zaragoza$$bDpto. Informát.Ingenie.Sistms.$$cÁrea Lenguajes y Sistemas Inf.
000135565 773__ $$g43, 2 (2024), e15037 [13 pp.]$$pComput. graph. forum$$tComputer Graphics Forum$$x0167-7055
000135565 8564_ $$s2894923$$uhttps://zaguan.unizar.es/record/135565/files/texto_completo.pdf$$yVersión publicada
000135565 8564_ $$s2162813$$uhttps://zaguan.unizar.es/record/135565/files/texto_completo.jpg?subformat=icon$$xicon$$yVersión publicada
000135565 909CO $$ooai:zaguan.unizar.es:135565$$particulos$$pdriver
000135565 951__ $$a2024-06-05-10:50:38
000135565 980__ $$aARTICLE