<?xml version="1.0" encoding="UTF-8"?>
<collection xmlns="http://www.loc.gov/MARC21/slim">
<record>
  <controlfield tag="001">100710</controlfield>
  <controlfield tag="005">20230519145407.0</controlfield>
  <datafield tag="024" ind1="7" ind2=" ">
    <subfield code="2">doi</subfield>
    <subfield code="a">10.1167/jov.21.2.2</subfield>
  </datafield>
  <datafield tag="024" ind1="8" ind2=" ">
    <subfield code="2">sideral</subfield>
    <subfield code="a">122751</subfield>
  </datafield>
  <datafield tag="037" ind1=" " ind2=" ">
    <subfield code="a">ART-2021-122751</subfield>
  </datafield>
  <datafield tag="041" ind1=" " ind2=" ">
    <subfield code="a">eng</subfield>
  </datafield>
  <datafield tag="100" ind1=" " ind2=" ">
    <subfield code="a">Lagunas, Manuel</subfield>
    <subfield code="u">Universidad de Zaragoza</subfield>
  </datafield>
  <datafield tag="245" ind1=" " ind2=" ">
    <subfield code="a">The joint role of geometry and illumination on material recognition</subfield>
  </datafield>
  <datafield tag="260" ind1=" " ind2=" ">
    <subfield code="c">2021</subfield>
  </datafield>
  <datafield tag="506" ind1="0" ind2=" ">
    <subfield code="a">Access copy available to the general public</subfield>
    <subfield code="f">Unrestricted</subfield>
  </datafield>
  <datafield tag="520" ind1="3" ind2=" ">
    <subfield code="a">Observing and recognizing materials is a fundamental part of our daily life. Under typical viewing conditions, we are capable of effortlessly identifying the objects that surround us and recognizing the materials they are made of. Nevertheless, understanding the underlying perceptual processes that take place to accurately discern the visual properties of an object is a long-standing problem. In this work, we perform a comprehensive and systematic analysis of how the interplay of geometry, illumination, and their spatial frequencies affects human performance on material recognition tasks. We carry out large-scale behavioral experiments where participants are asked to recognize different reference materials among a pool of candidate samples. In the different experiments, we carefully sample the information in the frequency domain of the stimuli. From our analysis, we find significant first-order interactions between the geometry and the illumination, of both the reference and the candidates. In addition, we observe that simple image statistics and higher-order image histograms do not correlate with human performance. Therefore, we perform a high-level comparison of highly nonlinear statistics by training a deep neural network on material recognition tasks. Our results show that such models can accurately classify materials, which suggests that they are capable of defining a meaningful representation of material appearance from labeled proximal image data. Last, we find preliminary evidence that these highly nonlinear models and humans may use similar high-level factors for material recognition tasks.</subfield>
  </datafield>
  <datafield tag="536" ind1=" " ind2=" ">
    <subfield code="9">info:eu-repo/grantAgreement/EC/H2020/682080/EU/Intuitive editing of visual appearance from real-world datasets/CHAMELEON</subfield>
    <subfield code="9">This project has received funding from the European Union’s Horizon 2020 research and innovation program under grant agreement No H2020 682080-CHAMELEON</subfield>
    <subfield code="9">info:eu-repo/grantAgreement/EC/H2020/765121/EU/DyViTo: Dynamics in Vision and Touch - the look and feel of stuff/DyViTo</subfield>
    <subfield code="9">This project has received funding from the European Union’s Horizon 2020 research and innovation program under grant agreement No H2020 765121-DyViTo</subfield>
    <subfield code="9">info:eu-repo/grantAgreement/EC/H2020/956585/EU/Predictive Rendering In Manufacture and Engineering/PRIME</subfield>
    <subfield code="9">This project has received funding from the European Union’s Horizon 2020 research and innovation program under grant agreement No H2020 956585-PRIME</subfield>
    <subfield code="9">info:eu-repo/grantAgreement/ES/MINECO/PID2019-105004GB-I00</subfield>
    <subfield code="9">info:eu-repo/grantAgreement/ES/MINECO/TIN2016-78753-P</subfield>
  </datafield>
  <datafield tag="540" ind1=" " ind2=" ">
    <subfield code="9">info:eu-repo/semantics/openAccess</subfield>
    <subfield code="a">by-nc-nd</subfield>
    <subfield code="u">http://creativecommons.org/licenses/by-nc-nd/3.0/es/</subfield>
  </datafield>
  <datafield tag="590" ind1=" " ind2=" ">
    <subfield code="a">2.004</subfield>
    <subfield code="b">2021</subfield>
  </datafield>
  <datafield tag="591" ind1=" " ind2=" ">
    <subfield code="a">OPHTHALMOLOGY</subfield>
    <subfield code="b">44 / 62 = 0.71</subfield>
    <subfield code="c">2021</subfield>
    <subfield code="d">Q3</subfield>
    <subfield code="e">T3</subfield>
  </datafield>
  <datafield tag="592" ind1=" " ind2=" ">
    <subfield code="a">0.79</subfield>
    <subfield code="b">2021</subfield>
  </datafield>
  <datafield tag="593" ind1=" " ind2=" ">
    <subfield code="a">Sensory Systems</subfield>
    <subfield code="c">2021</subfield>
    <subfield code="d">Q2</subfield>
  </datafield>
  <datafield tag="593" ind1=" " ind2=" ">
    <subfield code="a">Ophthalmology</subfield>
    <subfield code="c">2021</subfield>
    <subfield code="d">Q2</subfield>
  </datafield>
  <datafield tag="594" ind1=" " ind2=" ">
    <subfield code="a">3.2</subfield>
    <subfield code="b">2021</subfield>
  </datafield>
  <datafield tag="655" ind1=" " ind2="4">
    <subfield code="a">info:eu-repo/semantics/article</subfield>
    <subfield code="v">info:eu-repo/semantics/publishedVersion</subfield>
  </datafield>
  <datafield tag="700" ind1=" " ind2=" ">
    <subfield code="a">Serrano, Ana</subfield>
    <subfield code="u">Universidad de Zaragoza</subfield>
    <subfield code="0">(orcid)0000-0002-7796-3177</subfield>
  </datafield>
  <datafield tag="700" ind1=" " ind2=" ">
    <subfield code="a">Gutiérrez, Diego</subfield>
    <subfield code="u">Universidad de Zaragoza</subfield>
    <subfield code="0">(orcid)0000-0002-7503-7022</subfield>
  </datafield>
  <datafield tag="700" ind1=" " ind2=" ">
    <subfield code="a">Masiá, Belén</subfield>
    <subfield code="u">Universidad de Zaragoza</subfield>
    <subfield code="0">(orcid)0000-0003-0060-7278</subfield>
  </datafield>
  <datafield tag="710" ind1="2" ind2=" ">
    <subfield code="1">5007</subfield>
    <subfield code="2">570</subfield>
    <subfield code="a">Universidad de Zaragoza</subfield>
    <subfield code="b">Dpto. Informát.Ingenie.Sistms.</subfield>
    <subfield code="c">Área Lenguajes y Sistemas Inf.</subfield>
  </datafield>
  <datafield tag="773" ind1=" " ind2=" ">
    <subfield code="g">21, 2 (2021), 2[18 pp.]</subfield>
    <subfield code="p">J. Vision</subfield>
    <subfield code="t">Journal of Vision</subfield>
    <subfield code="x">1534-7362</subfield>
  </datafield>
  <datafield tag="856" ind1="4" ind2=" ">
    <subfield code="s">2670449</subfield>
    <subfield code="u">http://zaguan.unizar.es/record/100710/files/texto_completo.pdf</subfield>
    <subfield code="y">Versión publicada</subfield>
  </datafield>
  <datafield tag="856" ind1="4" ind2=" ">
    <subfield code="s">3236255</subfield>
    <subfield code="u">http://zaguan.unizar.es/record/100710/files/texto_completo.jpg?subformat=icon</subfield>
    <subfield code="x">icon</subfield>
    <subfield code="y">Versión publicada</subfield>
  </datafield>
  <datafield tag="909" ind1="C" ind2="O">
    <subfield code="o">oai:zaguan.unizar.es:100710</subfield>
    <subfield code="p">articulos</subfield>
    <subfield code="p">driver</subfield>
  </datafield>
  <datafield tag="951" ind1=" " ind2=" ">
    <subfield code="a">2023-05-18-13:48:35</subfield>
  </datafield>
  <datafield tag="980" ind1=" " ind2=" ">
    <subfield code="a">ARTICLE</subfield>
  </datafield>
</record>
</collection>