<?xml version="1.0" encoding="UTF-8"?>
<collection xmlns="http://www.loc.gov/MARC21/slim">
<record>
  <controlfield tag="001">101500</controlfield>
  <controlfield tag="005">20210507085643.0</controlfield>
  <datafield tag="024" ind1="7" ind2=" ">
    <subfield code="2">doi</subfield>
    <subfield code="a">10.1109/IROS.2018.8594185</subfield>
  </datafield>
  <datafield tag="024" ind1="8" ind2=" ">
    <subfield code="2">sideral</subfield>
    <subfield code="a">107640</subfield>
  </datafield>
  <datafield tag="037" ind1=" " ind2=" ">
    <subfield code="a">ART-2018-107640</subfield>
  </datafield>
  <datafield tag="041" ind1=" " ind2=" ">
    <subfield code="a">eng</subfield>
  </datafield>
  <datafield tag="100" ind1=" " ind2=" ">
    <subfield code="a">Alonso Ruiz, Iñigo</subfield>
    <subfield code="u">Universidad de Zaragoza</subfield>
    <subfield code="0">(orcid)0000-0003-4638-4655</subfield>
  </datafield>
  <datafield tag="245" ind1=" " ind2=" ">
    <subfield code="a">Semantic Segmentation from Sparse Labeling Using Multi-Level Superpixels</subfield>
  </datafield>
  <datafield tag="260" ind1=" " ind2=" ">
    <subfield code="c">2018</subfield>
  </datafield>
  <datafield tag="520" ind1="3" ind2=" ">
    <subfield code="a">Semantic segmentation is a challenging problem that can benefit numerous robotics applications, since it provides information about the content at every image pixel. Solutions to this problem have recently witnessed a boost in performance and results thanks to deep learning approaches. Unfortunately, common deep learning models for semantic segmentation present several challenges which hinder real-life applicability in many domains. A significant challenge is the need for pixel-level labeling of large amounts of training images in order to train those models, which implies a very high cost. This work proposes and validates a simple but effective approach to train dense semantic segmentation models from sparsely labeled data. Labeling only a few pixels per image reduces the human interaction required. We find many available datasets, e.g., environment monitoring data, that provide this kind of sparse labeling. Our approach is based on augmenting the sparse annotation to a dense one with the proposed adaptive superpixel segmentation propagation. We show that this label augmentation enables effective learning of state-of-the-art segmentation models, getting similar results to those models trained with dense ground-truth.</subfield>
  </datafield>
  <datafield tag="506" ind1="0" ind2=" ">
    <subfield code="a">Access copy available to the general public</subfield>
    <subfield code="f">Unrestricted</subfield>
  </datafield>
  <datafield tag="536" ind1=" " ind2=" ">
    <subfield code="9">info:eu-repo/grantAgreement/ES/DGA/T45-17R</subfield>
    <subfield code="9">info:eu-repo/grantAgreement/ES/MINECO-FEDER/DPI2015-69376-R</subfield>
    <subfield code="9">info:eu-repo/grantAgreement/ES/UZ/UZ2017-TEC-06</subfield>
  </datafield>
  <datafield tag="540" ind1=" " ind2=" ">
    <subfield code="9">info:eu-repo/semantics/openAccess</subfield>
    <subfield code="a">by</subfield>
    <subfield code="u">http://creativecommons.org/licenses/by/3.0/es/</subfield>
  </datafield>
  <datafield tag="655" ind1=" " ind2="4">
    <subfield code="a">info:eu-repo/semantics/article</subfield>
    <subfield code="v">info:eu-repo/semantics/acceptedVersion</subfield>
  </datafield>
  <datafield tag="700" ind1=" " ind2=" ">
    <subfield code="a">Murillo Arnal, Ana Cristina</subfield>
    <subfield code="u">Universidad de Zaragoza</subfield>
    <subfield code="0">(orcid)0000-0002-7580-9037</subfield>
  </datafield>
  <datafield tag="710" ind1="2" ind2=" ">
    <subfield code="1">5007</subfield>
    <subfield code="2">520</subfield>
    <subfield code="a">Universidad de Zaragoza</subfield>
    <subfield code="b">Dpto. Informát.Ingenie.Sistms.</subfield>
    <subfield code="c">Área Ingen.Sistemas y Automát.</subfield>
  </datafield>
  <datafield tag="773" ind1=" " ind2=" ">
    <subfield code="g">2018, 18401073 (2018), 5785-5792</subfield>
    <subfield code="p">Proc. IEEE/RSJ Int. Conf. Intell. Rob. Syst.</subfield>
    <subfield code="t">Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems</subfield>
    <subfield code="x">2153-0858</subfield>
  </datafield>
  <datafield tag="856" ind1="4" ind2=" ">
    <subfield code="s">3286918</subfield>
    <subfield code="u">http://zaguan.unizar.es/record/101500/files/texto_completo.pdf</subfield>
    <subfield code="y">Postprint</subfield>
  </datafield>
  <datafield tag="856" ind1="4" ind2=" ">
    <subfield code="s">3109547</subfield>
    <subfield code="u">http://zaguan.unizar.es/record/101500/files/texto_completo.jpg?subformat=icon</subfield>
    <subfield code="x">icon</subfield>
    <subfield code="y">Postprint</subfield>
  </datafield>
  <datafield tag="909" ind1="C" ind2="O">
    <subfield code="o">oai:zaguan.unizar.es:101500</subfield>
    <subfield code="p">articulos</subfield>
    <subfield code="p">driver</subfield>
  </datafield>
  <datafield tag="951" ind1=" " ind2=" ">
    <subfield code="a">2021-05-07-08:04:19</subfield>
  </datafield>
  <datafield tag="980" ind1=" " ind2=" ">
    <subfield code="a">ARTICLE</subfield>
  </datafield>
</record>
</collection>