000064538 001__ 64538
000064538 005__ 20180531095510.0
000064538 0248_ $$2sideral$$a104011
000064538 037__ $$aART-2017-104011
000064538 041__ $$aeng
000064538 100__ $$0(orcid)0000-0002-3567-3294$$aAzagra, Pablo$$uUniversidad de Zaragoza
000064538 245__ $$aA Multimodal Dataset for Object Model Learning from Natural Human-Robot Interaction
000064538 260__ $$c2017
000064538 5060_ $$aAccess copy available to the general public$$fUnrestricted
000064538 5203_ $$aLearning object models in the wild from natural human interactions is an essential ability for robots to perform general tasks. In this paper we present a robocentric multimodal dataset addressing this key challenge. Our dataset focuses on interactions where the user teaches new objects to the robot in various ways. It contains synchronized recordings of visual (3 cameras) and audio data which provide a challenging evaluation framework for different tasks.
Additionally, we present an end-to-end system that learns object models using object patches extracted from the recorded natural interactions. Our proposed pipeline follows these steps: (a) recognizing the interaction type, (b) detecting the object that the interaction is focusing on, and (c) learning the models from the extracted data. Our main contribution lies in the steps towards identifying the target object patches of the images. We demonstrate the advantages of combining language and visual features for the interaction recognition and use multiple views to improve the object modelling.
Our experimental results show that our dataset is challenging due to occlusions and domain change with respect to typical object learning frameworks. The performance of common out-of-the-box classifiers trained on our data is low. We demonstrate that our algorithm outperforms such baselines.
000064538 536__ $$9info:eu-repo/grantAgreement/EUR/FP7/3rdHand-610878$$9info:eu-repo/grantAgreement/ES/MINECO/PCIN-2015-122
000064538 540__ $$9info:eu-repo/semantics/openAccess$$aby$$uhttp://creativecommons.org/licenses/by/3.0/es/
000064538 655_4 $$ainfo:eu-repo/semantics/article$$vinfo:eu-repo/semantics/acceptedVersion
000064538 700__ $$aGolemo, Florian
000064538 700__ $$aMollard, Yoan
000064538 700__ $$aLopes, Manuel
000064538 700__ $$0(orcid)0000-0003-1368-1151$$aCivera, Javier$$uUniversidad de Zaragoza
000064538 700__ $$0(orcid)0000-0002-7580-9037$$aMurillo Arnal, Ana Cristina$$uUniversidad de Zaragoza
000064538 7102_ $$15007$$2520$$aUniversidad de Zaragoza$$bDepartamento de Informática e Ingeniería de Sistemas$$cIngeniería de Sistemas y Automática
000064538 773__ $$g(2017), [8 pp.]$$pProc. IEEE/RSJ Int. Conf. Intell. Rob. Syst.$$tProceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems$$x2153-0858
000064538 85641 $$uhttps://hal.inria.fr/hal-01567236$$zTexto completo de la revista
000064538 8564_ $$s8213971$$uhttps://zaguan.unizar.es/record/64538/files/texto_completo.pdf$$yPostprint
000064538 8564_ $$s134009$$uhttps://zaguan.unizar.es/record/64538/files/texto_completo.jpg?subformat=icon$$xicon$$yPostprint
000064538 909CO $$ooai:zaguan.unizar.es:64538$$particulos$$pdriver
000064538 951__ $$a2018-05-31-09:49:03
000064538 980__ $$aARTICLE