000048675 001__ 48675
000048675 005__ 20200221144247.0
000048675 0247_ $$2doi$$a10.3390/s16040493
000048675 0248_ $$2sideral$$a94729
000048675 037__ $$aART-2016-94729
000048675 041__ $$aeng
000048675 100__ $$0(orcid)0000-0001-6738-3382$$aRituerto, A.
000048675 245__ $$aBuilding an enhanced vocabulary of the robot environment with a ceiling pointing camera
000048675 260__ $$c2016
000048675 5060_ $$aAccess copy available to the general public$$fUnrestricted
000048675 5203_ $$aMobile robots are of great help for automatic monitoring tasks in different environments. One of the first tasks that needs to be addressed when creating these kinds of robotic systems is modeling the robot environment. This work proposes a pipeline to build an enhanced visual model of a robot environment indoors. Vision-based recognition approaches frequently use quantized feature spaces, commonly known as Bag of Words (BoW) or vocabulary representations. A drawback of standard BoW approaches is that semantic information is not considered as a criterion to create the visual words. To address this challenging task, this paper studies how to leverage the standard vocabulary construction process to obtain a more meaningful visual vocabulary of the robot work environment using image sequences. We take advantage of spatio-temporal constraints and prior knowledge about the position of the camera. The key contribution of our work is the definition of a new pipeline to create a model of the environment. This pipeline incorporates (1) tracking information into the vocabulary construction process and (2) geometric cues into the appearance descriptors. Motivated by long-term robotic applications, such as the aforementioned monitoring tasks, we focus on a configuration where the robot camera points at the ceiling, which captures more stable regions of the environment. The experimental validation shows how our vocabulary models the environment in more detail than standard vocabulary approaches, without loss of recognition performance. We show different robotic tasks that could benefit from the use of our visual vocabulary approach, such as place recognition or object discovery. For this validation, we use our publicly available dataset.
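000048675 500__ $$aEditorial note: the abstract contrasts the proposed pipeline with standard BoW vocabularies. For readers unfamiliar with that baseline, the following is a minimal sketch of standard BoW vocabulary construction (k-means clustering of local descriptors, then per-image word histograms). All names and parameters are illustrative and not taken from the paper.

    # Minimal sketch of a standard Bag of Words (BoW) visual vocabulary,
    # the baseline contrasted in the abstract. Illustrative only.
    import numpy as np
    from sklearn.cluster import MiniBatchKMeans

    def build_vocabulary(descriptors, k=256, seed=0):
        """Cluster local feature descriptors (n x d array) into k visual words."""
        kmeans = MiniBatchKMeans(n_clusters=k, random_state=seed, n_init=3)
        kmeans.fit(descriptors)
        return kmeans  # the cluster centres act as the visual words

    def bow_histogram(kmeans, image_descriptors):
        """Describe one image as a normalized histogram of visual words."""
        words = kmeans.predict(image_descriptors)
        hist = np.bincount(words, minlength=kmeans.n_clusters).astype(float)
        return hist / max(hist.sum(), 1.0)

    # Usage: stack local descriptors (e.g. SIFT) from all training images,
    # build the vocabulary once, then quantize each new image against it.
    # vocab = build_vocabulary(np.vstack(all_train_descriptors))
    # h = bow_histogram(vocab, descriptors_of_one_image)

The paper's contribution, per the abstract, is to enrich this construction with tracking information and geometric cues; the sketch above covers only the standard quantization step it builds on.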
000048675 536__ $$9info:eu-repo/grantAgreement/ES/MINECO/DPI2015-65962-R
000048675 540__ $$9info:eu-repo/semantics/openAccess$$aby$$uhttp://creativecommons.org/licenses/by/3.0/es/
000048675 590__ $$a2.677$$b2016
000048675 591__ $$aINSTRUMENTS & INSTRUMENTATION$$b10 / 58 = 0.172$$c2016$$dQ1$$eT1
000048675 591__ $$aCHEMISTRY, ANALYTICAL$$b25 / 76 = 0.329$$c2016$$dQ2$$eT1
000048675 591__ $$aELECTROCHEMISTRY$$b12 / 29 = 0.414$$c2016$$dQ2$$eT2
000048675 592__ $$a0.623$$b2016
000048675 593__ $$aElectrical and Electronic Engineering$$c2016$$dQ1
000048675 593__ $$aAnalytical Chemistry$$c2016$$dQ2
000048675 593__ $$aAtomic and Molecular Physics, and Optics$$c2016$$dQ2
000048675 593__ $$aMedicine (miscellaneous)$$c2016$$dQ2
000048675 593__ $$aInstrumentation$$c2016$$dQ2
000048675 593__ $$aBiochemistry$$c2016$$dQ3
000048675 655_4 $$ainfo:eu-repo/semantics/article$$vinfo:eu-repo/semantics/publishedVersion
000048675 700__ $$aAndreasson, H.
000048675 700__ $$0(orcid)0000-0002-7580-9037$$aMurillo, A.C.$$uUniversidad de Zaragoza
000048675 700__ $$aLilienthal, A.
000048675 700__ $$0(orcid)0000-0001-5209-2267$$aGuerrero, J.J.$$uUniversidad de Zaragoza
000048675 7102_ $$15007$$2520$$aUniversidad de Zaragoza$$bDpto. Informát.Ingenie.Sistms.$$cÁrea Ingen.Sistemas y Automát.
000048675 773__ $$g16, 4 (2016), 493$$pSensors$$tSensors (Switzerland)$$x1424-8220
000048675 8564_ $$s1000031$$uhttps://zaguan.unizar.es/record/48675/files/texto_completo.pdf$$yPublished version
000048675 8564_ $$s103908$$uhttps://zaguan.unizar.es/record/48675/files/texto_completo.jpg?subformat=icon$$xicon$$yPublished version
000048675 909CO $$ooai:zaguan.unizar.es:48675$$particulos$$pdriver
000048675 951__ $$a2020-02-21-13:26:08
000048675 980__ $$aARTICLE