<?xml version="1.0" encoding="UTF-8"?>
<collection>
<dc:dc xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:invenio="http://invenio-software.org/elements/1.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/oai_dc/ http://www.openarchives.org/OAI/2.0/oai_dc.xsd">
  <dc:identifier>doi:10.1109/LRA.2018.2860039</dc:identifier>
  <dc:language>eng</dc:language>
  <dc:creator>Bescós, Berta</dc:creator>
  <dc:creator>Fácil, José María</dc:creator>
  <dc:creator>Civera, Javier</dc:creator>
  <dc:creator>Neira, José</dc:creator>
  <dc:title>DynaSLAM: Tracking, Mapping and Inpainting in Dynamic Scenes</dc:title>
  <dc:identifier>ART-2018-106907</dc:identifier>
  <dc:description>The assumption of scene rigidity is typical in SLAM algorithms. Such a strong assumption limits the use of most visual SLAM systems in populated real-world environments, which are the target of several relevant applications such as service robotics or autonomous vehicles. In this paper we present DynaSLAM, a visual SLAM system that, building on ORB-SLAM2 [1], adds the capabilities of dynamic object detection and background inpainting. DynaSLAM is robust in dynamic scenarios for monocular, stereo and RGB-D configurations. We are capable of detecting moving objects either by multi-view geometry, by deep learning, or by both. Having a static map of the scene allows inpainting the frame background that has been occluded by such dynamic objects. We evaluate our system on public monocular, stereo and RGB-D datasets. We study the impact of several accuracy/speed trade-offs to assess the limits of the proposed methodology. DynaSLAM outperforms standard visual SLAM baselines in accuracy in highly dynamic scenarios. It also estimates a map of the static parts of the scene, which is essential for long-term applications in real-world environments.</dc:description>
  <dc:date>2018</dc:date>
  <dc:source>http://zaguan.unizar.es/record/71183</dc:source>
  <dc:doi>10.1109/LRA.2018.2860039</dc:doi>
  <dc:identifier>http://zaguan.unizar.es/record/71183</dc:identifier>
  <dc:identifier>oai:zaguan.unizar.es:71183</dc:identifier>
  <dc:relation>info:eu-repo/grantAgreement/ES/DGA/T04</dc:relation>
  <dc:relation>info:eu-repo/grantAgreement/ES/MINECO/BES-2016-077836</dc:relation>
  <dc:relation>info:eu-repo/grantAgreement/ES/MINECO/DPI2015-67275</dc:relation>
  <dc:relation>info:eu-repo/grantAgreement/ES/MINECO/DPI2015-68905-P</dc:relation>
  <dc:identifier.citation>IEEE ROBOTICS AND AUTOMATION LETTERS 3, 4 (2018), 4076-4083</dc:identifier.citation>
  <dc:rights>by</dc:rights>
  <dc:rights>http://creativecommons.org/licenses/by/3.0/es/</dc:rights>
  <dc:rights>info:eu-repo/semantics/openAccess</dc:rights>
</dc:dc>

</collection>