<?xml version="1.0" encoding="UTF-8"?>
<collection>
<dc:dc xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:invenio="http://invenio-software.org/elements/1.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/oai_dc/ http://www.openarchives.org/OAI/2.0/oai_dc.xsd"><dc:identifier>doi:10.1016/j.cviu.2018.01.003</dc:identifier><dc:language>eng</dc:language><dc:creator>Bermúdez-Cameo, Jesús</dc:creator><dc:creator>López-Nicolás, Gonzalo</dc:creator><dc:creator>Guerrero, José J.</dc:creator><dc:title>Fitting line projections in non-central catadioptric cameras with revolution symmetry</dc:title><dc:identifier>ART-2018-104898</dc:identifier><dc:description>Line-images in non-central cameras contain much richer information about the original 3D line than line projections in central cameras. In most non-central catadioptric cameras, the projection surface of a 3D line is a ruled surface that encapsulates the complete information of the 3D line. The resulting line-image is a curve containing the four degrees of freedom of the 3D line. This represents a qualitative advantage over the central case, although extracting this curve is quite difficult. In this paper, we focus on the analytical description of line-images in non-central catadioptric systems with symmetry of revolution. As a direct application, we present a method for automatic line-image extraction in calibrated conical and spherical catadioptric cameras. To design this method, we have analytically solved the metric distance from a point to a line-image for non-central catadioptric systems. We also propose a distance, which we call the effective baseline, that measures the quality of the reconstruction of a 3D line from the minimum number of rays. This measure is used to evaluate the random hypotheses of a robust scheme, allowing us to reduce the number of trials in the process. The proposal is tested and evaluated in simulations and with both synthetic and real images.</dc:description><dc:date>2018</dc:date><dc:source>http://zaguan.unizar.es/record/77264</dc:source><dc:doi>10.1016/j.cviu.2018.01.003</dc:doi><dc:identifier>http://zaguan.unizar.es/record/77264</dc:identifier><dc:identifier>oai:zaguan.unizar.es:77264</dc:identifier><dc:relation>info:eu-repo/grantAgreement/ES/MINECO/DPI2014-61792-EXP</dc:relation><dc:relation>info:eu-repo/grantAgreement/ES/MINECO/DPI2015-65962-R</dc:relation><dc:identifier.citation>COMPUTER VISION AND IMAGE UNDERSTANDING 167 (2018), 134-152</dc:identifier.citation><dc:rights>by-nc-nd</dc:rights><dc:rights>http://creativecommons.org/licenses/by-nc-nd/3.0/es/</dc:rights><dc:rights>info:eu-repo/semantics/openAccess</dc:rights></dc:dc>

</collection>