Abstract: Visual-Inertial Navigation Systems (VINS) are a significant research topic in computer vision, robotics, and autonomous driving. Point-line VINS have recently attracted considerable attention due to their increased robustness and accuracy compared to point-only VINS. However, their effectiveness relies on the existence of clear line structures within the scene: point-line VINS may become inaccurate or fail when scenes contain scattered lines or other features such as arcs. Moreover, extracting and matching line features incurs computational overhead due to complex geometric models. To address these VINS challenges without the line-related overhead, we propose a novel approach, denoted PE-VINS, which adds edge features to point-based VINS. Our proposed method exploits edge features in the scene to establish additional correspondences between views, thereby improving accuracy and robustness. In the front end, our method identifies edge features using image gradients and selects the most informative ones. We leverage sparse optical flow to track the selected edge features and triangulate them using the initial pose predicted by the Inertial Measurement Unit (IMU). In the back end, we present a novel edge feature residual formulation that differs from the traditional reprojection residual. We tightly couple the new edge residual with the reprojection and IMU preintegration residuals to better refine camera poses. We evaluate PE-VINS on public datasets, and our results show that it outperforms existing point-line-based methods and achieves state-of-the-art VINS performance. The code will be released at https://github.com/BlueAkoasm/PE-VINS .
Language: English
DOI: 10.1109/TIV.2024.3418525
Year: 2024
Published in: IEEE Transactions on Intelligent Vehicles (2024), 1-11
ISSN: 2379-8858
Type and form: Article (Postprint)
Area (Department): Systems Engineering and Automation Area (Dept. of Computer Science and Systems Engineering)
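As a minimal, hypothetical sketch of the front-end step the abstract describes (identifying edge features from image gradients and keeping the most informative ones) — not the paper's actual implementation — the idea can be illustrated in pure Python on a tiny synthetic image:

```python
# Hypothetical sketch, NOT PE-VINS code: rank pixels by gradient magnitude
# and keep the top-k as candidate edge features, as the abstract's front-end
# selection step suggests. A real system would use an image library and
# subpixel refinement; this uses plain lists for self-containment.

def gradient_magnitude(img):
    """Central-difference gradient magnitude of a 2D grayscale image."""
    h, w = len(img), len(img[0])
    mag = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = (img[y][x + 1] - img[y][x - 1]) / 2.0
            gy = (img[y + 1][x] - img[y - 1][x]) / 2.0
            mag[y][x] = (gx * gx + gy * gy) ** 0.5
    return mag

def select_edge_features(img, k):
    """Return (x, y) coordinates of the k strongest-gradient pixels."""
    mag = gradient_magnitude(img)
    scored = [(mag[y][x], (x, y))
              for y in range(len(img)) for x in range(len(img[0]))]
    scored.sort(reverse=True)
    return [xy for _, xy in scored[:k]]

# Synthetic 8x8 image: dark left half, bright right half -> a vertical edge
# between columns 3 and 4, so the selected features cluster there.
img = [[0] * 4 + [255] * 4 for _ in range(8)]
features = select_edge_features(img, 6)
```

In a full pipeline, these selected features would then be tracked across frames with sparse optical flow (e.g., pyramidal Lucas-Kanade) and triangulated using the IMU-predicted pose, as the abstract outlines.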