Abstract: Recovering 3D information from intra-operative endoscopic images, together with the relative pose of the endoscope camera, are fundamental building blocks towards accurate guidance and navigation in image-guided surgery. They allow augmented reality overlay of pre-operative models, which are readily available from different imaging modalities. This thesis provides a systematic approach for estimating these two pieces of information based on pure vision Simultaneous Localization And Mapping (SLAM). The goal of SLAM is to localize a camera sensor, in real time, within a map (3D reconstruction) of the environment that is also built online. It enables markerless camera tracking, using only the RGB images of a standard monocular camera.

The preliminary work in this thesis presents a sparse SLAM solution for real-time, accurate intra-operative visualization of the patient's pre-operative models over the patient's skin. We propose a non-invasive registration and visualization pipeline that requires minimal interaction from medical staff and runs solely on a commodity Tablet-PC with a built-in camera. Subsequently, we directed our focus to endoscopy, which is very challenging for monocular 3D reconstruction and endoscope camera tracking. We addressed the use of state-of-the-art sparse SLAM and achieved remarkable tracking performance. Our second contribution is thus a pairwise dense reconstruction algorithm that exploits the initial SLAM exploration phase to accurately provide a pairwise dense reconstruction of the surgical scene. A further contribution extends state-of-the-art sparse SLAM with a novel dense multi-view stereo-like approach to perform live dense reconstruction, thus eliminating the wait for the abdominal cavity exploration. We decouple the dense reconstruction from the camera trajectory estimation, resulting in a system that combines the accuracy and robustness of feature-based SLAM with the more complete reconstructions of direct SLAM methods. The proposed system copes with the challenging lighting conditions and poor or repetitive textures of endoscopy within an affordable time budget on a modern GPU. It has been validated and evaluated on real porcine sequences of abdominal cavity exploration, showing superior performance to other dense SLAM methods in terms of accuracy, density, and computation time. It has also been tested on different indoor sequences and showed promising reconstruction results.

The solutions proposed in this thesis have been validated on real porcine in-vivo and ex-vivo sequences from different datasets. They have proved to be fast and require neither external tracking hardware nor significant intervention from medical staff, other than moving the Tablet-PC or the endoscope. They can therefore be integrated easily into the current surgical workflow.