Abstract: Deep learning has revolutionized the field of machine learning, outperforming traditional methods in a wide range of areas such as computer vision, robotics, and natural language processing. However, deep learning models are often treated as black boxes, providing high accuracy but little insight into how they arrive at their predictions. This lack of transparency can be a major drawback in applications where the ability to understand and trust the model's predictions is crucial. In tasks such as medical diagnosis or autonomous driving, achieving high accuracy is as important as obtaining accurate uncertainty estimates: a well-calibrated uncertainty estimate can provide valuable information for decision-making and prevent risky situations. Bayesian deep learning, and in particular Bayesian neural networks, addresses the challenge of estimating uncertainty by assuming that parameters and predictions follow probability distributions instead of taking deterministic values. This enables the use of Bayesian inference, a statistical technique based on Bayes' rule that allows the distributions of interest to be updated as new observations arrive. However, performing exact Bayesian inference in deep learning models is difficult due to their complexity and dimensionality and the large amounts of data they are trained on. This has led to the development of approximate Bayesian inference methods such as deep ensembles and Monte Carlo dropout, which provide approximate solutions to the problem of uncertainty estimation in deep learning. Even so, these methods do not fully solve the overconfidence problem in deep learning models, which has motivated a renewed interest in recent years in a statistical technique called the Laplace approximation, introduced by David MacKay in 1992.

The focus of this Master's thesis is to study the opportunities offered by the Laplace approximation in Bayesian deep learning. We conduct a thorough investigation and analysis of this technique, which allows us to tackle the overconfidence problem, and we evaluate its performance on benchmark datasets by comparing it with the most commonly used uncertainty estimation methods, deep ensembles and Monte Carlo dropout. We demonstrate the effectiveness of the Laplace approximation in providing accurate and reliable uncertainty estimates for deep learning models; in particular, our experiments show that it outperforms the other techniques at estimating uncertainty in out-of-distribution regions.

As an example of a real-world application, this work explores the use of the Laplace approximation in reinforcement learning (RL). While the Laplace approximation has been widely used in other fields, its potential benefits in RL have yet to be fully explored. Exploration is a crucial problem in RL: an agent must explore its environment to maximize its knowledge and, consequently, its ability to solve different tasks, yet efficient exploration in high-dimensional environments remains an open challenge. A promising approach is to use the degree of novelty as a reward signal. To this end, we propose integrating the Laplace approximation into a model-based RL algorithm that performs active exploration by using the estimated uncertainty as a novelty measure. Our experiments show that this approach outperforms the baselines.
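
For reference, a minimal sketch of the two results the abstract builds on, written in their standard textbook form (the notation, with $\theta$ for the network parameters and $\mathcal{D}$ for the training data, is an assumption and is not taken from the thesis body): Bayes' rule updates the posterior over the parameters as new observations arrive, and the Laplace approximation replaces that posterior with a Gaussian centered at the maximum a posteriori (MAP) estimate, whose covariance is the inverse Hessian of the negative log posterior.
\[
p(\theta \mid \mathcal{D}) = \frac{p(\mathcal{D} \mid \theta)\, p(\theta)}{p(\mathcal{D})},
\qquad
p(\theta \mid \mathcal{D}) \approx \mathcal{N}\!\left(\theta \,;\, \theta_{\mathrm{MAP}},\, H^{-1}\right),
\quad
H = -\nabla^2_{\theta} \log p(\theta \mid \mathcal{D}) \Big|_{\theta = \theta_{\mathrm{MAP}}}.
\]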