Abstract: Artificial Neural Networks are a Machine Learning algorithm inspired by the structure of biological neurons, which are organized in layers. Deep Learning is the branch of Machine Learning that comprises the techniques used to build Deep Neural Networks (Artificial Neural Networks with at least two hidden layers), which are able to learn from data at several levels of abstraction. A Feed-forward Neural Network with fully-connected layers is a Deep Neural Network in which information flows forward and the neurons of consecutive layers are fully connected. Its architecture is based on the weights of the connections between neurons and on the bias that each neuron adds to the information it receives; the values of these parameters are fitted during training. This learning process reduces to an optimization problem that can be solved with Gradient Descent or with more recent algorithms such as Scheduled Restart Stochastic Gradient Descent, with Back Propagation being the algorithm used to compute the required derivatives. If the network does not learn correctly, overfitting or underfitting can arise. Other parameters of the neural network (the hyperparameters) are not tuned during training. To perform tasks such as image classification or prediction, other types of Deep Neural Networks, such as Convolutional Neural Networks or Recurrent Neural Networks, must be used.
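
To make the ideas summarized above concrete, the following minimal sketch (not the code of this work) trains a fully-connected feed-forward network with one hidden layer by plain Gradient Descent, computing the derivatives by Back Propagation. The dataset (XOR), the layer sizes, the number of epochs and the learning rate are illustrative assumptions chosen only for the example.

```python
# Minimal sketch: fully-connected feed-forward network, one hidden layer,
# trained with Gradient Descent; derivatives computed via Back Propagation.
import numpy as np

rng = np.random.default_rng(0)

# Toy data (assumption): the XOR problem, which is not linearly separable.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Parameters: weights of the connections and the bias added by each neuron.
W1, b1 = rng.normal(size=(2, 8)), np.zeros((1, 8))
W2, b2 = rng.normal(size=(8, 1)), np.zeros((1, 1))

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 0.5  # learning rate: a hyperparameter, fixed before training

for epoch in range(5000):
    # Forward pass: information flows forward through the layers.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Back Propagation: chain rule applied layer by layer (squared-error loss).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient Descent step: fit the parameters along the negative gradient.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(out.round(3))  # predictions should approach [0, 1, 1, 0]
```

In this sketch the weights and biases are the parameters fitted during training, while quantities such as the learning rate, the number of hidden neurons and the number of epochs are hyperparameters that remain fixed throughout the optimization.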