Integration of causal inference in the DQN sampling process for classical control problems
Abstract: In this study, causal inference is integrated into deep reinforcement learning to enhance sampling in classical control environments. We address classical control problems, in which an agent makes sequential decisions to keep a system balanced. Combining artificial intelligence with causal inference, we develop a method that modifies a deep Q-network's experience memory by adjusting the priority of stored transitions. These priorities are based on the magnitude of the causal differences attributable to the agent's actions. We apply our methodology to a reference reinforcement learning environment; compared with a deep Q-network based on conventional random sampling, the results indicate significant improvements in performance and learning efficiency. Our study shows that integrating causal inference into the sampling process allows experience transitions to be selected more intelligently, resulting in more effective learning for classical control problems. The work contributes to the convergence of artificial intelligence and causal inference, offering new perspectives for applying reinforcement learning techniques in real-world settings where precise control is essential.
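
To make the sampling idea in the abstract concrete, the sketch below shows one plausible form of a replay buffer that weights transitions by the magnitude of a causal-effect score, as described above. It is a minimal Python illustration, not the authors' implementation: the class name, the causal_magnitude argument, and the proportional-sampling rule are assumptions introduced here, and the causal score itself is presumed to come from a separate estimation step.

import numpy as np
from collections import deque

class CausallyPrioritizedReplay:
    """Replay memory whose sampling probabilities follow a causal-effect score.

    Minimal sketch of the general idea: each stored transition receives a
    priority equal to the magnitude of an externally estimated causal
    difference for the agent's action, and sampling is proportional to it.
    """

    def __init__(self, capacity=10000, eps=1e-3):
        self.buffer = deque(maxlen=capacity)       # (s, a, r, s_next, done) tuples
        self.priorities = deque(maxlen=capacity)   # one priority per transition
        self.eps = eps                             # keeps every transition sampleable

    def push(self, transition, causal_magnitude):
        # causal_magnitude is a hypothetical input: the absolute causal
        # difference attributed to the agent's action, computed elsewhere.
        self.buffer.append(transition)
        self.priorities.append(abs(causal_magnitude) + self.eps)

    def sample(self, batch_size):
        # Draw indices with probability proportional to the stored priorities.
        probs = np.asarray(self.priorities, dtype=np.float64)
        probs /= probs.sum()
        idx = np.random.choice(len(self.buffer), size=batch_size, p=probs)
        return [self.buffer[i] for i in idx]

In a DQN training loop, such a buffer would simply take the place of the uniform replay memory: transitions are pushed together with their causal score, and minibatches drawn from sample() feed the usual temporal-difference update.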
Language: English
DOI: 10.1007/s00521-024-10540-4
Year: 2024
Published in: Neural Computing and Applications (2024), 13 pp.
ISSN: 0941-0643

Type and form: Article (published version)
Area (Department): Área de Lenguajes y Sistemas Informáticos (Departamento de Informática e Ingeniería de Sistemas)

Rights: Rights reserved by the journal publisher



