Practical Bayesian optimization in the presence of outliers
Abstract: Inference in the presence of outliers is an important field of research, as outliers are ubiquitous and may arise across a variety of problems and domains. Bayesian optimization is a method that heavily relies on probabilistic inference. This allows outstanding sample efficiency because the probabilistic machinery provides a memory of the whole optimization process. However, that virtue becomes a disadvantage when the memory is populated with outliers, inducing bias in the estimation. In this paper, we present an empirical evaluation of Bayesian optimization methods in the presence of outliers. The empirical evidence shows that Bayesian optimization with robust regression often produces suboptimal results. We then propose a new algorithm which combines robust regression (a Gaussian process with Student-t likelihood) with outlier diagnostics to classify data points as outliers or inliers. By using a scheduler for the classification of outliers, our method is more efficient and has better convergence than standard robust regression. Furthermore, we show that even in controlled situations with no expected outliers, our method is able to produce better results.
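
For context, the sketch below illustrates the robust-regression building block described in the abstract: a Gaussian process with a Student-t likelihood fitted with a Laplace approximation, followed by a simple residual-based outlier flag. It uses the GPy library for the GP, and the MAD-based flagging rule is an illustrative assumption; neither is the authors' implementation nor the paper's exact diagnostic.

# Illustrative sketch (not the paper's code): robust GP regression with a
# Student-t likelihood via GPy, plus a simple residual-based outlier flag.
import numpy as np
import GPy

rng = np.random.default_rng(0)

# Toy 1-D data with two artificially corrupted observations.
X = np.linspace(0.0, 1.0, 30)[:, None]
Y = np.sin(6.0 * X) + 0.05 * rng.standard_normal(X.shape)
Y[[5, 17]] += 2.0  # injected outliers

kernel = GPy.kern.RBF(input_dim=1)
likelihood = GPy.likelihoods.StudentT(deg_free=4.0, sigma2=0.05)
laplace = GPy.inference.latent_function_inference.Laplace()

# The likelihood is non-Gaussian, so inference uses the Laplace approximation.
model = GPy.core.GP(X, Y, kernel=kernel, likelihood=likelihood,
                    inference_method=laplace)
model.optimize(messages=False)

# Assumed diagnostic for illustration: flag points whose residual against the
# robust posterior mean exceeds 3 robust (MAD-based) standard deviations.
mu, _ = model.predict_noiseless(X)
resid = (Y - mu).ravel()
mad = np.median(np.abs(resid - np.median(resid)))
flagged = np.where(np.abs(resid) > 3.0 * 1.4826 * mad)[0]
print("flagged as outliers:", flagged)

In the paper's setting, this kind of outlier/inlier classification is applied inside the Bayesian optimization loop according to a scheduler rather than on a fixed dataset, which is what yields the reported efficiency and convergence gains.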
Language: English
Year: 2018
Published in: Proceedings of Machine Learning Research 84 (AISTATS) (2018), 1722-1731
ISSN: 2640-3498

Originally available at: Journal full text

Type and form: Article (Preprint)

Rights: All rights reserved by the journal editor



