Assembling stochastic quasi-Newton algorithms using Gaussian processes

Thomas Schön, Uppsala University, Sweden

Abstract:

In this talk I will focus on one of our recent developments, where we show how the Gaussian process (GP) can be used to solve stochastic optimization problems. Our main motivation for studying these problems is that they arise when we estimate unknown parameters in nonlinear state-space models using sequential Monte Carlo (SMC). The very nature of this problem is such that we can only access the cost function (here, the likelihood function) and its derivatives via noisy observations, since no closed-form expressions are available; SMC methods do, however, provide unbiased estimates of the likelihood. Our development is nevertheless fully general and hence applicable to any stochastic optimization problem. We start from the fact that many existing quasi-Newton algorithms can be formulated as learning algorithms, capable of learning local models of the cost function. Inspired by this, we assemble new stochastic quasi-Newton-type algorithms that are applicable in situations where we only have access to noisy observations of the cost function and its derivatives. In particular, we show how the GP model can be used to learn the Hessian, allowing these stochastic optimization problems to be solved efficiently. Additional motivation for studying the stochastic optimization problem stems from the fact that it arises in almost all large-scale supervised machine learning problems, not least in deep learning. I will very briefly mention some ongoing work in which we remove the GP representation and scale our ideas to much higher dimensions (both in the size of the dataset and in the number of unknown parameters).
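To give a flavour of the idea, the sketch below is a schematic illustration only, not the speaker's actual algorithm: each component of a noisy gradient is modelled with a GP (squared-exponential kernel), the GP posterior means give a denoised gradient, and their derivatives, stacked row-wise, give a local Hessian estimate that drives a damped Newton step. The toy quadratic cost, the kernel, and all hyperparameters are assumptions made purely for the illustration.

# Illustrative sketch (not the speaker's algorithm): learn a local Hessian
# from noisy gradient observations with a GP, then take damped Newton steps.
import numpy as np

rng = np.random.default_rng(0)

# Toy problem (an assumption for this sketch): quadratic cost with
# gradient g(x) = A x - b, observed through additive Gaussian noise.
A = np.array([[3.0, 0.5], [0.5, 1.0]])
b = np.array([1.0, -2.0])
noise_std = 0.1

def noisy_grad(x):
    return A @ x - b + noise_std * rng.standard_normal(2)

# Squared-exponential kernel and its gradient w.r.t. the first argument.
ell, sig2 = 1.0, 1.0

def kern(x1, x2):
    d = x1 - x2
    return sig2 * np.exp(-0.5 * d @ d / ell**2)

def kern_grad(x, xi):
    return -(x - xi) / ell**2 * kern(x, xi)

def gp_newton_step(x, X, G, damping=1e-2):
    """Fit one GP per gradient component to the noisy observations (X, G);
    the posterior means give a denoised gradient at x, and their derivatives
    give a Hessian estimate for a damped Newton step."""
    n = len(X)
    K = np.array([[kern(a, c) for c in X] for a in X]) + noise_std**2 * np.eye(n)
    kx = np.array([kern(x, xi) for xi in X])
    dkx = np.array([kern_grad(x, xi) for xi in X])   # shape (n, dim)
    g_hat = np.empty_like(x)
    H_hat = np.empty((len(x), len(x)))
    for j in range(len(x)):
        yj = G[:, j]
        alpha = np.linalg.solve(K, yj - yj.mean())   # centred GP fit
        g_hat[j] = yj.mean() + kx @ alpha            # posterior mean at x
        H_hat[j] = dkx.T @ alpha                     # its derivative: Hessian row
    # Symmetrize and damp: the raw estimate need not be symmetric or definite.
    H_hat = 0.5 * (H_hat + H_hat.T) + damping * np.eye(len(x))
    return x - np.linalg.solve(H_hat, g_hat)

x = np.array([2.0, 2.0])
X, G = [], []
for _ in range(25):
    # A few fresh noisy gradient observations around the current iterate.
    for _ in range(4):
        xi = x + 0.3 * rng.standard_normal(2)
        X.append(xi)
        G.append(noisy_grad(xi))
    x = gp_newton_step(x, np.array(X), np.array(G))

print("estimate:", x, "  true optimum:", np.linalg.solve(A, b))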

Presentation slides