
Convex Relaxations in Optimization-Based Identification of Robust Nonlinear Dynamical Models

Alexander Megretski, LIDS, EECS, MIT

Abstract:

Converting numerical data, originating from either physical measurements or computer simulations, into compact mathematical models is a common task in engineering. The case of dynamical system identification presents additional challenges, in particular the need to prevent small instantaneous equation errors from accumulating into large output matching errors, and the corresponding desire to ensure some form of stability in the resulting models. Within these constraints, straightforward optimal fitting of stable dynamical models to data is a "hard" non-convex optimization task with multiple local minima.
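
As a toy illustration (not taken from the talk) of where the non-convexity comes from: for the scalar model x_{t+1} = a x_t + b u_t with output y_t = x_t, the simulated state is \hat{x}_t = a^t \hat{x}_0 + \sum_{k<t} a^{t-1-k} b u_k, so the simulation error

    E_{sim}(a,b) = \sum_t (y_t - \hat{x}_t)^2

is a high-degree polynomial in a and generally has multiple local minima, while the convex one-step equation error \sum_t (x_{t+1} - a x_t - b u_t)^2 ignores how small per-step errors compound over time when |a| is close to, or exceeds, one.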

This talk presents a set of tools for designing system identification algorithms that rely on special upper bounds for the output error, which "relax" non-convex output error minimization tasks into convex optimization problems with guaranteed stability and robustness of the resulting models. The approach uses standard techniques of nonlinear and robust control, such as contraction metrics, localized dissipation inequalities, and "natural" storage functions. It also allows for rational and algebraic dependence of models on the optimized parameters, departing from the traditional practice of linear parametrization.
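
A minimal sketch of the general pattern, in Python, with cvxpy and numpy assumed available: the non-convex simulation-error objective for the toy scalar model above is replaced by a convex surrogate (the equation error) together with a convex stability constraint |a| <= 1 - delta. The variable names and the margin delta are illustrative only; this is not the relaxation used in the talk, which relies on the contraction-metric and dissipation-inequality machinery described above.

    # Toy sketch only: convex surrogate (equation error) plus a convex
    # stability constraint for the scalar model x[t+1] = a*x[t] + b*u[t].
    # Not the speaker's algorithm; assumes cvxpy and numpy are installed.
    import numpy as np
    import cvxpy as cp

    rng = np.random.default_rng(0)
    T = 200
    u = rng.standard_normal(T)
    x = np.zeros(T + 1)
    for t in range(T):  # generate "measured" data from a stable system
        x[t + 1] = 0.9 * x[t] + 0.5 * u[t] + 0.01 * rng.standard_normal()

    a, b = cp.Variable(), cp.Variable()
    delta = 0.01  # stability margin (illustrative value)
    equation_error = cp.sum_squares(x[1:] - a * x[:-1] - b * u)  # convex in (a, b)
    problem = cp.Problem(cp.Minimize(equation_error), [cp.abs(a) <= 1 - delta])
    problem.solve()
    print("estimated a, b:", a.value, b.value)

In this scalar case the constraint |a| <= 1 - delta already guarantees a stable identified model; for state-space and nonlinear models, the convex stability and robustness certificates described in the talk play the role that this simple constraint plays here.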

The talk will also address implementation issues, as well as applications to nonlinear circuit and live neuron modeling. A discussion of the strengths and weaknesses of the current versions of the algorithms will follow.

Slides (pdf, 1.5M)