Robust control for the analysis and design of large-scale optimization algorithms
Abstract:
Most complicated optimization problems, in particular those involving a large number of variables, are solved in practice using iterative algorithms. The problem of selecting a suitable algorithm is currently more of an art than a science; a great deal of expertise is required to know which algorithms to try and how to properly tune them. Moreover, there are seldom performance guarantees. In this talk, I will show how the problem of algorithm selection can be approached using tools from robust control theory. By solving simple semidefinite programs, we can derive robust bounds on convergence rates for popular algorithms such as the gradient method, proximal methods, fast/accelerated methods, and operator-splitting methods such as ADMM. The bounds derived in this manner either match or improve upon the best known bounds from the literature. Finally, our framework can be used to search for algorithms that meet desired performance specifications, thus establishing a principled methodology for designing new algorithms.
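To give a concrete sense of what such a "simple semidefinite program" can look like, below is a minimal sketch (illustrative only, not code from the talk) of a rate-certifying SDP for plain gradient descent on m-strongly convex, L-smooth functions. It follows the standard Lyapunov-plus-sector-constraint construction used in this line of work; the function name rate_certified, the CVXPY/SCS toolchain, and the parameter values m, L, and alpha are all assumptions made for this example.

import numpy as np
import cvxpy as cp

def rate_certified(rho, m, L, alpha):
    """True if a small LMI certifies the linear rate rho for gradient descent."""
    # Gradient descent x_{k+1} = x_k - alpha*grad_f(x_k) in state-space form:
    # xi_{k+1} = A*xi_k + B*u_k,  y_k = C*xi_k,  u_k = grad_f(y_k).
    A, B, C = 1.0, -alpha, 1.0
    # Sector constraint satisfied by gradients of m-strongly convex,
    # L-smooth functions: [y - y*; u]^T M [y - y*; u] >= 0.
    M = np.array([[-2.0 * m * L, m + L],
                  [m + L, -2.0]])
    AB = np.array([[A, B]])        # maps (xi_k, u_k) to xi_{k+1}
    E = np.array([[1.0, 0.0]])     # selects xi_k from (xi_k, u_k)
    S = np.array([[C, 0.0],
                  [0.0, 1.0]])     # selects (y_k, u_k) from (xi_k, u_k)
    P = cp.Variable()              # quadratic Lyapunov certificate V = P*xi^2
    lam = cp.Variable()            # S-procedure multiplier
    # Feasibility of this LMI implies V(xi_{k+1}) <= rho^2 * V(xi_k),
    # and hence ||x_k - x*|| <= const * rho^k.
    lmi = P * (AB.T @ AB) - rho**2 * P * (E.T @ E) + lam * (S.T @ M @ S)
    prob = cp.Problem(cp.Minimize(0), [P >= 1.0, lam >= 0, lmi << 0])
    prob.solve(solver=cp.SCS)
    return prob.status in ("optimal", "optimal_inaccurate")

# Bisect on rho to find the smallest certifiable rate (up to solver tolerance).
m, L, alpha = 1.0, 10.0, 0.1       # illustrative values; alpha = 1/L
lo, hi = 0.0, 1.0
for _ in range(30):
    mid = 0.5 * (lo + hi)
    if rate_certified(mid, m, L, alpha):
        hi = mid                   # rho = mid is certified; try a smaller rate
    else:
        lo = mid
print(f"certified rate {hi:.4f}, textbook rate {max(abs(1 - alpha*m), abs(1 - alpha*L)):.4f}")

For this scalar example the bisection recovers the familiar max(|1 - alpha*m|, |1 - alpha*L|) rate (0.9 with the values above), consistent with the abstract's claim that the SDP-derived bounds match or improve the best known rates; the same recipe extends to the other methods mentioned by enlarging the state-space matrices.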
Biography: Laurent Lessard is an Assistant Professor of Electrical and Computer Engineering at the University of Wisconsin-Madison and a faculty member of the Optimization Group at the Wisconsin Institute for Discovery. Laurent received his B.A.Sc. in Engineering Science from the University of Toronto, and his M.S. and Ph.D. in Aeronautics and Astronautics from Stanford University. After completing his doctoral work, he was an LCCC Postdoc at Lund University in Sweden and a postdoctoral researcher at the University of California, Berkeley. His research interests include decentralized control, robust control, optimization, and machine learning.