
Learning Regularizers from Data

Yong Sheng Soh, Caltech

Abstract:

Regularization techniques are widely employed in the solution of inverse problems in data analysis and scientific computing because of their effectiveness in addressing the difficulties caused by ill-posedness.  In their most common manifestation, these methods take the form of penalty functions added to the objective in optimization-based approaches for solving inverse problems.  The purpose of the penalty function is to induce a desired structure in the solution, and these functions are specified based on prior domain-specific expertise.  For example, regularization is useful for promoting smoothness, sparsity, low energy, and large entropy in solutions to inverse problems in image analysis, statistical model selection, and the geosciences.
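As a generic illustration (not specific to the material of this talk), the penalty formulation of a regularized inverse problem can be written as

\min_{x} \; \tfrac{1}{2}\,\| y - \mathcal{A}(x) \|_2^2 \;+\; \lambda\, \Omega(x), \qquad \lambda > 0,

where y denotes the observed data, \mathcal{A} the forward model, and \Omega the regularizer; common choices include \Omega(x) = \|x\|_1 to promote sparsity, \Omega(X) = \|X\|_* (the nuclear norm) to promote low rank, and negative-entropy penalties to promote large entropy.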

We consider the problem of learning suitable regularization functions from data in settings in which precise domain knowledge is not directly available; the objective is to identify a regularizer that promotes the type of structure contained in the data.  The regularizers obtained using our framework are specified as convex functions that can be computed efficiently via semidefinite programming, and they can be employed in tractable convex optimization approaches for solving inverse problems.  Our approach for learning such semidefinite regularizers is based on computing certain structured factorizations of data matrices.  We propose a method for this task that combines recent techniques for rank minimization with the Operator Sinkhorn iteration.  We discuss some of the theoretical properties of our algorithm as well as its utility in practice.
(Joint work with Venkat Chandrasekaran)
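For concreteness, the following minimal sketch (assuming Python with numpy and cvxpy, and not the speaker's own implementation) evaluates one hypothetical semidefinite-representable regularizer of the form Omega(y) = inf { trace(X) : y = L(X), X positive semidefinite } for a fixed linear map L, illustrating how such a function can be computed by solving a small semidefinite program:

# Hypothetical example: evaluating a semidefinite-representable regularizer
#   Omega(y) = inf { trace(X) : y = L(X), X positive semidefinite }
# by solving a semidefinite program.  The map L and all names below are
# illustrative assumptions, not the method presented in the talk.
import numpy as np
import cvxpy as cp

def evaluate_regularizer(y, L):
    # L is a (d, q, q) array of symmetric matrices defining L(X)_i = <L[i], X>.
    d, q, _ = L.shape
    X = cp.Variable((q, q), PSD=True)                  # PSD decision variable
    constraints = [cp.trace(L[i] @ X) == y[i] for i in range(d)]
    problem = cp.Problem(cp.Minimize(cp.trace(X)), constraints)
    problem.solve()
    return problem.value

# Tiny usage example: a random linear map and a point that is representable
# by construction, so the semidefinite program is feasible.
rng = np.random.default_rng(0)
q, d = 4, 6
L = rng.standard_normal((d, q, q))
L = (L + np.transpose(L, (0, 2, 1))) / 2               # symmetrize each component
v = rng.standard_normal(q)
X_true = np.outer(v, v)                                # rank-one PSD matrix
y = np.array([np.trace(L[i] @ X_true) for i in range(d)])
print(evaluate_regularizer(y, L))                      # at most trace(X_true)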

Slides