Low-Rank Inducing Norms with Optimality Interpretations
Christian Grussler, Lund University
Abstract:
This talk concerns optimization problems that are convex apart from an upper bound on the cardinality/rank. Such problems arise in compressed sensing, linear regression, matrix completion, low-rank approximation, and many other settings. Since these problems are generally NP-hard, one of the most widely used convexification methods for solving them today is nuclear norm regularization. Despite its appealing probabilistic guarantees, this approach alters the cost function in a way that often yields only sub-optimal solutions. In this talk, we will present an alternative by introducing the family of so-called low-rank inducing norms as convexifiers. Each such norm is the convex envelope of a unitarily invariant norm plus a rank constraint; a notational sketch follows the list below. These norms have several interesting properties, which will be discussed throughout the talk. In particular, they:
- Allow us to formulate a convexified problem that replaces part of the cost, instead of adding to it.
- Give a simple deterministic test of whether the solution to the convexified problem is a solution to the non-convex problem.
- Often find solutions in cases where the nuclear norm fails to give low-rank solutions.
- Allow us to analyze the convergence of non-convex proximal splitting algorithms with convex analysis tools.
- Lead to a different interpretation of the nuclear norm than the one traditionally presented.
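As a rough sketch of the construction described above (the symbol $\|\cdot\|_{r*}$ and the operator name $\operatorname{env}$ are illustrative notation, not taken from the talk):

```latex
% A sketch in assumed notation: \|.\|_{r*} and env are illustrative names.
\documentclass{article}
\usepackage{amsmath}
\begin{document}

Fix a unitarily invariant norm $\|\cdot\|$ and a target rank $r$, and let
$\chi_{\operatorname{rank}\le r}$ denote the indicator of the rank constraint:
\[
  \chi_{\operatorname{rank}\le r}(M) =
  \begin{cases}
    0       & \text{if } \operatorname{rank}(M)\le r,\\
    +\infty & \text{otherwise.}
  \end{cases}
\]
The corresponding low-rank inducing norm is the convex envelope
(largest convex minorant, written $\operatorname{env}$) of the sum:
\[
  \|M\|_{r*} = \operatorname{env}\bigl(\|\cdot\| + \chi_{\operatorname{rank}\le r}\bigr)(M).
\]
It replaces part of the cost rather than adding a regularizer: instead of
the nuclear norm formulation $\min_M f(M) + \lambda\|M\|_*$, one solves
\[
  \min_M \; f(M) + \|M\|_{r*},
\]
whose optimal value lower-bounds that of the non-convex problem
\[
  \min_M \; f(M) + \|M\| \quad \text{subject to} \quad \operatorname{rank}(M)\le r.
\]

\end{document}
```

Since the envelope agrees with the original norm on matrices of rank at most $r$ and lower-bounds it everywhere else, any minimizer of the convexified problem that happens to satisfy the rank constraint is automatically a global minimizer of the non-convex problem; this is the kind of deterministic test the second bullet refers to.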
Biography:
Christian Grussler is a postdoc at the Department of Automatic Control at Lund University, Sweden. His current research interests include low-rank/sparse optimization, model reduction, positive systems, and system identification. He received a Dipl.-Math. techn. degree (Industrial Mathematics) from TU Kaiserslautern, Germany, and an M.Sc. degree (Engineering Mathematics) from Lund University in 2011. In February 2017, he received a Ph.D. degree from Lund University under the supervision of Anders Rantzer, Pontus Giselsson, and Andrey Ghulchak.