3:00pm - 5:00pm  Algebraic methods for convex sets
Chair(s): Rainer Sinn (Freie Universität Berlin, Germany), Greg Blekherman (Georgia Institute of Technology), Daniel Plaumann (Technische Universität Dortmund), Yong Sheng Soh (Institute of High Performance Computing, Singapore), Dogyoon Song (Massachusetts Institute of Technology)
Convex relaxations are extensively used to solve intractable optimization instances in a wide range of applications. For example, convex relaxations are prominently utilized to find solutions of combinatorial problems that are computationally hard. In addition, convexity-based regularization functions are employed in (potentially ill-posed) inverse problems, e.g., regression, to impose certain desirable structure on the solution.
In this mini-symposium, we discuss the use of convex relaxations and the study of convex sets from an algebraic perspective. In particular, the goal is to bring together experts from algebraic geometry (real and classical), commutative algebra, optimization, statistics, functional analysis and control theory, as well as discrete geometry, to discuss recent connections and discoveries at the interfaces of these fields.
(25 minutes for each presentation, including questions, followed by a 5-minute break; if there are x < 4 talks, the first x slots are used unless indicated otherwise)
Average-Case Algorithm Design Using Sum-of-Squares
Pravesh Kothari
Princeton University
Finding planted "signals" in random "noise" is a theme that captures problems arising in several different areas, such as machine learning (compressed sensing, matrix completion, sparse principal component analysis, regression, recovering planted clusters), average-case complexity (stochastic block models, planted clique, random constraint satisfaction), and cryptography (attacking the security of pseudorandom generators). For some of these problems (e.g., variants of compressed sensing and matrix completion), influential works over the past two decades identified the right convex relaxation, along with techniques for analyzing it, that guarantee nearly optimal recovery (with respect to the information-theoretic threshold). However, these methods are problem-specific and do not immediately generalize to other problems or variants.
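As a concrete instance of the convex-relaxation approach the abstract cites (and not of the sum-of-squares framework that is the subject of this talk), the following minimal Python sketch recovers a planted low-rank matrix from partial observations via nuclear-norm minimization; the sizes, sampling rate, and use of cvxpy are illustrative assumptions.

import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, r = 20, 2
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # planted low-rank "signal"
mask = (rng.random((n, n)) < 0.5).astype(float)                # hypothetical 50% observation pattern

# Nuclear norm is the standard convex surrogate for rank.
X = cp.Variable((n, n))
problem = cp.Problem(cp.Minimize(cp.norm(X, "nuc")),
                     [cp.multiply(mask, X) == mask * M])
problem.solve()

print("relative error:", np.linalg.norm(X.value - M) / np.linalg.norm(M))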
Fitting Semidefinite-Representable Sets to Support Function Evaluations
Yong Sheng Soh
Institute of High Performance Computing, Singapore
The geometric problem of estimating an unknown convex set from evaluations of its support function arises in a range of scientific and engineering applications. Traditional approaches typically rely on estimators that minimize the error over all possible compact convex sets. These methods, however, do not allow for the incorporation of prior structural information about the underlying set, and the resulting estimates become increasingly complicated to describe as the number of available measurements grows. We address these shortcomings by describing and analyzing a framework based on searching over structured families of convex sets that are specified as linear images of the free spectrahedron. Our results highlight the utility of this framework in settings where the number of available measurements is limited and where the underlying set to be reconstructed is non-polyhedral. A by-product of our framework, obtained by taking the appropriate dual perspective, is a numerical tool for computing the optimal approximation of a given convex set by a spectrahedron of fixed size.
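As a small illustration of the objects involved (a sketch under assumed notation, not the estimator analyzed in the talk): if $$K$$ is the image of the spectrahedron $$\{X \succeq 0, \mathrm{tr}(X) = 1\}$$ under a linear map with $$\mathcal{A}(X)_i = \langle A_i, X \rangle$$, then its support function reduces to a maximum eigenvalue, $$h_K(u) = \lambda_{\max}(\sum_i u_i A_i)$$. The Python snippet below evaluates this; the map and the dimensions are hypothetical.

import numpy as np

rng = np.random.default_rng(0)
d, n = 4, 3  # hypothetical sizes: d x d matrices mapped into R^n
A = rng.standard_normal((n, d, d))
A = (A + A.transpose(0, 2, 1)) / 2  # symmetrize each A_i

def support_function(u):
    # h_K(u) = lambda_max(sum_i u_i A_i) for K = {A(X) : X PSD, trace(X) = 1}
    return np.linalg.eigvalsh(np.tensordot(u, A, axes=1)).max()

print(support_function(np.array([1.0, 0.0, 0.0])))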
Measuring Optimality Gap in Conic Programming Approximations with Gaussian Width
Dogyoon Song
Massachusetts Institute of Technology
It is common practice to approximate hard optimization problems with simpler convex programs for the sake of computational efficiency. However, this often introduces a nontrivial optimality gap between the true optimum and the approximate value. We study the Gaussian width of the underlying convex cones as a generic measure of this optimality gap. Specifically, we consider two canonical examples: (a) approximation of the positive semidefinite (PSD) cone by the (scaled) diagonally dominant cones ($$DD^n, SDD^n$$); and (b) the sequence of hyperbolic cones $$\mathbb{R}^{n,(k)}$$ obtained as derivative relaxations of the nonnegative orthant. We show that there is a significant gap between the width of the PSD cone and that of the (S)DD cones ($$\Theta(n^2)$$ vs. $$\Theta(n \log n)$$). On the other hand, perhaps surprisingly, the width of the hyperbolic cones remains almost invariant in the linear regime of relaxation ($$k = \alpha n$$ for $$\alpha < 1$$).
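For readers unfamiliar with the quantity, the Gaussian width of a cone $$C$$ (intersected with the unit ball) can be estimated by Monte Carlo using the identity $$\sup_{x \in C, \|x\| \le 1} \langle g, x \rangle = \|\Pi_C(g)\|$$. The sketch below is an illustration with an assumed normalization, not taken from the talk; it estimates the squared width of the PSD cone and exhibits its quadratic growth in $$n$$.

import numpy as np

rng = np.random.default_rng(0)

def psd_projection_norm_sq(n):
    # ||Pi_PSD(G_sym)||_F^2 for a symmetrized standard Gaussian matrix
    G = rng.standard_normal((n, n))
    eigvals = np.linalg.eigvalsh((G + G.T) / 2)
    return float(np.sum(np.maximum(eigvals, 0.0) ** 2))

for n in (10, 20, 40):
    est = np.mean([psd_projection_norm_sq(n) for _ in range(200)])
    # the estimate grows quadratically in n (roughly n^2 / 4 under this embedding)
    print(f"n = {n:3d}   estimated squared width ~ {est:9.1f}   n^2 / 4 = {n * n / 4:.1f}")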
False discovery and its control for low rank estimation
Armeen Taeb
California Institute of Technology
Cross-validation (CV) is a commonly employed procedure that selects a model based on predictive evaluations. Despite its widespread use, empirical and theoretical studies have shown that CV produces overly complex models containing many false detections. As such, decades of research in statistics have led to model selection techniques that assess the extent to which the estimated model signifies discoveries about an underlying phenomenon. However, existing approaches rely on the discrete structure of the decision space and are not applicable in settings where the underlying model exhibits a more complicated structure, such as low-rank estimation problems. We address this challenge via a geometric reformulation of the concept of a true/false discovery, which then enables a natural definition in the low-rank case. We describe and analyze a generalization of the stability selection method of Meinshausen and Buehlmann to control false discoveries in low-rank estimation, and we demonstrate its utility via numerical experiments. Concepts from algebraic geometry (e.g., tangent spaces to determinantal varieties) play a central role in the proposed framework.
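As a pointer to the geometry mentioned at the end of the abstract (standard notation assumed, not the speaker's implementation): at a rank-$$r$$ matrix $$M = U\Sigma V^T$$, the tangent space to the determinantal variety of matrices of rank at most $$r$$ is $$T = \{UA^T + BV^T\}$$, with orthogonal projection $$P_T(Z) = UU^T Z + Z VV^T - UU^T Z VV^T$$. The short Python sketch below computes this projection; the sizes are hypothetical.

import numpy as np

rng = np.random.default_rng(0)
m, n, r = 8, 6, 2
M = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # a rank-r point

U, _, Vt = np.linalg.svd(M, full_matrices=False)
U, V = U[:, :r], Vt[:r, :].T
P_U, P_V = U @ U.T, V @ V.T

def project_tangent(Z):
    # orthogonal projection of Z onto the tangent space at M
    return P_U @ Z + Z @ P_V - P_U @ Z @ P_V

Z = rng.standard_normal((m, n))
TZ = project_tangent(Z)
print(np.allclose(project_tangent(TZ), TZ))  # idempotence: a quick sanity check for a projection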