3:00pm - 5:00pm
Algebraic and geometric methods in optimization
Chair(s): Jesus A. De Loera (University of California, Davis, United States of America), Rekha Thomas (University of Washington)
In recent years, advanced techniques from algebra and geometry have been used to prove remarkable results in optimization. Examples include polynomial algebra for non-convex polynomial optimization problems, combinatorial tools such as Helly's theorem from combinatorial geometry to analyze and solve stochastic programs through sampling, and ideal bases used to find optimality certificates. Test-set augmentation algorithms for integer programming, which involve Graver sets for block-structured integer programs, originate from concepts in commutative algebra. In this session, experts will present a wide range of results that illustrate the power of these methods and their connections to applied algebra and geometry.
(25 minutes for each presentation, including questions, followed by a 5-minute break; in case of x<4 talks, the first x slots are used unless indicated otherwise)
Convergence analysis of measure-based bounds for polynomial optimization on the box, ball and sphere
Monique Laurent
CWI, Netherlands
We investigate the convergence rate of a hierarchy of measure-based upper bounds introduced by Lasserre (2011) for the minimization of a polynomial f over a compact set K. These bounds are obtained by searching for a degree 2r sum-of-squares density function h minimizing the expected value of f over K with respect to a given reference measure supported on K.
For simple sets like the box [-1,1]^n, the unit ball and the unit sphere (and natural reference measures including the Lebesgue measure), we show that the convergence rate to the global minimum of f is in O(1/r^2) and that this bound is tight for the minimization of linear polynomials.
Our analysis relies on an eigenvalue reformulation of the bounds and links to extremal roots of orthogonal polynomials, and the tightness result exploits a link to cubature rules.
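In schematic form (following Lasserre (2011); the normalization written here is an assumption and may differ slightly from the talk), the order-r bound is
f^(r) = min{ ∫_K f(x) h(x) dμ(x) : h a sum-of-squares polynomial of degree at most 2r, ∫_K h(x) dμ(x) = 1 },
so that f^(r) >= min_{x in K} f(x) for every r, and the O(1/r^2) rate quantifies how quickly this gap closes for the box, ball and sphere with their natural reference measures μ.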
This is based on joint work with Etienne de Klerk and Lucas Slot.
Dynamic programming algorithms for integer programming
Friedrich Eisenbrand
EPFL, Switzerland
In this talk, I survey recent progress on the complexity of integer programming in settings where it lends itself to dynamic programming approaches. Some of the results are tight under the exponential time hypothesis (ETH). I will also mention open problems; for example, tight results in the setting of explicit upper bounds on the variables are not yet known.
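As a toy illustration of the dynamic-programming viewpoint (a minimal sketch, not one of the algorithms surveyed in the talk), the following Python snippet solves max{ c^Tx : Ax = b, x>=0, x in Z^n } by tabulating the best objective value of every reachable right-hand side, under the simplifying assumption that A and b are nonnegative so that all partial right-hand sides lie in the box [0, b]; all names here are illustrative.

import itertools
import math

def ip_dp_max(A, b, c):
    # A: nonnegative integer m x n matrix (list of rows), b: nonnegative integer
    # right-hand side of length m, c: objective of length n.  Returns the value of
    # max{ c^T x : A x = b, x >= 0, x integer }, or None if the program is infeasible.
    m, n = len(A), len(A[0])
    value = {tuple([0] * m): 0}
    # Visit candidate right-hand sides v in the box [0, b] by increasing coordinate
    # sum, so each predecessor v - (j-th column of A) is finalized before v is reached.
    for v in sorted(itertools.product(*(range(bi + 1) for bi in b)), key=sum):
        for j in range(n):
            prev = tuple(v[i] - A[i][j] for i in range(m))
            if min(prev) >= 0 and prev in value:
                value[v] = max(value.get(v, -math.inf), value[prev] + c[j])
    return value.get(tuple(b))

# Example: max{ 3x1 + 5x2 : x1 + 2x2 = 4, x >= 0, x in Z^2 } has value 12 at x = (4, 0).
print(ip_dp_max([[1, 2]], [4], [3, 5]))

The table has (b_1+1)*...*(b_m+1) entries, each updated in O(nm) time, which already illustrates how the running time of such schemes is governed by the magnitude of b and the number of rows m.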
The talk is based on joint work with Robert Weismantel.
The support of integer optimal solutions
Timm Oertel
Cardiff University, UK
The support of a vector is the number of nonzero components. We show that, given an integral m x n matrix A, the integer linear optimization problem max{ c^Tx : Ax = b, x>=0, x in Z^n } has an optimal solution whose support is bounded by 2m log(2 sqrt(m) ||A||), where ||A|| is the largest absolute value of an entry of A. Compared to previous bounds, the one presented here is independent of the objective function. We furthermore provide a nearly matching asymptotic lower bound on the support of optimal solutions.
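To get a concrete sense of scale (taking the logarithm in base 2, which is an assumption about the convention in the statement): for a single constraint, m = 1, with entries bounded by ||A|| = 1000, the bound 2m log(2 sqrt(m) ||A||) evaluates to 2 log2(2000) ≈ 21.9, so some optimal solution has at most 21 nonzero entries, independently of the number of variables n and of the objective c.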
New Fourier interpolation formulas and optimization in Euclidean space
Maryna Viazovska
EPFL, Switzerland
Recently we proved that a radial Schwartz function can be uniquely reconstructed from a certain discrete set of its values and the values of its Fourier transform. Besides being an interesting phenomenon in its own right, this interpolation formula allowed us to obtain sharp linear programming bounds in dimensions 8 and 24 and to prove the universal optimality of the E8 and Leech lattices.
This is joint work with Henry Cohn, Abhinav Kumar, Stephen D. Miller, and Danylo Radchenko.