10:00am - 12:00pm
Matrix and tensor optimization
Chair(s): Max Pfeffer (Max Planck Institute MiS, Leipzig, Germany), André Uschmajew (Max Planck Institute MiS, Leipzig, Germany)
Matrix and tensor optimization has important applications in modern data analysis and high-dimensional problems. Specifically, low-rank approximations and spectral properties are of interest. Due to their multilinear parametrization, sets of low-rank matrices and tensors exhibit interesting, and sometimes challenging, geometric and algebraic structures. Studying such sets of tensors and matrices in the context of algebraic geometry is therefore not only helpful but also necessary for the development of efficient optimization algorithms and for their rigorous analysis. In this respect, the area of matrix and tensor optimization relates to the field of applied algebraic geometry through the problems it addresses and some of the concepts it employs. In this minisymposium, we wish to bring the latest developments in both of these aspects to attention.
(25 minutes for each presentation, including questions, followed by a 5-minute break; in case of fewer than four talks (x < 4), the first x slots are used unless indicated otherwise)
Tensorized Krylov subspace methods
Daniel Kressner
EPF Lausanne, Switzerland
Tensorized Krylov subspace methods are a versatile tool in numerical linear algebra for addressing large-scale applications that involve tensor product structure. This includes the discretization of high-dimensional PDEs, the solution of linear matrix equations, as well as low-rank updates and Fréchet derivatives for matrix functions. This talk gives an overview of such methods, with an emphasis on theoretical properties and their connection to multivariate polynomials.
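As a minimal illustration of the idea behind such methods (a sketch in numpy/SciPy, not code from the talk), consider the Lyapunov equation AX + XAᵀ = -bbᵀ, a linear matrix equation with tensor product structure: one builds a Krylov basis for A, projects the equation onto it, and solves a small projected equation, yielding a low-rank approximate solution. The function names below are illustrative.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def arnoldi(A, b, m):
    """Orthonormal basis Q of the Krylov subspace K_m(A, b) and the
    projected matrix H = Q^T A Q (Hessenberg), via Gram-Schmidt."""
    n = b.size
    Q = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = A @ Q[:, j]
        for i in range(j + 1):
            H[i, j] = Q[:, i] @ w
            w -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        Q[:, j + 1] = w / H[j + 1, j]
    return Q[:, :m], H[:m, :m]

def lyapunov_krylov(A, b, m):
    """Low-rank approximation X ~ Q Y Q^T to the solution of
    A X + X A^T = -b b^T by Galerkin projection onto K_m(A, b)."""
    Q, H = arnoldi(A, b, m)
    rhs = Q.T @ b
    # small m-by-m projected Lyapunov equation: H Y + Y H^T = -(Q^T b)(Q^T b)^T
    Y = solve_continuous_lyapunov(H, -np.outer(rhs, rhs))
    return Q, Y
```

The tensorized methods surveyed in the talk generalize this projection principle to Kronecker/tensor-product operators in higher dimensions.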
Critical points of quadratic low-rank optimization problems
Bart Vandereycken
University of Geneva, Switzerland
The absence of spurious local minima in certain non-convex minimization problems, e.g., in the context of recovery problems in compressed sensing, has recently attracted much interest due to its important implications for the global convergence of optimization algorithms. One example is low-rank matrix sensing under rank-restricted isometry properties. It can be formulated as a minimization problem for a quadratic cost function constrained to a low-rank matrix manifold, with a positive semidefinite Hessian acting like a perturbation of the identity on cones of low-rank matrices. We present an approach to show strict saddle point properties and absence of spurious local minima for such problems under improved conditions on the restricted isometry constants. This is joint work with André Uschmajew.
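For orientation, the standard low-rank matrix sensing formulation alluded to above can be written as follows (notation assumed here, not taken from the abstract): minimize a quadratic cost over rank-constrained matrices, where the linear measurement map satisfies a rank-restricted isometry property with constant delta_r,

```latex
\min_{\operatorname{rank}(X) \le r} \; f(X) = \tfrac{1}{2}\,\|\mathcal{A}(X) - b\|_2^2,
\qquad
(1 - \delta_r)\,\|X\|_F^2 \;\le\; \|\mathcal{A}(X)\|_2^2 \;\le\; (1 + \delta_r)\,\|X\|_F^2 .
```

The Hessian of f is then a perturbation of the identity on low-rank matrices, with perturbation size controlled by delta_r.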
Matrix product states from an algebraic geometer’s point of view
Tim Seynnaeve
Max Planck Institute MiS, Leipzig, Germany
Matrix product states and uniform matrix product states play a crucial role in quantum physics and quantum chemistry. They are used, for instance, to compute the eigenstates of the Schrödinger equation. Matrix product states provide an efficient way to represent special tensors, and uniform matrix product states are their partially symmetric analogs.
We apply methods from algebraic geometry to study uniform matrix product states. Our main results concern the topology of the locus of tensors expressed as uMPS, their defining equations and identifiability. By an interplay of theorems from algebra, geometry and quantum physics we answer several questions and conjectures posed by Critch, Morton and Hackbusch.
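To make the object of study concrete, here is a small sketch (assumed standard definition, not code from the talk) of the tensor represented by a uniform MPS with periodic boundary conditions: each index i selects a D-by-D core matrix A[i], and the tensor entry is the trace of the corresponding matrix product.

```python
import numpy as np
from itertools import product

def umps_tensor(cores, n):
    """Order-n tensor represented by a uniform matrix product state.

    cores: array of shape (d, D, D); the entry T[i1, ..., in] equals
    trace(A[i1] @ ... @ A[in]), i.e. a translation-invariant (cyclic) MPS.
    Exponential cost in n -- for illustration only.
    """
    d, D, _ = cores.shape
    T = np.empty((d,) * n)
    for idx in product(range(d), repeat=n):
        M = np.eye(D)
        for i in idx:
            M = M @ cores[i]
        T[idx] = np.trace(M)
    return T
```

The cyclic invariance of the trace makes these tensors invariant under cyclic shifts of their indices, which is one source of the interesting geometry of the uMPS locus.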
Computation of the norm of a nonnegative tensor
Antoine Gautier
Saarland University, Saarbrücken, Germany
The norm of a tensor can be computed by finding the maximal eigenvalue of an associated polynomial mapping. This problem is NP-hard in general. We present a nonlinear generalization of the Perron-Frobenius theorem which guarantees that the norm of tensors with nonnegative entries can be computed with a higher-order variant of the power method. This iterative algorithm has global optimality guarantees and a linear convergence rate. We discuss applications in nonconvex optimization and in the computation of centrality measures for multiplex networks.
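The flavor of such a higher-order power method can be sketched as follows for an order-3 tensor (a generic alternating power iteration in numpy, not the specific algorithm of the talk): each factor is updated by contracting the tensor against the other two and renormalizing, and for nonnegative tensors this iteration converges to the spectral norm.

```python
import numpy as np

def tensor_spectral_norm(T, iters=200, tol=1e-12):
    """Alternating higher-order power method for the spectral norm
    max_{|x|=|y|=|z|=1} T(x, y, z) of a nonnegative order-3 tensor T."""
    n1, n2, n3 = T.shape
    # start from the strictly positive uniform vectors
    x = np.full(n1, 1 / np.sqrt(n1))
    y = np.full(n2, 1 / np.sqrt(n2))
    z = np.full(n3, 1 / np.sqrt(n3))
    val = 0.0
    for _ in range(iters):
        x = np.einsum('ijk,j,k->i', T, y, z); x /= np.linalg.norm(x)
        y = np.einsum('ijk,i,k->j', T, x, z); y /= np.linalg.norm(y)
        z = np.einsum('ijk,i,j->k', T, x, y); z /= np.linalg.norm(z)
        new = np.einsum('ijk,i,j,k->', T, x, y, z)
        if abs(new - val) < tol:
            break
        val = new
    return val
```

For general tensors this iteration may stall at a local maximizer; the nonlinear Perron-Frobenius theory presented in the talk is what guarantees global optimality for nonnegative entries.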