Conference Agenda

Overview and details of the sessions of this conference.

 
Session Overview
Session
MS197, part 1: Numerical differential geometry
Time:
Tuesday, 09/Jul/2019:
10:00am - 12:00pm

Location: Unitobler, F-112
30 seats, 54 m²

Presentations
10:00am - 12:00pm

Numerical Differential Geometry

Chair(s): Tingran Gao (The University of Chicago, United States of America), Ke Ye (Chinese Academy of Sciences)

The profound theory of differential geometry has interacted with the computational and statistical communities over the past decades, yielding fruitful outcomes in a wide range of fields including manifold learning, Riemannian optimization, and geometry processing. This minisymposium encourages researchers from applied differential geometry, optimization, manifold learning, and geometry processing to share their perspectives and technical tools on problems lying at the intersection of geometry and computation.

 

(25 minutes per presentation, including questions, followed by a 5-minute break; if there are fewer than four talks, the first slots are used unless indicated otherwise)

 

Introduction to Numerical Differential Geometry

Ke Ye
Chinese Academy of Sciences

Data sets commonly possess special geometric structures, which makes it natural and convenient to model problems on manifolds. In this talk, we provide an overview of the emerging impact of numerical differential geometry in areas of modern mathematical data science, including but not limited to manifold learning, Bayesian optimization, and geometry processing. As an illustrative example, we will present the framework and applications of Riemannian optimization, with an emphasis on differential geometry as a guiding principle in the design and analysis of optimization algorithms.
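To give a concrete flavor of the Riemannian optimization framework the abstract refers to, the following minimal Python sketch runs Riemannian gradient descent on the simplest matrix manifold, the unit sphere. The objective (a Rayleigh quotient), the step size, and the function names are illustrative choices, not taken from the talk.

```python
import numpy as np

def riemannian_gradient_descent(A, x0, step=0.1, iters=200):
    """Minimize the Rayleigh quotient f(x) = x^T A x on the unit sphere."""
    x = x0 / np.linalg.norm(x0)
    for _ in range(iters):
        egrad = 2 * A @ x                 # Euclidean gradient of f
        rgrad = egrad - (x @ egrad) * x   # project onto the tangent space at x
        x = x - step * rgrad              # step along the tangent direction
        x = x / np.linalg.norm(x)         # retraction: renormalize onto the sphere
    return x

# Usage: for symmetric A, the iterate approaches an eigenvector
# associated with the smallest eigenvalue of A.
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = (M + M.T) / 2
x_min = riemannian_gradient_descent(A, rng.standard_normal(5))
```

The two manifold-specific ingredients, projection of the Euclidean gradient onto the tangent space and a retraction back onto the manifold, are exactly the pieces that differential geometry supplies when the sphere is replaced by a more general matrix manifold.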

 

A Riemannian Proximal Gradient Descent Method with Optimal Convergence Rate

Wen Huang
Xiamen University

We consider solving nonconvex and nonsmooth optimization problems with Riemannian manifold constraints. Such problems have received considerable attention due to many important applications, such as sparse PCA, sparse blind deconvolution, and robust matrix completion. In the Euclidean setting, the proximal gradient method is an excellent method for solving nonconvex nonsmooth problems; in the Riemannian setting, however, the related work is still limited. In this talk, we briefly review the existing Riemannian proximal gradient methods and present an accelerated Riemannian proximal gradient method with a convergence analysis. Numerical experiments are used to demonstrate the performance of the proposed method.

This is joint work with Ke Wei at Fudan University.
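For context, a standard formulation from the Riemannian proximal gradient literature (not necessarily the exact scheme of the talk): to minimize $F(x) = f(x) + g(x)$ over a manifold $\mathcal{M}$, with $f$ smooth and $g$ nonsmooth, each iteration solves a proximal subproblem in the tangent space and retracts the solution back to the manifold,

$$\eta_k = \operatorname*{arg\,min}_{\eta \in T_{x_k}\mathcal{M}} \; \langle \operatorname{grad} f(x_k), \eta \rangle + \frac{1}{2\mu}\|\eta\|^2 + g\big(R_{x_k}(\eta)\big), \qquad x_{k+1} = R_{x_k}(\eta_k),$$

where $R$ is a retraction and $\mu > 0$ a step size. When $\mathcal{M} = \mathbb{R}^n$ and $R_x(\eta) = x + \eta$, this reduces to the classical proximal gradient step $x_{k+1} = \operatorname{prox}_{\mu g}\!\big(x_k - \mu \nabla f(x_k)\big)$.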

 

Semi-Riemannian Manifold Optimization

Tingran Gao
The University of Chicago

We introduce a manifold optimization framework that utilizes semi-Riemannian structures on the underlying smooth manifolds. Unlike in Riemannian geometry, where each tangent space is equipped with a positive definite inner product, a semi-Riemannian manifold allows the metric tensor to be indefinite on each tangent space, i.e., possessing both positive and negative definite subspaces. Differential geometric objects such as geodesics and parallel transport can still be defined on non-degenerate semi-Riemannian manifolds, and can be carefully leveraged to adapt Riemannian optimization algorithms to the semi-Riemannian setting. In particular, we discuss the metric independence of manifold optimization algorithms, and illustrate that the weaker but more general semi-Riemannian geometry often suffices for the purpose of optimizing smooth functions on smooth manifolds in practice. In addition, for many interesting matrix manifolds, closed-form expressions for geodesics and parallel transports are much easier to obtain under the semi-Riemannian metric.
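A textbook example of an indefinite but non-degenerate metric, offered here purely as an illustration and not taken from the talk, is the Minkowski metric on $\mathbb{R}^n$:

$$g = \operatorname{diag}(-1, 1, \dots, 1), \qquad \langle u, v \rangle_g = -u_1 v_1 + \sum_{i=2}^{n} u_i v_i.$$

Here $\langle u, u \rangle_g$ can be negative, zero, or positive, so each tangent space splits into a negative definite and a positive definite subspace (signature $(1, n-1)$); the Riemannian case is recovered as the special signature $(0, n)$.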