Conference Agenda
Overview and details of the sessions of this conference. Select a date or location to show only the sessions on that day or at that location. Select a single session for a detailed view (with abstracts and downloads, if available).
Session Overview |
| Date: Wednesday, 10/Jul/2019 | |
| 8:25am - 8:30am | Announcements |
| vonRoll, Fabrikstr. 6, 001 | |
| 8:30am - 9:30am | IP03: Lauren K. Williams: Cluster algebras and applications to geometry |
| vonRoll, Fabrikstr. 6, 001 | |
8:30am - 9:30am
Cluster algebras and applications to geometry
Lauren K. Williams (Harvard University, United States of America)
Cluster algebras are a class of commutative rings with a remarkable combinatorial structure, introduced by Fomin and Zelevinsky around 2000. I will give a gentle introduction to cluster algebras, and then explain how Grassmannians, and more generally their Schubert varieties, have a cluster algebra structure (joint work with Khrystyna Serhiyenko and Melissa Sherman-Bennett). If time permits, I will also discuss applications to toric degenerations and mirror symmetry (joint work with Konstanze Rietsch). |
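As a pointer for readers new to the subject, the basic operation of a cluster algebra is seed mutation; the standard exchange relation and a small rank-2 example (not taken from the talk) can be written as:

```latex
% Mutation at index k replaces the cluster variable x_k by x_k',
% defined by the exchange relation, where B = (b_{ik}) is the exchange matrix:
\[
  x_k \, x_k' \;=\; \prod_{i \,:\, b_{ik} > 0} x_i^{\,b_{ik}}
  \;+\; \prod_{i \,:\, b_{ik} < 0} x_i^{\,-b_{ik}} .
\]
% Rank-2 example (type A_2): iterating the recurrence
%   x_{n+1} = (x_n + 1) / x_{n-1}
% from an initial cluster (x_1, x_2) produces only five distinct cluster
% variables before repeating, illustrating the finite-type phenomenon.
```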
| 8:30am - 9:30am | IP03-streamed from 001: Lauren K. Williams: Cluster algebras and applications to geometry |
| vonRoll, Fabrikstr. 6, 004 | |
| 9:30am - 10:00am | Coffee break |
| Unitobler, F wing, floors 0 and -1 | |
| 10:00am - 12:00pm | MS147, part 1: SC-square 2019 workshop on satisfiability checking and symbolic computation |
| Unitobler, F005 | |
10:00am - 12:00pm
SC-square 2019 workshop on satisfiability checking and symbolic computation
Symbolic Computation is concerned with the algorithmic determination of exact solutions to complex mathematical problems; recent developments in Satisfiability Checking are starting to tackle similar problems, but with different algorithmic and technological solutions. The two communities share many central interests, yet so far their researchers have rarely interacted. Furthermore, the lack of compatible interfaces between tools from the two areas is an obstacle to their fruitful combination. Bridges between the communities, in the form of common platforms and road-maps, are necessary to initiate a mutually beneficial exchange and to support and direct their interaction. The aim of this workshop is to provide fertile ground to discuss and share knowledge and experience across both communities. The topics of interest include, but are not limited to:
The 2016 and 2017 editions of the workshop were affiliated with conferences in Symbolic Computation. The 2018 edition was affiliated with FLoC, the international federated logic conference. More information at http://www.sc-square.org/workshops.html (25 minutes for each presentation, including questions, followed by a 5-minute break; in case of x<4 talks, the first x slots are used unless indicated otherwise)

Invited Talk of SC-Square: SC-square methods for the Detection of Hopf Bifurcations in Chemical Reaction Networks---Part I: Background and basic methods
The analytical problem of finding Hopf bifurcation fixed points for polynomial or rational vector fields (or determining that there are none) can be reduced to a purely semi-algebraic question. In the first part of the talk we explore this possibility by giving a reduction of the parametric question on the existence of a Hopf bifurcation fixed point to a parametric first-order formula over the ordered field of the reals. We show the results of solving these formulae with existing tools from computational logic (such as Redlog) for several standard and textbook examples, and compare the results of these fully automated methods with the hand analyses given in textbooks.

Invited Talk of SC-Square: SC-square methods for the Detection of Hopf Bifurcations in Chemical Reaction Networks---Part II: Advanced methods for chemical reaction networks
The determination of Hopf bifurcation fixed points in chemical reaction networks with symbolic rate constants yields information about the oscillatory behavior of the networks and hence is of high interest. The problem is solvable in theory by the methods discussed in Part I, but the generic technique leads to prohibitively large formulae even for rather small dimensions.
Using the representations of chemical reaction systems in convex coordinates, which arise from the so-called stoichiometric network analysis, the problem of determining the existence of Hopf bifurcation fixed points leads to first-order formulae over the ordered field of the reals that can then be solved, using existing computational logic packages, for somewhat larger dimensions. Using ideas from tropical geometry, it is possible to formulate a more efficient method that is incomplete in theory but worked very well for the examples we have attempted; we have shown it to handle systems involving more than 20 species. Finding satisfying instances of a single (but in general very large) polynomial equation together with a set of polynomial inequalities is the key challenge, which will benefit from further research in the context of SC-square methods.

Regular Paper 1 of SC-Square: Solving Constraint Systems from Traffic Scenarios for the Validation of Autonomous Driving
The degree of automation in our daily life will grow rapidly. This poses big challenges for the safety validation of autonomous robots, which take over more and more tasks that have so far been reserved for humans. This is in particular true for the emerging area of autonomous driving, which aims at making road traffic safer, more efficient, more economical, and more comfortable. One promising approach to the safety validation of autonomous driving is the virtual simulation of traffic scenarios, i.e. conducting the majority of tests in virtual reality instead of the real world. In addition to quantity, the quality of such tests, with a focus on critical traffic scenarios, will be an essential ingredient of safety validation.
Regular Paper 2 of SC-Square: On the proof complexity of MCSAT
Satisfiability Modulo Theories (SMT) and SAT solvers are critical components in many formal software tools, primarily because they can efficiently solve logical problem instances with millions of variables and clauses. This efficiency is in surprising contrast to the traditional complexity-theoretic position that the problems these solvers address are believed to be hard in the worst case. In an attempt to resolve this apparent discrepancy between theory and practice, theorists have proposed studying these solvers as proof systems, which would enable establishing appropriate lower and upper bounds on their complexity. For example, in recent years it has been shown that SAT solvers are polynomially equivalent to the general resolution proof system for propositional logic, and that SMT solvers using the CDCL(T) architecture are polynomially equivalent to the Res∗(T) proof system. In this paper, we extend this program to the MCSAT approach for SMT solving by showing that the MCSAT architecture is polynomially equivalent to the Res∗(T) proof system. Thus, we establish an equivalence between CDCL(T) and MCSAT from a proof-complexity-theoretic point of view. This is a first and essential step towards a richer theory that may help (parametrically) characterize the kinds of formulas for which MCSAT-based SMT solvers can perform well. |
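To make the satisfiability-checking side of the workshop concrete, here is a minimal brute-force SAT check on a CNF formula in the DIMACS literal convention. This is a hypothetical toy illustration only; the solvers discussed above use CDCL-style search, not enumeration.

```python
from itertools import product

def brute_force_sat(clauses, n_vars):
    """Return a satisfying assignment (tuple of bools) or None.

    clauses: list of clauses; each clause is a list of nonzero ints,
    where literal k means variable |k| is True if k > 0 and False if
    k < 0 (the DIMACS convention). A clause is satisfied if any of
    its literals evaluates to True.
    """
    for assignment in product([False, True], repeat=n_vars):
        if all(any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return assignment
    return None

# (x1 or x2) and (not x1 or x2) and (not x2 or x3)
print(brute_force_sat([[1, 2], [-1, 2], [-2, 3]], 3))  # (False, True, True)
print(brute_force_sat([[1], [-1]], 1))                 # None (unsatisfiable)
```

Enumeration is exponential in the number of variables, which is exactly why the proof-complexity questions about CDCL(T) and MCSAT discussed above matter in practice.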
| 10:00am - 12:00pm | MS143, part 2: Algebraic geometry in topological data analysis |
| Unitobler, F006 | |
10:00am - 12:00pm
Algebraic geometry in topological data analysis
In the last 20 years, methods from topology, the mathematical area that studies “shapes”, have proven successful in studying data that is complex and whose underlying shape is not known a priori. This practice has become known as topological data analysis (TDA). As additional methods from topology still find application in the study of complex structure in data, the practice is evolving and expanding, and now draws increasingly upon data science, computer science, computational algebra, computational topology, computational geometry, and statistics. While ideas from category theory, sheaf theory, and the representation theory of quivers have driven the theoretical development of the past decade, in recent years ideas from commutative algebra and algebraic geometry have started to be used to tackle some theoretical problems in TDA. The aim of the minisymposium is to seize this momentum and to bring together experts in algebraic geometry and researchers in topological data analysis, to explore new avenues of research and foster research collaborations. (25 minutes for each presentation, including questions, followed by a 5-minute break; in case of x<4 talks, the first x slots are used unless indicated otherwise)

High-throughput topological screening of nanoporous materials
Thanks to the Materials Genome Initiative, there is now a database of millions of different classes of nanoporous materials, in particular zeolites. In this talk I will sketch a computational approach for high-throughput screening of this database to find the best nanoporous materials for a given application, using a topological data analysis-based descriptor (TD) that recognizes pore shapes. For methane storage and carbon capture applications, our method enables us to predict performance properties of zeolites.
When some top-performing zeolites are known, TD can be used to efficiently detect other high-performing materials with high probability. We expect that this approach could easily be extended to other applications by simply adjusting one parameter: the size of the target gas molecule.

Sampling real algebraic varieties for topological data analysis
I will discuss an adaptive algorithm for finding provably dense samples of points on a real algebraic variety, given the variety's defining polynomials as input. Our algorithm utilizes methods from numerical algebraic geometry to give formal guarantees about the density of the sampling, and it also employs geometric heuristics to reduce the size of the sample. As persistent homology methods consume significant computational resources that scale poorly in the number of sample points, our sampling minimization makes applying these methods more feasible. I will also present results of applying persistent homology to point samples generated by an implementation of the algorithm.

How wild is the homological clustering problem?
Connected components form the basis of many clustering methods, often requiring a choice of two parameters (geometric scale and density). Applying 0th homology yields a diagram of vector spaces reflecting the connected components, with surjections in the scale-parameter direction. This motivates the study of the parameter landscape by means of quiver representations: indecomposable summands can be interpreted as topological features. We identify all cases where the set of possible indecomposables has a simple classification (finite type or tame). The result is obtained using tilting theory and a novel equivalence theorem on cotorsion-torsion triples, whose development has been motivated by the clustering problem.

Learning elliptic curves
Elliptic curves are all homeomorphic as topological spaces; more precisely, they are all real tori of dimension two. However, they carry infinitely many different complex structures. The topological structure can be detected easily by the tools of persistent homology, but can we also recover the complex structure? In other words, can we "learn" an elliptic curve from data? In my talk I would like to address this question. |
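The 0th-homology clustering idea above (connected components that merge as a scale parameter grows) can be sketched with a small union-find computation; this is an assumed toy example, not any of the speakers' software.

```python
def components_at_scales(points, scales):
    """Count connected components of a point cloud at each scale.

    At scale r, two points are joined if their Euclidean distance is
    at most r; the component counts are the ranks of 0th homology of
    the resulting graphs. Counts are reported for the scales in
    increasing order.
    """
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    # All pairwise distances, sorted once; merge clusters as r grows.
    edges = sorted(
        (sum((a - b) ** 2 for a, b in zip(points[i], points[j])) ** 0.5, i, j)
        for i in range(n) for j in range(i + 1, n)
    )
    counts, k, n_comp = [], 0, n
    for r in sorted(scales):
        while k < len(edges) and edges[k][0] <= r:
            ri, rj = find(edges[k][1]), find(edges[k][2])
            if ri != rj:
                parent[ri] = rj
                n_comp -= 1
            k += 1
        counts.append(n_comp)
    return counts

pts = [(0.0, 0.0), (1.0, 0.0), (5.0, 0.0)]
print(components_at_scales(pts, [0.5, 1.5, 4.5]))  # [3, 2, 1]
```

The decreasing sequence of counts is exactly the "surjections in the scale-parameter direction" mentioned in the abstract: components can only merge as the scale grows.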
| 10:00am - 12:00pm | MS200, part 1: From algebraic geometry to geometric topology: Crossroads on applications |
| Unitobler, F007 | |
10:00am - 12:00pm
From algebraic geometry to geometric topology: crossroads on applications
The purpose of the minisymposium "From Algebraic Geometry to Geometric Topology: Crossroads on Applications" is to bring together researchers who use algebraic, combinatorial, and geometric topology in industrial and applied mathematics. These methods have already seen applications in biology, physics, chemistry, fluid dynamics, distributed computing, robotics, neural networks, and data analysis. (25 minutes for each presentation, including questions, followed by a 5-minute break; in case of x<4 talks, the first x slots are used unless indicated otherwise)

Momentum of vortex tangles by weighted area information
A method based on the interpretation of the classical linear and angular momentum of vortex dynamics in terms of weighted areas of projected graphs of filament tangles has been introduced to provide an accurate estimate of physical information when no analytical description is available [1,2]. The method is implemented here for defects governed by the Gross-Pitaevskii equation [3]. New results based on direct application of this method to determine the linear momentum associated with interacting vortex rings, links, and knots are presented and discussed in detail. The method can be easily extended and adapted to more complex systems, providing a useful tool for real-time diagnostics of dynamical properties of networks of filamentary structures. This is joint work with Simone Zuccher (U. Verona).
[1] Ricca, R.L. (2008) Momenta of a vortex tangle by structural complexity analysis. Physica D 237, 2223-2227.
[2] Ricca, R.L. (2013) Impulse of vortex knots from diagram projections. In Topological Fluid Dynamics: Theory and Applications (ed. H.K. Moffatt et al.), pp. 21-28. Procedia IUTAM 7, Elsevier.
[3] Zuccher, S. & Ricca, R.L. (2019) Momentum of vortex tangles by weighted area information. Submitted.
Alexandrov spaces and topological data analysis
Alexandrov spaces (with curvature bounded below) are metric generalizations of complete Riemannian manifolds with a uniform lower sectional curvature bound. In this talk I will discuss the geometric and topological properties of these metric spaces and how they arise in the context of topological data analysis.

Geometrical and topological analysis of chromosome conformation capture data
Despite the impressive development of methods to analyze Chromosome Conformation Capture (CCC) data, the topology of any genome still remains unknown. The output of a CCC experiment is a matrix of pairwise contact probabilities between genomic loci, from which a map of distances, called a distance map, can be obtained. In this work we use distance geometry and random knotting arguments to derive some rigorous results for the interpretation of distance maps. In particular, we provide a rigorous characterization of the distance map of a knot and of some of its symmetries. We end by presenting a key result showing that, in the presence of noise, the topology of a chromosome cannot be recovered from a distance map. Joint work with K. Ishihara, K. Lamb, M. Pouokam, K. Shimokawa, and M. Vazquez.

Asymptotic behavior of the homology of random polyominoes
In this talk we study the rate of growth of the expected number of holes (the rank of the first homology group) of a polyomino under the uniform and percolation distributions. We prove the existence of linear bounds for the expected number of holes of a polyomino with respect to both distributions. Furthermore, we exhibit explicit constants for the upper and lower bounds in the uniform-distribution case. These results can be extended, using the same techniques, to other polyforms and higher dimensions. |
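The hole count in the last abstract (the rank of the first homology group of a polyomino) can be illustrated with a small Euler-characteristic computation. This is a hedged sketch, not the authors' code: for a connected polyomino viewed as a 2-complex of unit squares, the number of holes is 1 - χ with χ = V - E + F.

```python
def holes(cells):
    """Number of holes of a connected polyomino.

    cells: set of (x, y) lower-left corners of unit squares.
    Uses b1 = 1 - chi, where chi = V - E + F counts the distinct
    vertices, unit edges, and faces of the cell complex.
    """
    vertices, edges = set(), set()
    for (x, y) in cells:
        corners = [(x, y), (x + 1, y), (x, y + 1), (x + 1, y + 1)]
        vertices.update(corners)
        edges.update({
            frozenset({(x, y), (x + 1, y)}),          # bottom
            frozenset({(x, y), (x, y + 1)}),          # left
            frozenset({(x + 1, y), (x + 1, y + 1)}),  # right
            frozenset({(x, y + 1), (x + 1, y + 1)}),  # top
        })
    chi = len(vertices) - len(edges) + len(cells)
    return 1 - chi

ring = {(x, y) for x in range(3) for y in range(3)} - {(1, 1)}
print(holes(ring))      # 1: a 3x3 square with the center cell removed
print(holes({(0, 0)}))  # 0: a single cell has no holes
```

Sampling `cells` from the uniform or percolation distribution and averaging `holes` over many draws is the quantity whose growth rate the abstract bounds linearly.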
| 10:00am - 12:00pm | MS177, part 2: Algebraic and combinatorial phylogenetics |
| Unitobler, F011 | |
10:00am - 12:00pm
Algebraic and combinatorial phylogenetics
Since the late eighties, algebraic tools have been present in phylogenetic theory and have been crucial in understanding the limitations of models and methods and in proposing improvements to existing tools. In this session we intend to present some of the most recent work in this area. (25 minutes for each presentation, including questions, followed by a 5-minute break; in case of x<4 talks, the first x slots are used unless indicated otherwise)

Weighting the Coalescent
Under the coalescent model, the dominant quartet should match the quartet topology of the species tree. However, in practice we only have a finite sample of gene trees from which to estimate the dominant quartet. We introduce a quartet weighting system that enables accurate species tree reconstruction when combined with a quartet amalgamation algorithm such as MaxCut. The weighting system also provides a mechanism for determining which data should be included in the analysis.

Identifiability of 2-tree mixtures for the Kimura 3ST model
The inference of evolutionary trees from molecular sequence data relies on modeling site substitutions by a Markov process on a phylogenetic tree, or by a mixture of such processes on a number of trees (not necessarily distinct). The identifiability of the parameters of the models is a crucial feature for this inference process to be consistent. From an algebraic geometry perspective, the unmixed substitution models can be described in terms of algebraic varieties associated to the tree topologies, while the mixtures of these models correspond to joins of these varieties (secant varieties, when the trees considered are the same). The identifiability of 2-tree mixtures (mixed models obtained from two tree topologies) under the so-called group-based models with 4 states has been deeply studied; in particular, Allman et al. 2011 proved that under the JC and K2P models it is possible to distinguish unmixed models from mixtures obtained from two trees. A key point in the proof is the existence of linear constraints that allow one to distinguish between different tree topologies. Unfortunately, such linear equations do not exist for the more general K3P model. In this talk, we will recall some general facts on mixed models and state some results of our joint work with Marta Casanellas and Alessandro Oneto. In particular, we will present some advances related to the generic identifiability of tree parameters under the K3P model.

Markov association schemes
This work concerns a compelling example of the mathematics of phylogenetics leading to a novel algebraic/combinatorial structure. The motivation comes from a simple model of aminoacyl-tRNA synthetase (aaRS) evolution devised by Julia Shore (UTAS) and Peter Wills (U Auckland). Starting with a proposed rooted tree describing the specialization of aaRS through the evolution of the genetic code, their model produces a space of symmetric Markov rate matrices that form a commutative algebra under matrix multiplication. We refer to each of these as a `tree-algebra'. From their construction, one most naturally expects the tree-algebras to occur as special instances of association schemes (which are well studied in algebraic combinatorics). However, this is incorrect: a tree-algebra corresponds to an association scheme only in a highly degenerate case. In fact, further study has revealed that both the tree-algebras and association schemes can be conceived of as special cases of a novel class of combinatorial structures, which we (possibly imperfectly) refer to as `Markov association schemes'. In this talk, I will describe our attempts thus far to characterize Markov association schemes. In particular, I will present two natural binary operations of `sum' and `product' on the class of schemes and show that the tree-algebras arise precisely from repeatedly applying the sum operation to the trivial scheme.

Existence of maximally probable ranked gene tree topologies with a matching unranked topology
A ranked gene tree topology is a labeled gene tree topology together with a temporal ordering (a ranking) of its coalescence events. A species tree is a labeled species tree topology considered with a set of lengths for its branches, which naturally induces a ranking of the coalescence events present in the tree. Disregarding the ordering of the internal nodes of a ranked tree yields a leaf-labeled tree topology, the unranked topology of the tree. When exactly one gene copy is sampled for each species, we consider ranked gene tree topologies realized in a ranked species tree under the multispecies coalescent model, and study the unranked topology of the ranked gene tree topologies with the largest conditional probability. We show that among the ranked gene tree topologies that are maximally probable, there is always at least one whose unranked topology matches that of the species tree. We also show that not all of the maximally probable ranked gene tree topologies have a concordant unranked topology. |
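A minimal sketch of the idea behind quartet weighting in the first talk, as hypothetical toy code (the actual weighting system presented is more refined): estimate the dominant quartet topology for four taxa by its relative frequency among sampled gene trees, and use that frequency as a weight.

```python
from collections import Counter

def dominant_quartet(gene_tree_quartets):
    """Estimate the dominant quartet topology from sampled gene trees.

    gene_tree_quartets: list of quartet topologies, each encoded as a
    frozenset of its two sister pairs, e.g. {('a','b'), ('c','d')}
    for the topology ab|cd. Returns the most frequent topology and
    its relative frequency, usable as a weight in quartet
    amalgamation (e.g. as input to a MaxCut-style method).
    """
    counts = Counter(gene_tree_quartets)
    topology, n = counts.most_common(1)[0]
    return topology, n / len(gene_tree_quartets)

ab_cd = frozenset({('a', 'b'), ('c', 'd')})
ac_bd = frozenset({('a', 'c'), ('b', 'd')})
samples = [ab_cd] * 7 + [ac_bd] * 3   # 10 sampled gene trees
topology, weight = dominant_quartet(samples)
print(topology == ab_cd, weight)  # True 0.7
```

Low weights flag quartets whose signal is weak in the sample, which is one natural mechanism for deciding which data to include in the analysis.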
| 10:00am - 12:00pm | Room free |
| Unitobler, F012 | |
| 10:00am - 12:00pm | MS156: Tropical geometry in statistics |
| Unitobler, F013 | |
10:00am - 12:00pm
Tropical geometry in statistics
Classically, statistics is the branch of mathematics that deals with data. The challenges of modern data demand the development of new statistical methods to handle them. Modern data collection technology brings not only “big data” that are extremely high dimensional; such data are also made up of complex structures that can be prohibitive to the Euclidean setting of classical statistics. Tropical geometry defines and studies piecewise-linear structures in an algebraic framework that, if interpreted appropriately, is amenable to modern data structures and challenges. This session focuses on leveraging the potential of tropical geometry to reinterpret classical statistics and enhance the utility of statistical methodology in the face of modern data challenges. Specifically, we seek to adapt the linearizing properties of the tropical semiring to statistical settings that rely on principles of linear algebra and optimization. These encompass fundamental descriptive and inferential statistics, such as the computation of Fréchet means, principal component analysis, linear regression, and hypothesis testing. This is a very new direction of research with potential for wide-reaching applications from biology to economics, and it is our hope to bring together researchers to develop and advance the interaction between tropical geometry and statistics. (25 minutes for each presentation, including questions, followed by a 5-minute break; in case of x<4 talks, the first x slots are used unless indicated otherwise)

Tropical principal component analysis
We introduce a notion of principal component analysis in the setting of tropical geometry. We also describe some results on the containment of a Stiefel linear space within a larger tropical linear space and apply them to our setting of tropical principal component analysis.

Tropical Foundations for Probability and Statistics on Phylogenetic Tree Spaces
A geometric approach to phylogenetic tree space was first introduced by Billera, Holmes, and Vogtmann. We reinterpret the tree space via tropical geometry and introduce a novel framework for the statistical analysis of phylogenetic trees: the palm tree space, which represents phylogenetic trees as points in a space endowed with the tropical metric. We show that the palm tree space possesses a variety of properties that allow for the definition of probability measures, and thus expectations, variances, and other fundamental statistical quantities. In addition, they lead to increased computational efficiency. Our approach provides a new, tropical basis for a statistical treatment of evolutionary biological processes represented by phylogenetic trees. This is joint work with Anthea Monod (Columbia University, USA) and Ruriko Yoshida (Naval Postgraduate School, USA).

Tropical Gaussians
There is a growing need for a systematic study of probability distributions in tropical settings. Over the classical algebra, the Gaussian measure is arguably the most important distribution for both theoretical probability and applied statistics. In this work, we review the existing analogues of the Gaussian measure in the tropical semiring and outline various research directions.

Tropical hardware for data-intensive applications: DNA sequence alignment to machine learning
New encodings are being explored to allay the energy-efficiency concerns that fundamentally limit the performance of data-intensive, modern computing systems. One such brain-inspired encoding, known as Race Logic, encodes information in the arrival time of signals. With such an encoding, conventional computing gates such as OR, AND, and delay gates perform MIN, MAX, and addition-by-constant operations respectively. Hence we end up with elegant hardware implementations of the fundamental operations of tropical algebra. This allows tropical operations to be easily expressed with temporally coded hardware, which in turn allows data-intensive problems to be solved with low-latency, low-energy computer architectures. One such architecture is a DNA sequence alignment engine which calculates the edit distance between two input sequences. Our architecture physically implements the dynamic-programming nature of tropical graph traversal methods on a programmable edit graph. The other architecture describes a programmable methodology for mapping various decision-tree-based forests to MIN/MAX gates and is used for high-throughput in-sensor image classification. The main message of this talk is to stress the symbiosis between the tropical algebra and computing hardware communities, which we believe can lead to the development of compact, energy-efficient computing hardware for new classes of complex optimization problems.
|
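The edit-distance engine in the last abstract rests on standard min-plus dynamic programming over the edit graph. As an illustrative software analogue (an assumed example, not the hardware described), each cell of the table is a MIN of three predecessors, each shifted by a constant cost:

```python
def edit_distance(s, t):
    """Levenshtein distance via the tropical (min-plus) recurrence.

    Each cell is the MIN of three predecessor cells, each with
    addition by a constant cost -- exactly the MIN and
    add-by-constant operations realized by delay-based (Race Logic)
    hardware gates.
    """
    m, n = len(s), len(t)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i  # delete all of s[:i]
    for j in range(n + 1):
        d[0][j] = j  # insert all of t[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = 0 if s[i - 1] == t[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # match / substitution
    return d[m][n]

print(edit_distance("kitten", "sitting"))   # 3
print(edit_distance("GATTACA", "GCATGCU"))  # 4
```

In tropical terms, the recurrence is a shortest path in the edit graph, i.e. a min-plus matrix-vector computation, which is why it maps so cleanly onto temporally coded circuits.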
| 10:00am - 12:00pm | MS182, part 2: Matrix and tensor optimization |
| Unitobler, F021 | |
10:00am - 12:00pm
Matrix and tensor optimization
Matrix and tensor optimization has important applications in the context of modern data analysis and high-dimensional problems. Specifically, low-rank approximations and spectral properties are of interest. Due to their multilinear parametrization, sets of low-rank matrices and tensors have interesting, and sometimes challenging, geometric and algebraic structures. Studying such sets of tensors and matrices in the context of algebraic geometry is therefore not only helpful but necessary for the development of efficient optimization algorithms and a rigorous analysis thereof. In this respect, the area of matrix and tensor optimization relates to the field of applied algebraic geometry through the problems addressed and some of the concepts employed. In this minisymposium, we wish to bring the latest developments in both of these aspects to attention. (25 minutes for each presentation, including questions, followed by a 5-minute break; in case of x<4 talks, the first x slots are used unless indicated otherwise)

Matrix and Tensor Factorizations with Nonnegativity
In this talk we survey recent essential developments of the ideas of low-rank matrix approximation and consider their extensions to tensors. The practical importance of this approach lies in its paradigm of using only a small part of the matrix entries, which allows one to construct a sufficiently accurate approximation quickly for “big data” matrices that cannot be placed in any available computer memory and are accessed implicitly through calls to a procedure producing any individual entry on demand. We consider how this approach can be used when we need to maintain nonnegativity of the elements.

Decompositions and optimizations of conjugate symmetric complex tensors
Conjugate partial-symmetric (CPS) tensors are the higher-order generalization of Hermitian matrices. Mirroring the role played by Hermitian matrices in matrix theory and quadratic optimization, CPS tensors have attracted growing interest in tensor theory and optimization, particularly in applications including radar signal processing and quantum entanglement. We study CPS tensors with a focus on ranks, rank-one decompositions, and optimization over the spherical constraint. We propose a constructive algorithm, with proof of correctness, that decomposes any CPS tensor into a sum of rank-one CPS tensors. Three types of ranks for CPS tensors are defined and shown to be different in general. This leads to the invalidity of the conjugate version of Comon's conjecture. We then study rank-one approximations and matricizations of CPS tensors. By carefully unfolding CPS tensors to Hermitian matrices, rank-one equivalence can be preserved. This enables us to develop new convex optimization models and algorithms to compute best rank-one approximations of CPS tensors. Numerical experiments on various data are performed to demonstrate the capability of our methods.

Chebyshev polynomials and best rank-one approximation ratio
We establish a new extremal property of the classical Chebyshev polynomials in the context of the theory of rank-one approximations of tensors. We also give some necessary conditions for a tensor to be a minimizer of the ratio of spectral and Frobenius norms. This is joint work with Andrei Agrachev and André Uschmajew.

Optimization methods for computing low rank eigenspaces
We consider the task of approximating the eigenspace belonging to the lowest eigenvalues of a self-adjoint operator on a space of matrices, with the condition that it is spanned by low-rank matrices that share a common row space of small dimension. Such a problem arises, for example, in the DMRG algorithm in quantum chemistry. We propose a Riemannian optimization method based on trace minimization that takes the orthogonality and low-rank constraints simultaneously into account, and that shows better numerical results in certain scenarios compared to other current methods. This is joint work with Christian Krumnow and Max Pfeffer. |
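For context on the low-rank approximations running through this session, here is a minimal numpy sketch of the best rank-k approximation of a matrix via truncated SVD (the Eckart-Young-Mirsky theorem). This is background illustration only; the talks above concern the much harder tensor analogues, where no such clean SVD-based answer exists.

```python
import numpy as np

def best_rank_k(A, k):
    """Best rank-k approximation of A in both the Frobenius and
    spectral norms (Eckart-Young-Mirsky), via truncated SVD."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 8))  # rank <= 3
A1 = best_rank_k(A, 1)
print(np.linalg.matrix_rank(A1))          # 1
print(np.allclose(best_rank_k(A, 3), A))  # True: A already has rank <= 3
```

For matrices the set of rank-at-most-k matrices is a determinantal variety and the nearest point is given by the SVD; for tensors the corresponding varieties are the subject of the algebraic-geometric analysis described in the minisymposium abstract.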
| 10:00am - 12:00pm | MS130, part 1: Polynomial optimization and its applications |
| Unitobler, F022 | |
10:00am - 12:00pm
Polynomial optimization and its applications The importance of polynomial (aka semi-algebraic) optimization is highlighted by the large number of its interactions with different research domains of mathematical sciences. These include, but are not limited to, automatic control, combinatorics, and quantum information. The mini-symposium will focus on the development of methods and algorithms dedicated to the general polynomial optimization problem. Both the theoretical and more applicative viewpoints will be covered. (25 minutes for each presentation, including questions, followed by a 5-minute break; in case of x<4 talks, the first x slots are used unless indicated otherwise) The Geometry of SDP-Exactness in Quadratic Optimization Consider the problem of minimizing a quadratic objective subject to quadratic equations. We study the semialgebraic region of objective functions for which this problem is solved by its semidefinite relaxation. For the Euclidean distance problem, this is a bundle of spectrahedral shadows surrounding the given variety. We characterize the algebraic boundary of this region and we derive a formula for its degree. This is joint work with Corey Harris and Bernd Sturmfels. Semidefinite representations of the set of separable states The convex set of separable states plays a fundamental role in quantum information theory and corresponds to the set of non-entangled states. In this talk I will discuss the question of (exact) semidefinite representations for this convex set. Using connections with nonnegative polynomials and sums of squares I will characterize the cases when this set has, or not, an SDP representation. Noncommutative polynomial optimization and quantum graph parameters Graph parameters such as the stability number and chromatic number can be formulated in several ways. 
For example as polynomial optimization problems or using nonlocal games (in which two separated parties must convince a referee that they have a valid stable set/coloring of the graph of a certain size). After recalling these formulations, we show how they can be used in quantum information theory to study the power of entanglement. The formulation in terms of nonlocal games gives rise to quantum versions of these graph parameters. The polynomial optimization perspective provides hierarchies of semidefinite programming bounds on the classical parameters and we show how the framework of noncommutative polynomial optimization can be used to obtain analogous hierarchies on the quantum graph parameters. This approach unifies several existing bounds on the quantum graph parameters. On Convexity of Polynomials over a Box In the first and main part of this talk, I show that unless P=NP, there exists no polynomial time (or even pseudo-polynomial time) algorithm that can test whether a cubic polynomial is convex over a box. This result is minimal in the degree of the polynomial and in some sense justifies why convexity detection in nonlinear optimization solvers is limited to quadratic functions or functions with special structure. As a byproduct, the proof shows that the problem of testing whether all matrices in an interval family are positive semidefinite is strongly NP-hard. This problem, which was previously shown to be (weakly) NP-hard by Nemirovski, is of independent interest in the theory of robust control. I will explain the differences between weak and strong NP-hardness clearly and show how our proof bypasses a step in Nemirovski's reduction that involves "matrix inversion". Indeed, while this operation takes polynomial time, it can result in an exponential increase in the numerical value of the rational numbers involved. 
In the second and shorter part of the talk, I present sum-of-squares-based semidefinite relaxations for detecting or imposing convexity of polynomials over a box. I do this in the context of the convex regression problem in statistics. I also show the power of this semidefinite relaxation in approximating any twice continuously differentiable function that is convex over a box. |
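The contrast drawn above, convexity detection being tractable for quadratics but NP-hard already for cubics over a box, can be made concrete: a quadratic q(x) = ½ xᵀAx + bᵀx + c is convex (over any box, indeed over all of Rⁿ) exactly when A is positive semidefinite, which is checkable in polynomial time. A minimal sketch for the 2x2 case, where PSD reduces to two sign conditions (the function name and setup are ours, not from the talk):

```python
# Convexity of a quadratic q(x) = 0.5 x^T A x + b^T x + c is equivalent to
# the symmetric matrix A being positive semidefinite (PSD). For a 2x2
# symmetric matrix, PSD holds iff trace >= 0 and det >= 0 (both eigenvalues
# are nonnegative iff their sum and product are).
def is_convex_quadratic_2x2(A):
    a, b = A[0]
    b2, c = A[1]
    assert b == b2, "A must be symmetric"
    trace, det = a + c, a * c - b * b
    return trace >= 0 and det >= 0

# x^2 + y^2 (A = 2*I) is convex; x^2 - y^2 is not.
```

For general n one would test PSD via a Cholesky attempt or an eigenvalue computation, still polynomial time; the talk's point is that no such certificate exists for cubics over a box unless P=NP.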
| 10:00am - 12:00pm | MS163: Theory and methods for tensor decomposition |
| Unitobler, F023 | |
|
|
10:00am - 12:00pm
Theory and methods for tensor decomposition Tensors are a ubiquitous data structure with applications in numerous fields, including machine learning and big data. Decomposing a tensor is important for understanding the structure of the data it represents. Furthermore, there are different ways to decompose tensors, each of which poses its own theoretical and computational challenges and has its own applications. In our minisymposium, we will bring together researchers from different communities to share their recent research discoveries in the theory, methods, and applications of tensor decomposition. (25 minutes for each presentation, including questions, followed by a 5-minute break; in case of x<4 talks, the first x slots are used unless indicated otherwise) A nearly optimal algorithm to decompose binary forms Symmetric tensor decomposition is equivalent to Waring’s problem for homogeneous polynomials; that is, to write a homogeneous polynomial in n variables of degree D as a sum of D-th powers of linear forms, using the minimal number of summands. We focus on decomposing binary forms, a problem that corresponds to the decomposition of symmetric tensors of dimension 2 and order D. We present the first quasi-linear algorithm to decompose binary forms. It computes a symbolic decomposition in O(M(D)log(D)) arithmetic operations, where M(D) is the complexity of multiplying two polynomials of degree D. We also bound the algebraic degree of the problem by min(rank, D − rank + 1) and show that this bound is tight. On convergence of matrix and tensor approximate diagonalization algorithms by unitary transformations Jacobi-type methods are commonly used in signal processing for approximate diagonalization of complex matrices and tensors by unitary transformations. In this paper, we propose a gradient-based Jacobi algorithm and prove several convergence results for this algorithm. 
We establish global convergence rates for the norm of the gradient and prove local linear convergence under mild conditions. The convergence results also apply to the case of approximate orthogonal diagonalization of real-valued tensors. Non-linear singular value decomposition In data mining, machine learning, and signal processing, among others, many tasks such as dimensionality reduction, feature extraction, and classification are often based on the singular value decomposition (SVD). As a result, the usage and computation of the SVD have been extensively studied and well understood. However, as current models take into account the non-linearity of the world around us, non-linear generalizations of the SVD are needed. We present our ideas on this topic. In particular, we aim at decomposing nonlinear multivariate vector functions with the following three goals in mind: 1. to provide an interpretation of the underlying processes or phenomena, 2. to simplify the model by reducing the number of parameters, and 3. to preserve its descriptive power. We use tensor techniques to achieve these goals and briefly discuss the potential of this approach for inverting nonlinear functions and curve fitting. A symmetrization approach to hypermatrix SVD We describe how to derive the third order hypermatrix SVD from the spectral decomposition of third order hypermatrices resulting from the product of transposes of a given third order hypermatrix. |
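In the real symmetric matrix case, the Jacobi-type methods discussed above reduce to the classical Jacobi eigenvalue iteration: repeatedly apply a plane rotation chosen to zero the largest off-diagonal entry. A minimal pure-Python sketch of that special case (the talk's setting, complex matrices and tensors under unitary transformations with gradient-based updates, is substantially more general):

```python
import math

def jacobi_diagonalize(A, max_rot=100, tol=1e-12):
    """Classical Jacobi iteration for a real symmetric matrix: repeatedly
    zero the largest off-diagonal entry via a plane rotation A <- R A R^T.
    Returns the approximate eigenvalues (the final diagonal)."""
    n = len(A)
    A = [row[:] for row in A]  # work on a copy
    for _ in range(max_rot):
        # locate the largest off-diagonal entry (p, q)
        p, q = 0, 1
        for i in range(n):
            for j in range(i + 1, n):
                if abs(A[i][j]) > abs(A[p][q]):
                    p, q = i, j
        if abs(A[p][q]) < tol:
            break
        # rotation angle that zeroes A[p][q]
        theta = 0.5 * math.atan2(2 * A[p][q], A[q][q] - A[p][p])
        c, s = math.cos(theta), math.sin(theta)
        for k in range(n):  # update rows p and q
            Apk, Aqk = A[p][k], A[q][k]
            A[p][k] = c * Apk - s * Aqk
            A[q][k] = s * Apk + c * Aqk
        for k in range(n):  # update columns p and q
            Akp, Akq = A[k][p], A[k][q]
            A[k][p] = c * Akp - s * Akq
            A[k][q] = s * Akp + c * Akq
    return [A[i][i] for i in range(n)]
```

For instance, `jacobi_diagonalize([[2.0, 1.0], [1.0, 2.0]])` recovers the eigenvalues 1 and 3 after a single rotation; the convergence questions in the session concern the analogous (and harder) tensor iteration, where exact diagonalization is generally impossible.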
| 10:00am - 12:00pm | MS148, part 2: Algebraic neural coding |
| Unitobler, F-105 | |
|
|
10:00am - 12:00pm
Algebraic Neural Coding Neuroscience aims to decipher how the brain represents information via the firing of neurons. Place cells of the hippocampus have been demonstrated to fire in response to specific regions of Euclidean space. Since this discovery, a wealth of mathematical exploration has described connections between the algebraic and combinatorial features of the firing patterns and the shape of the space of stimuli triggering the response. These methods generalize to other types of neurons with similar response behavior. At the SIAM AG meeting, we hope to bring together a group of mathematicians doing innovative work in this exciting field. This will allow experts in commutative algebra, combinatorics, geometry and topology to connect and collaborate on problems related to neural codes, neural rings, and neural networks. (25 minutes for each presentation, including questions, followed by a 5-minute break; in case of x<4 talks, the first x slots are used unless indicated otherwise) Sunflowers of Convex Sets and New Obstructions to Convexity Any collection of convex open sets in R^d gives rise to an associated neural code. The question of which codes can be realized in this way has been an open problem for a number of years, and although recent literature has described rich combinatorial and geometric obstructions to convexity, a full classification (even conjectural) is far out of reach. I will describe some new obstructions based on sunflowers of convex open sets, and show how these obstructions differ fundamentally from those which have been investigated previously. Convex Codes and Oriented Matroids Convex neural codes describe the intersection patterns of collections of convex open sets. Representable oriented matroids describe the intersection patterns of collections of half spaces—that is, of convex sets with convex complements. It is thus natural to view convex codes as a generalization of oriented matroids. 
In this talk, we will make this relationship precise. First, using a new notion of neural code morphism, we show that a code has a realization with convex polytopes if and only if it is the image of a representable matroid under such a morphism. This allows us to translate the problem of whether a code has a convex polytope realization into a matroid completion problem. Next, we enumerate all neural codes which are images of small representable matroids, and use the relationship between convex codes and oriented matroids to define new signatures of convexity and non-convexity. This is joint work with Alex Kunin and Zvi Rosen. Sufficient Conditions for 1- and 2-Inductively Pierced Codes Neural codes are binary codes in {0, 1}^n; here we focus on the ones which represent the firing patterns of a type of neurons called place cells. There is much interest in determining which neural codes can be realized by a collection of convex sets. However, drawing representations of these convex sets, particularly as the number of neurons in a code increases, can be very difficult. Nevertheless, for a class of codes that are said to be k-inductively pierced for k = 0, 1, 2, there is an algorithm for drawing Euler diagrams. Here we use the toric ideal of a code to show sufficient conditions for a code to be 1- or 2-inductively pierced, so that we may use the existing algorithm to draw realizations of such codes. Progress Toward a Classification of Inductively Pierced Codes via Polyhedra A difficult problem in the field of combinatorial neural codes is to determine when a given code can be represented in the plane as intersections of convex sets and their complements. If the code is 2-inductively pierced, then there exists a polynomial-time algorithm which constructs such a representation in the plane and which uses closed discs as the convex sets. 
Recently, Gross, Obatake, and Youngs provided a way to classify 2-inductively pierced codes for up to three neurons by considering a special weight order on ideals of polynomials associated to the codes. In this talk, we present progress toward extending their result for an arbitrary number of neurons. We focus on the use of state polytopes of homogeneous toric ideals, which encode their distinct reduced Gröbner bases. It is the properties of these bases that we aim to connect to being 2-inductively pierced. |
| 10:00am - 12:00pm | MS151, part 2: Cluster algebras and positivity |
| Unitobler, F-106 | |
|
|
10:00am - 12:00pm
Cluster algebras and positivity Cluster algebras are commutative rings whose generators and relations can be defined in a remarkably succinct recursive fashion. Algebras of this kind, introduced by Fomin and Zelevinsky in 2000, are equipped with a powerful combinatorial structure frequently appearing in many mathematical contexts such as Lie theory, triangulations of surfaces, Teichmueller theory and beyond. Coordinate rings of Grassmannians and related invariant rings are well-studied examples of algebras of this type. One important aspect arising from the intrinsic combinatorial structure of cluster algebras is that it uncovers systematic, intriguing and complex positivity properties in these families of rings. For instance, it is expected that for each cluster algebra there is a distinguished basis, such that all elements can be expressed as a "positive" linear combination of basis vectors. Seemingly elementary claims of this type, so far proved only in certain cases, have triggered important developments in research areas at the intersection of geometry, algebra and combinatorics. In this session, we glimpse at recent developments in this field and discuss open questions. (25 minutes for each presentation, including questions, followed by a 5-minute break; in case of x<4 talks, the first x slots are used unless indicated otherwise) Combinatorics of cluster structures in Schubert varieties The (affine cone over the) Grassmannian is a prototypical example of a variety with cluster structure. Scott (2006) gave a combinatorial description of this cluster algebra in terms of Postnikov's plabic graphs. It has been conjectured essentially since Scott's result that Schubert varieties also have a cluster structure with a description in terms of plabic graphs. I will discuss recent work with K. Serhiyenko and L. Williams proving this conjecture. 
The proof uses a result of Leclerc, who shows that many Richardson varieties in the full flag variety have cluster structure using cluster-category methods, and a construction of Karpman to build plabic graphs for each Schubert variety. Cluster tilting modules for mesh algebras Mesh algebras are a class of finite-dimensional algebras which naturally generalize preprojective algebras. In this talk, I describe cluster tilting modules for mesh algebras of Dynkin type and discuss possible relations to skew-symmetrizable cluster algebra structures in certain coordinate rings. This is joint work with K. Erdmann and S. Gratz. Strings, snake graphs and the cluster expansion formulas Snake graphs arise from cluster algebras associated to triangulations of marked oriented surfaces in the work of Musiker, Schiffler and Williams in the context of Laurent expansion formulas. In this talk we will show a correspondence between snake graphs and combinatorial objects called strings. String combinatorics first arose in the context of the classification of indecomposable modules in a large class of tame algebras, the so-called special biserial algebras. We will show how this new interpretation of snake graphs in terms of strings leads to an alternative cluster expansion formula for cluster algebras arising from triangulations of surfaces. This is joint work with Ilke Canakci as well as joint work in progress with Nathan Reading and Vincent Pilaud. Friezes and Grassmannian cluster structures In this talk, I will show how to obtain SL_k-friezes using Plücker coordinates by taking certain subcategories of the Grassmannian cluster categories. These are cluster structures associated to the Grassmannians of k-spaces in n-space. Many of these friezes arise from specialising a cluster to 1. We use Iyama-Yoshino reduction to reduce the rank of such friezes. This is joint work with E. Faber, S. Gratz, G. Todorov, K. Serhiyenko. https://arxiv.org/abs/1810.10562 |
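A concrete entry point to the recursions underlying this session is the rank-2 cluster mutation of type A_2, x_{n+1} = (x_n + 1)/x_{n-1}: every term is a Laurent polynomial in the initial cluster (x_1, x_2) with positive coefficients, and the sequence is periodic with period 5. A tiny numeric sketch of the periodicity, in exact arithmetic (illustrative only; none of the speakers' constructions are reproduced here):

```python
from fractions import Fraction

def mutate_sequence(x1, x2, steps):
    """Iterate the type A_2 cluster exchange relation
    x_{n+1} = (x_n + 1) / x_{n-1} from the seed (x1, x2).
    Exact rational arithmetic makes the period-5 behavior visible."""
    seq = [Fraction(x1), Fraction(x2)]
    for _ in range(steps):
        seq.append((seq[-1] + 1) / seq[-2])
    return seq

# From the seed (2, 3): 2, 3, 2, 1, 1, 2, 3, ... -- period 5.
```

The Laurent phenomenon guarantees that, treating x1 and x2 symbolically, every denominator that appears is a monomial in x1, x2; positivity of the numerator coefficients is exactly the kind of statement the session's abstracts refer to.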
| 10:00am - 12:00pm | MS140, part 2: Multivariate spline approximation and algebraic geometry |
| Unitobler, F-107 | |
|
|
10:00am - 12:00pm
Multivariate spline approximation and algebraic geometry The focus of the proposed minisymposium is on problems in approximation theory that may be studied using techniques from commutative algebra and algebraic geometry. Research interests of the participants relevant to the minisymposium fall broadly under multivariate spline theory, interpolation, and geometric modeling. For instance, a main problem of interest is to study the dimension of the vector space of splines of a bounded degree on a simplicial complex; recently there have been several advances on this front using notions from algebraic geometry. Nevertheless, this problem remains elusive in low degree; the dimension of the space of piecewise cubics on a planar triangulation (especially relevant for applications) is still unknown in general. (25 minutes for each presentation, including questions, followed by a 5-minute break; in case of x<4 talks, the first x slots are used unless indicated otherwise) Bounds on the dimension of spline spaces on polyhedral cells We study the space of spline functions defined on polyhedral cells. These cells are the union of 3-dimensional polytopes sharing a common vertex, so that the intersection of any two of the polytopes is a face of both. In the talk, we will present new bounds on the dimension of this spline space. We provide a bound on the contribution of the homology term to the dimension count, and prove upper and lower bounds on the ideal of the interior vertex which depend only on combinatorial (or matroidal) information of the cell. We use inverse systems to convert the problem of finding the dimension of ideals generated by powers of linear forms to a computation of dimensions of so-called fat point ideals. The fat point schemes that come from dualizing polyhedral cells are particularly well suited and lead to the exact dimension in many cases of interest that will also be presented in the talk. 
On the gradient conjecture for homogeneous polynomials The following conjecture was proposed by Tom McKinley and myself: Let p and f be homogeneous polynomials in n variables such that p(grad f) = 0. Then p(grad)f = 0. This intriguing conjecture is closely related to the work of Gordan and Noether on polynomials with vanishing Hessians and to some density problems proposed by Pinkus and Wainryb. In my talk I will indicate some particular cases when the conjecture holds true; in particular, when the number of variables is at most 5 and when deg(p) = 2. Ambient Spline Approximation of Functions on Submanifolds Recently, a novel approach to the approximation of functions on submanifolds has emerged: it is based on extending the function constantly along the normals and approximating this extension by functions that exist in the ambient space, with tensor product splines as a prominent example. For these, the approach essentially reproduces on the submanifold the convergence orders one can expect in the ambient space. In the talk, we will give an introduction to the basic concept along with some theoretical results and numerical experiments. Watertight Trimmed NURBS Surfaces Trimmed NURBS are the standard for industrial surface modeling, and all common data exchange formats, like IGES or STEP, are based on them. Typically, trimming curves have such high degree and such complex knot structure that it seems impossible to match them properly to neighboring geometry. Thus, surfaces built from several trimmed NURBS patches are known to reveal gaps along inner boundaries, and it is a cumbersome and sometimes nontrivial task for designers to keep the magnitude of these gaps below an acceptable tolerance. In this talk, we present a novel methodology to construct trimmed NURBS surfaces with prescribed low-order boundary curves, facilitating the representation of watertight surface models within the functionality of standard CAD systems. |
| 10:00am - 12:00pm | MS149, part 2: Stability of moment problems and super-resolution imaging |
| Unitobler, F-111 | |
|
|
10:00am - 12:00pm
Stability of moment problems and super-resolution imaging Algebraic techniques have proven useful in different imaging tasks such as spike reconstruction (single molecule microscopy), phase retrieval (X-ray crystallography), and contour reconstruction (natural images). The available data typically consists of (trigonometric) moments of low to moderate order, and one asks for the reconstruction of fine details modeled by zero- or positive-dimensional algebraic varieties. Often, such reconstruction problems have a generically unique solution when the number of data points is larger than the degrees of freedom in the model. Beyond that, the minisymposium concentrates on simple a-priori conditions to guarantee that the reconstruction problem is well conditioned or only mildly ill-conditioned. For the reconstruction of points on the complex torus, popular results require the order of the moments to be larger than the inverse minimal distance of the points. Moreover, simple and efficient eigenvalue-based methods achieve this stability numerically in specific settings. Recently, the cases of clustered points, points with multiplicities, and positive-dimensional algebraic varieties have been studied by similar methods and shall be discussed within the minisymposium. (25 minutes for each presentation, including questions, followed by a 5-minute break; in case of x<4 talks, the first x slots are used unless indicated otherwise) The condition number of Vandermonde matrices with clustered nodes The condition number of rectangular Vandermonde matrices with nodes on the complex unit circle is important for the stability analysis of algorithms that solve the trigonometric moment problem, e.g. Prony's method. In the univariate case and with well separated nodes, the condition number is well studied, but when nodes are close together, it gets more complicated. For this setting, only a few results exist so far. After providing a short survey of recent developments, our own results are presented. 
Prony's problem and the hyperbolic cross Multivariate extensions of Prony's problem have been actively investigated over the last few years. One difficulty has been that it is less clear than in the univariate case which sampling sets are optimal. One choice, proposed by Sauer, are sets linked to the hyperbolic cross. We state how and why the hyperbolic cross emerges and give an even smaller sampling set than Sauer's, though without an efficient algorithm. Then we derive small sampling sets for multivariate extensions of MUSIC and ESPRIT. Using these sets, the algorithms need significantly fewer samples compared to the full grid. Reconstruction of generalized exponential sums Exponential sums, as used in signal processing, are functions that can be considered to encode the moments of measures supported at finitely many points. Algebraic techniques, such as Prony's method, are used to recover the underlying data of such measures from moments. After introducing these notions, we provide generalizations of the concept of an exponential sum. We follow an algebraic and geometric approach by associating algebraic varieties to these generalized objects and investigate the problem of parameter recovery in this setting. Phase retrieval of sparse continuous-time signals by Prony's method The phase retrieval problem basically consists in recovering a complex-valued signal from the modulus of its Fourier transform. In other words, the phase of the signal in the frequency domain is lost. Recovery problems of this kind occur in electron microscopy, crystallography, astronomy, and communications. The long history of phase retrieval includes countless approaches to finding an analytic or a numerical solution, which is generally challenging due to the well-known ambiguity. 
In order to solve the phase retrieval problem nevertheless, we assume that the unknown continuous-time signal is sparse in the sense that the signal is a superposition of shifted Dirac delta functions or can be represented by a non-uniform spline of certain order. The main question now is: can we always recover the parameters of the unknown signal from the given Fourier intensity? Using a constructive proof, we show that almost all sparse signals consisting of finitely many spikes at arbitrary locations can be uniquely recovered up to trivial ambiguities, namely rotations, time shifts, and conjugated reflections. An analogous result holds for spline functions of arbitrary order. The proof itself consists of two main steps. Exploiting the fact that the autocorrelation function of the sparse signal is here always an exponential sum, we first apply Prony's method to recover the unknown parameters (coefficients and frequencies) of the autocorrelation. In a second step, we use this information to derive the unknown parameters of the true signal. Finally, we illustrate the proposed method with several numerical examples. |
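Prony's method, the workhorse in several of the abstracts above, recovers the nodes of an exponential sum from a short window of samples by solving a Hankel system and factoring the resulting "Prony polynomial". A minimal two-node sketch (the function name is ours; practical implementations use ESPRIT/MUSIC-type eigenvalue methods for the numerical stability questions the session addresses):

```python
import cmath

def prony_two_terms(samples):
    """Recover the two nodes z1, z2 of f(t) = c1*z1**t + c2*z2**t from the
    four samples f(0), f(1), f(2), f(3). The sequence satisfies the linear
    recurrence f(t+2) + p1*f(t+1) + p0*f(t) = 0, so (p0, p1) solves the
    Hankel system [[f0, f1], [f1, f2]] @ [p0, p1] = [-f2, -f3], and the
    nodes are the roots of the Prony polynomial z^2 + p1*z + p0."""
    f0, f1, f2, f3 = samples
    det = f0 * f2 - f1 * f1                # assumes two distinct nodes
    p0 = (-f2 * f2 + f1 * f3) / det        # Cramer's rule
    p1 = (-f0 * f3 + f1 * f2) / det
    disc = cmath.sqrt(p1 * p1 - 4 * p0)    # quadratic formula
    return (-p1 + disc) / 2, (-p1 - disc) / 2

# f(t) = 2**t + 3**t gives samples 2, 5, 13, 35 and nodes 2 and 3.
```

With the nodes in hand, the coefficients c1, c2 follow from a Vandermonde solve; the clustered-node abstracts above quantify exactly how the conditioning of both steps degrades as z1 and z2 approach each other.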
| 10:00am - 12:00pm | Room free |
| Unitobler, F-112 | |
| 10:00am - 12:00pm | MS166, part 1: Computational aspects of finite groups and their representations |
| Unitobler, F-113 | |
|
|
10:00am - 12:00pm
Computational aspects of finite groups and their representations The theory of finite groups and their representations is not only an interesting topic for mathematicians but also provides powerful tools in solving problems in science. New computational tools are making this even more feasible. To name a few, one may find applications in physics, coding theory and cryptography. On the other hand, representation theory is useful in different areas of mathematics such as algebraic geometry and algebraic topology. Due to this wide range of applications, new algorithmic methods are being developed to study finite groups and their representations from a computational perspective. Recent developments in computer algebra systems, and more specifically in computational linear algebra, provide tools for developments in computational aspects of finite groups and their representations. The aim of this minisymposium is to gather experts in the area to discuss the recent achievements and potential new directions. (25 minutes for each presentation, including questions, followed by a 5-minute break; in case of x<4 talks, the first x slots are used unless indicated otherwise) Construction and enumeration of finite groups The talk gives a short survey on the state-of-the-art of computational methods to construct or to enumerate finite groups of a given order. Linear Time Fourier Transforms of $S_{n-k}$-invariant Functions on the Symmetric Group $S_n$ Motivated by Kondor's spectral approach to the NP-complete quadratic assignment problem, this talk discusses new techniques for the efficient computation of discrete Fourier transforms (DFTs) of $S_{n-k}$-invariant functions on the symmetric group $S_n$. We uncover diamond- and leaf-rake-like structures in Young's seminormal and orthogonal representations leading to relatively expensive diamond and cheaper leaf-rake computations. These local computations constitute the basis of a reduction/induction process. 
We introduce a new anticipation technique that avoids diamond computations at the expense of only a small arithmetic overhead for leaf-rake computations. This results in local fast Fourier transforms (FFTs). Combining these local FFTs with a multiresolution scheme closely related to the inductive version of Young's branching rule we obtain a global FFT algorithm that computes the DFT of $S_{n-k}$-invariant functions on $S_n$ in linear time. More precisely, we show that for fixed k and all n ≥ 2k, DFTs of $S_{n-k}$-invariant functions on $S_n$ can be computed in at most $c_k \cdot [S_n : S_{n-k}]$ scalar multiplications and additions, where $c_k$ denotes a positive constant depending only on k. This run-time is order-optimal and improves on Maslen's algorithm. Quadratic Probabilistic Algorithms for Normal Bases It is well known that for any finite Galois extension field K/F, with Galois group G = Gal(K/F), there exists an element α in K whose orbit G·α forms an F-basis of K. Such an element α is called normal and G·α is called a normal basis. In this talk we introduce a probabilistic algorithm for finding a normal element when G is either a finite abelian or a metacyclic group. The algorithm is based on the fact that deciding whether a random element α in K is normal can be reduced to deciding whether $\sum_{\sigma \in G} \sigma(\alpha)\sigma \in K[G]$ is invertible. In an algebraic model, the cost of our algorithm is quadratic in the size of G for metacyclic G and slightly subquadratic for abelian G. This is joint work with Mark Giesbrecht (UWaterloo) and Eric Schost (UWaterloo). |
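The normality test in the last abstract can be illustrated in miniature without the group-algebra machinery: for K/F = GF(2^n)/GF(2), the Galois group is generated by the Frobenius map α ↦ α², so α is normal iff its orbit α, α², α⁴, … is linearly independent over GF(2). A toy check in GF(8) = GF(2)[x]/(x³+x+1), with field elements bit-packed into integers (this is the naive rank test, not the quadratic-time group-algebra algorithm of the talk):

```python
MOD = 0b1011  # the irreducible polynomial x^3 + x + 1 over GF(2)

def gf8_mul(a, b):
    """Carry-less multiplication in GF(8), reducing by MOD on overflow."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0b1000:      # degree reached 3: reduce
            a ^= MOD
        b >>= 1
    return r

def is_normal(alpha):
    """alpha is normal over GF(2) iff {alpha, alpha^2, alpha^4} is a basis,
    i.e. the three orbit vectors have full GF(2) rank."""
    orbit = [alpha]
    for _ in range(2):
        orbit.append(gf8_mul(orbit[-1], orbit[-1]))  # Frobenius = squaring
    rank, rows = 0, orbit[:]
    while rows:              # bitwise Gaussian elimination over GF(2)
        pivot = rows.pop()
        if pivot == 0:
            continue
        rank += 1
        bit = pivot & -pivot
        rows = [r ^ pivot if r & bit else r for r in rows]
    return rank == 3

# x + 1 (0b011) is normal; x (0b010) has trace 0 and is not.
```

Scanning all seven nonzero elements this way is exponential in general; the point of the talk's reduction to invertibility in K[G] is to replace this orbit-rank test by a cheaper group-algebra computation.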
| 10:00am - 12:00pm | MS172, part 2: Algebraic statistics |
| Unitobler, F-121 | |
|
|
10:00am - 12:00pm
Algebraic Statistics Algebraic statistics studies statistical models through the lens of algebra, geometry, and combinatorics. From model selection to inference, this interdisciplinary field has seen applications in a wide range of statistical procedures. This session will focus broadly on new developments in algebraic statistics, both on the theoretical side and the applied side. (25 minutes for each presentation, including questions, followed by a 5-minute break; in case of x<4 talks, the first x slots are used unless indicated otherwise) Geometry of Exponential Graph Models When given network data, we can either compute descriptive statistics (degree distribution, diameter, clustering coefficient, etc.) or we can find a model that explains the data. Modeling allows us to test hypotheses about edge formation, understand the uncertainty associated with the observed outcomes, and conduct inferences about whether the network substructures are more commonly observed than by chance. Modeling is also used for simulation and assessment of local effects. Exponential random graph models (ERGMs) are families of distributions defined by a set of network statistics and, thus, give rise to interesting graph theoretic questions. Our research focuses on the ERGMs where the edge, 2-path, and triangle counts are the sufficient statistics. These models are useful for modeling networks with a transitivity effect such as social networks. One of the most popular research questions for statisticians is goodness-of-fit testing: how well does the model "fit" the data? This is a difficult question for ERGMs, and one way to answer it is to understand the reference set. Given an observed network G, the reference set of G is the set of simple graphs with the same edge, 2-path, and triangle counts as G. In algebraic geometry, it is called the fiber of G; its elements are the 0-1 points on an algebraic variety, which we refer to as the reference variety. 
The goal of this paper is to understand the reference variety through the lens of algebraic geometry. Moment Varieties of Measures on Polytopes This talk brings many areas together: discrete geometry, statistics, algebraic geometry, invariant theory, geometric modeling, symbolic and numerical computations. We study the algebraic relations among moments of uniform probability distributions on polytopes. This is already a non-trivial matter for quadrangles in the plane. In fact, we need to combine invariant theory of the affine group with numerical algebraic geometry to compute the first relevant relations. Moreover, the numerator of the generating function of all moments of a fixed polytope is the adjoint of the polytope, which is known from geometric modeling. We prove the conjecture that the adjoint is the unique polynomial of minimal degree which vanishes on the non-faces of a simple polytope. This talk is based on joint work with Kristian Ranestad, Boris Shapiro and Bernd Sturmfels. The stratification of the maximum likelihood degree for toric varieties The lattice points of a lattice polytope give rise to a family of toric varieties when we allow complex coefficients in the monomial parametrization of the "usual" toric variety associated to the polytope. The maximum likelihood degree (ML degree) of any member of this family is at most the normalized volume of the polytope. The set of coefficient vectors associated to ML degrees smaller than the volume is parametrized by Gelfand-Kapranov-Zelevinsky's principal A-determinant. Not much is known about how the ML degree changes as one moves in the parameter space. We will discuss what we know, starting with toric surfaces. Nested Determinantal Constraints in Linear Structural Equation Models Directed graphical models specify noisy functional relationships among a collection of random variables. In the Gaussian case, each such model corresponds to a semi-algebraic set of positive definite covariance matrices. 
The set is given via parametrization, and much work has gone into obtaining an implicit description in terms of polynomial (in-)equalities. Implicit descriptions shed light on problems such as parameter identification, model equivalence, and constraint-based statistical inference. For models given by directed acyclic graphs, which represent settings where all relevant variables are observed, there is a complete theory: All conditional independence relations can be found via graphical d-separation and are sufficient for an implicit description. The situation is far more complicated, however, when some of the variables are hidden. We consider models associated to mixed graphs that capture the effects of hidden variables through correlated error terms. The notion of trek separation explains when the covariance matrix in such a model has submatrices of low rank and generalizes d-separation. However, in many cases, such as the infamous Verma graph, the polynomials defining the graphical model are not determinantal, and hence cannot be explained by d-separation or trek-separation. We show that these constraints often correspond to the vanishing of nested determinants and can be graphically explained by a notion of restricted trek separation. |
| 10:00am - 12:00pm | MS134, part 3: Coding theory and cryptography |
| Unitobler, F-122 | |
|
|
10:00am - 12:00pm
Coding theory and cryptography The focus of this proposal is on coding theory and cryptography, with emphasis on the algebraic aspects of these two research fields. Error-correcting codes are mathematical objects that allow reliable communications over noisy/lossy/adversarial channels. Constructing good codes and designing efficient decoding algorithms for them often reduces to solving algebra problems, such as counting rational points on curves, solving equations, and classifying finite rings and modules. Cryptosystems can be roughly defined as functions that are easy to evaluate, but whose inverse is difficult to compute in practice. These functions are in general constructed using algebraic objects and tools, such as polynomials, algebraic varieties, and groups. The security of the resulting cryptosystem heavily relies on the mathematical properties of these objects. The sessions we propose feature experts of algebraic methods in coding theory and cryptography. All levels of experience are represented, from junior to very experienced researchers. (25 minutes for each presentation, including questions, followed by a 5-minute break; in case of x<4 talks, the first x slots are used unless indicated otherwise) Linear Complementary Pair of Codes and Some Results on Boolean Functions This talk consists of two parts. In the first part we explain some constructions on linear complementary pair of codes and some applications to cryptography. In the second part we present some recent results on constructions and on counting certain functions related to Boolean functions. This talk presents joint work with a number of colleagues, who will be cited in the talk. Optimal Locally Recoverable Codes via Chebotarev Density Theorem We provide a Galois theoretical framework which allows us to produce good polynomials for the Tamo and Barg construction of optimal locally recoverable codes (LRC). 
Our approach allows us to prove existence results and to construct new good polynomials, which in turn allows us to build new LRCs. The existing theory of good polynomials fits in our new framework. Explicit optimal-length locally repairable codes of small distances Locally repairable codes (LRCs) have received significant recent attention as a method of designing data storage systems robust to server failure. Optimal LRCs offer the ideal trade-off between minimum distance and locality, a measure of the cost of repairing a single codeword symbol. For optimal LRCs with minimum distance greater than or equal to 5, block length is bounded by a polynomial function of alphabet size. In this talk, we give explicit constructions of optimal LRCs with small minimum distance whose length is optimal in terms of alphabet size. Fast Computation of the Roots of Polynomials Over the Ring of Power Series We give an algorithm for computing all roots of polynomials over a univariate power series ring over a field K. Given a precision d and a polynomial Q whose coefficients are power series in x, the algorithm computes a representation of all power series f(x) such that Q(f(x)) = 0 mod x^d. The algorithm works unconditionally, in particular also with multiple roots, where Newton iteration fails. The cost bound for our algorithm matches the worst-case input and output size d deg(Q), up to logarithmic factors. This improves upon previous algorithms, which were quadratic in at least one of d and deg(Q). Our algorithm is a refinement of a divide-and-conquer algorithm by Alekhnovich (2005), where the cost of recursive steps is better controlled via the computation of a factor of Q which has smaller degree while preserving the roots. |
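The talk on roots over power series solves Q(f(x)) = 0 mod x^d in quasi-linear time, even for multiple roots. As a much smaller illustration of the underlying problem (and explicitly not the divide-and-conquer algorithm from the talk), the sketch below lifts a *simple* root of Q mod x to a root mod x^d by Newton iteration over the rationals, doubling the working precision at every step; all helper names are our own.

```python
from fractions import Fraction

def mul(a, b, d):
    """Product of two truncated power series (coefficient lists) mod x^d."""
    c = [Fraction(0)] * d
    for i, ai in enumerate(a[:d]):
        for j, bj in enumerate(b[:d - i]):
            c[i + j] += ai * bj
    return c

def add(a, b, d):
    return [(a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0)
            for i in range(d)]

def inv(a, d):
    """Series inverse mod x^d by Newton iteration; requires a[0] != 0."""
    r = [Fraction(1) / a[0]]
    prec = 1
    while prec < d:
        prec = min(2 * prec, d)
        ar = mul(a, r, prec)
        r = mul(r, [(2 if i == 0 else 0) - ar[i] for i in range(prec)], prec)
    return r

def eval_poly(Q, f, d):
    """Evaluate Q (a list of series: coefficient of y^k) at the series f, mod x^d."""
    res = [Fraction(0)] * d
    for coeff in reversed(Q):  # Horner scheme in y
        res = add(mul(res, f, d), [Fraction(c) for c in coeff], d)
    return res

def newton_root(Q, y0, d):
    """Lift a simple root y0 of Q mod x to a root of Q mod x^d."""
    dQ = [[k * c for c in Q[k]] for k in range(1, len(Q))]  # dQ/dy
    f, prec = [Fraction(y0)], 1
    while prec < d:
        prec = min(2 * prec, d)
        corr = mul(eval_poly(Q, f, prec), inv(eval_poly(dQ, f, prec), prec), prec)
        f = [(f[i] if i < len(f) else Fraction(0)) - corr[i] for i in range(prec)]
    return f

# Q(y) = y^2 - (1 + x): the root starting at 1 is the series of sqrt(1 + x)
Q = [[-1, -1], [0], [1]]
print(newton_root(Q, 1, 5))  # coefficients of 1 + x/2 - x^2/8 + x^3/16 - 5x^4/128
```

The naive quadratic series arithmetic above is exactly what the talk's algorithm avoids, and the simple-root assumption is exactly the restriction it removes.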
| 10:00am - 12:00pm | MS145, part 1: Isogenies in Cryptography |
| Unitobler, F-123 | |
|
|
10:00am - 12:00pm
Isogenies in Cryptography The isogeny graph of elliptic curves over finite fields has long been a subject of study in algebraic geometry and number theory. During the past 10 years, several authors have shown multiple applications in cryptology. One interesting feature is that systems built on isogenies seem to resist attacks by quantum computers, making them the most recent family of cryptosystems studied in post-quantum cryptography. This mini-symposium brings together presentations on cryptosystems built on top of isogenies, their use in applications, and different approaches to cryptanalysis, including quantum cryptanalysis. (25 minutes for each presentation, including questions, followed by a 5-minute break; in case of x<4 talks, the first x slots are used unless indicated otherwise) Overview of isogenies in cryptography (Part I) We will give an introductory overview of the current landscape in isogeny-based cryptography, including SIDH/SIKE and CSIDH. We will then summarise the latest developments and present some open problems. Overview of isogenies in cryptography (Part II) We will give an introductory overview of the current landscape in isogeny-based cryptography, including SIDH/SIKE and CSIDH. We will then summarise the latest developments and present some open problems. Quantum attacks against isogenies Childs, Jao, and Soukharev introduced a subexponential quantum attack against the original isogeny-based cryptosystem of Couveignes, Rostovtsev, and Stolbunov. The attack uses a subexponential quantum algorithm introduced by Kuperberg to find hidden shifts. This talk will (1) introduce the hidden-shift problem and the isogeny problem, (2) survey the attack algorithms, and (3) summarize the latest analyses of the costs of attacking CSIDH. This includes joint work with Lange, Martindale, and Panny (https://quantum.isogenies.org). 
Pre- and post-quantum Diffie-Hellman From a mathematical and algorithmic point of view, one of the nice features of commutative isogeny-based cryptosystems (such as CSIDH) is that they are governed by particularly simple algebraic structures, namely commutative groups acting on sets. On a strictly formal level, this allows us to draw strong analogies with classical Diffie-Hellman and discrete-logarithm-based cryptosystems, problems, and algorithms. In this talk we will explore these analogies and their limitations, and consider the relationships between the "hard" problems underlying commutative isogeny-based cryptosystems in both the pre- and post-quantum settings. |
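The Diffie-Hellman analogy in the last abstract can be made concrete with a toy classical example. The parameters below are illustrative only; real deployments use large groups, and commutative isogeny schemes such as CSIDH replace modular exponentiation by a commutative class-group action on supersingular curves, which is not shown here.

```python
# Toy classical Diffie-Hellman over a small prime field (illustrative only).
p, g = 2039, 7          # small public prime and generator (toy parameters)

a, b = 1234, 567        # Alice's and Bob's secret exponents
A = pow(g, a, p)        # Alice's public value g^a mod p
B = pow(g, b, p)        # Bob's public value g^b mod p

shared_alice = pow(B, a, p)   # (g^b)^a
shared_bob = pow(A, b, p)     # (g^a)^b
assert shared_alice == shared_bob  # commutativity yields the shared secret
```

The "hard" problem here is recovering a from g^a; the talk explores how far this formal picture carries over to group actions on sets of curves.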
| 1:30pm - 2:30pm | IP04: Helmut Pottmann: Applications of sphere geometries in computational design |
| vonRoll, Fabrikstr. 6, 001 | |
|
|
1:30pm - 2:30pm
Applications of sphere geometries in computational design KAUST, Saudi Arabia The classical sphere geometries of Möbius, Laguerre and Lie provide a rich source of knowledge which can be highly useful in the solution of problems in computational design. We will demonstrate this using three application scenarios which also exhibit a relation to algebraic geometry: (i) Rational curves and surfaces with rational offsets possess various applications in Computer-Aided Manufacturing. Their study and design can be based on Laguerre geometry, where they appear as unconstrained rational curves or surfaces in the so-called isotropic model. (ii) The most elegant discrete versions of principal curvature parameterizations of surfaces are objects of sphere geometries and they form the basis for the construction of smooth surfaces from low degree algebraic patches. (iii) The design of various types of circle patterns on surfaces can be effectively based on sphere geometric models. These patterns only exist on those surfaces which carry at least two families of circles. Their complete classification is a problem of algebraic geometry which has been recently solved by R. Krasauskas and M. Skopenkov. |
| 1:30pm - 2:30pm | IP04-streamed from 001: Helmut Pottmann: Applications of sphere geometries in computational design |
| vonRoll, Fabrikstr. 6, 004 | |
| 2:30pm - 3:00pm | Coffee break |
| Unitobler, F wing, floors 0 and -1 | |
| 3:00pm - 5:00pm | MS165, part 2: Multiparameter persistence: algebra, algorithms, and applications |
| Unitobler, F006 | |
|
|
3:00pm - 5:00pm
Multiparameter persistence: algebra, algorithms, and applications Multiparameter persistent homology is an area of applied algebraic topology that studies topological spaces, often arising from complex data, simultaneously indexed by multiple parameters. In the usual setting, persistent homology studies a single-parameter filtration associated with a topological space. The homology of such a filtration is a persistence module, which can be conveniently described by its barcode decomposition. In many applications, however, a single-parameter filtration is not adequate to encode the structures of interest in complex data; two or more filtrations may be required. Multiparameter persistence studies the homology of spaces equipped with multiple filtrations. The homological invariants of these spaces are far more complicated than in the single-parameter setting, requiring new algebraic, computational, and statistical techniques. This work has deep connections to representation theory and commutative algebra, with compelling applications to data analysis. Recent years have seen considerable advances in multiparameter persistent homology, including algorithms for working with large multiparameter persistence modules, software for computing and visualizing invariants, statistical techniques, and applications. This minisymposium will highlight recent work in multiparameter persistence. Talks will include theoretical results, algorithmic advances, and applications to data analysis. As many important questions remain to be answered in order to advance the theory and to increase the applicability of multiparameter persistence, this minisymposium seeks to cultivate discussion and collaboration that will lead to new results in the practical use of multiparameter persistent homology. 
(25 minutes for each presentation, including questions, followed by a 5-minute break; in case of x<4 talks, the first x slots are used unless indicated otherwise) Algebraic distances for persistent homology One of the main ideas in Topological Data Analysis is to convert application data into an algebraic object called a persistence module and to calculate distances between such modules. I will introduce these constructions and describe the main examples of such distances, called Wasserstein distances. The weakest of these distances, the bottleneck distance, has previously been described algebraically as the interleaving distance. This has been useful in theory and in applications. I will give an algebraic description of all of the Wasserstein distances and discuss their generalizations to multiparameter persistence. This is joint work with Jonathan Scott and Don Stanley. Multiparameter persistence landscapes An important problem in the field of Topological Data Analysis is defining topological summaries which can be combined with traditional data analytic tools. For single-parameter persistence modules, Bubenik introduced the persistence landscape, a stable representation of persistence diagrams amenable to statistical analysis and machine learning tools. In this talk we generalise the persistence landscape to multiparameter persistence modules, providing a stable representation of the rank invariant. We show that multiparameter persistence landscapes are stable with respect to the interleaving distance and persistence weighted Wasserstein distance. Moreover, the multiparameter landscapes enjoy many more desirable properties: the collection of multiparameter landscapes associated to a module are interpretable, computable, amenable to statistical analysis, and faithfully represent the rank invariant. 
We shall provide example calculations to demonstrate potential applications and how one can interpret the multiparameter landscapes associated to a multiparameter module. Geometric perspectives on multiparameter persistence Using ideas inspired from geometric and differential topology, we introduce a version of multiparameter persistence, which combines sub-level and zig-zag persistence. Our construction arises from one-parameter families of smooth functions on compact manifolds. We show how to analyse this version of multiparameter persistence in geometric terms with several examples. Furthermore, we focus on practical aspects of this theory, with an emphasis on visualization and potential algorithm development. This is joint work with Peter Bubenik. Persistent homology of noise I will describe a sequence of fairly naive experiments and small observations, towards a characterization of the persistent homology of noise. This should be viewed as an attempt to quantify what it means for a bar to be "short", vs. "long" or "interesting". |
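For the landscape talk above, the single-parameter construction of Bubenik is easy to state: the k-th landscape function at t is the k-th largest of the "tent" values min(t - b, d - t), clipped at 0, over all bars (b, d). The multiparameter generalization presented in the talk is more involved; this sketch only illustrates the one-parameter case.

```python
def landscape(bars, k, t):
    """k-th persistence landscape function (k = 1, 2, ...) at value t,
    for a barcode given as a list of (birth, death) pairs."""
    tents = sorted((max(0.0, min(t - b, d - t)) for b, d in bars), reverse=True)
    return tents[k - 1] if k <= len(tents) else 0.0

bars = [(0.0, 4.0), (1.0, 3.0)]
print(landscape(bars, 1, 2.0))  # 2.0: the widest bar dominates at t = 2
print(landscape(bars, 2, 2.0))  # 1.0: the second tent function
```

Because each landscape is a real-valued function, landscapes of different datasets can be averaged and fed to standard statistical tools, which is what makes the representation attractive.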
| 3:00pm - 5:00pm | MS200, part 2: From algebraic geometry to geometric topology: Crossroads on applications |
| Unitobler, F007 | |
|
|
3:00pm - 5:00pm
From algebraic geometry to geometric topology: crossroads on applications The purpose of the Minisymposium "From Algebraic Geometry to Geometric Topology: Crossroads on Applications" is to bring together researchers who use algebraic, combinatorial and geometric topology in industrial and applied mathematics. These methods have already seen applications in: biology, physics, chemistry, fluid dynamics, distributed computing, robotics, neural networks and data analysis. (25 minutes for each presentation, including questions, followed by a 5-minute break; in case of x<4 talks, the first x slots are used unless indicated otherwise) Privileged topologies of self-assembling molecular knots The self-assembly of objects with a set of desired properties is a major goal of material science and physics. A particularly challenging problem is that of self-assembling structures with a target topology. Here we show by computer simulation that one may design the geometry of string-like rigid patchy templates to promote their efficient and reproducible self-assembly into a selected repertoire of non-planar closed folds including several knots. In particular, by controlling the template geometry, we can direct the assembly process so as to strongly favour the formation of constructs tied in trefoil or pentafoil, or even of more exotic knot types. A systematic survey reveals that these "privileged", addressable topologies are rare, as they account for only a minute fraction of the simplest knot types. This knot discovery strategy has recently allowed for predicting complex target topologies [1,2,3], some of which have been realized experimentally [4,5].
References
[1] G. Polles et al., "Self-assembling knots of controlled topology by designing the geometry of patchy templates", Nature Communications (2015).
[2] G. Polles et al., "Optimal self-assembly of linked constructs and catenanes via spatial confinement", Macro Letters (2016).
[3] M. Marenda et al., "Discovering privileged topologies of molecular knots with self-assembling models", Nature Communications (2018).
[4] J. Danon et al., "Braiding a molecular knot with eight crossings", Science (2017).
[5] Kim et al., "Coordination-driven self-assembly of a molecular knot comprising sixteen crossings", Angew. Chem. Int. Ed. (2018).
Why are there knots in proteins? There are now more than 1700 protein chains that are known to contain some type of topological knot in their polypeptide chains in the protein structure databank. Although this number is small relative to the total number of protein structures solved, it is remarkably high given the fact that for decades it was thought impossible for a protein chain to fold and thread in such a way as to create a knotted structure. There are four different types of knotted protein structures that contain 3_1, 4_1, 5_2 and 6_1 knots, and over the past 15 years there has been an increasing number of experimental and computational studies on these systems. The folding pathways of knotted proteins have been studied in some detail; however, the focus of this talk is to address the fundamental question “Why are there knots in proteins?” It is known that once formed, knotted protein structures tend to be conserved by nature. This, in addition to the fact that, at least for some deeply knotted proteins, their folding rates are slow compared with many unknotted proteins, has led to the hypothesis that there are some properties of knotted proteins that are different from unknotted ones, and that this has resulted in some evolutionary advantage over faster folding unknotted structures. In this talk, I will review the evidence for and against this theory. In particular, how a knot within a protein chain may affect the thermodynamic, kinetic, mechanical and cellular (resistance to degradation) stability of the protein will be discussed. The study of 2-stratifolds as models for applications (Part 1) In physics, the morphological structure of granular samples in mechanical equilibrium has been modeled by graphs and analyzed by using the first Betti number (S. Ardanza-Trevijano et al.). In TDA, persistent homology is used to study graphs arising from sampling point clouds. 
Graphs can be viewed as 1-dimensional stratified spaces and possibly more information could be obtained by modeling with 2-dimensional stratified spaces, since these provide more topological invariants. For example, in Physics, the study of singularities of soap films (E. Goldstein et al.) and in Chemistry and Biology, the study of cyclo-octane energy landscapes (S. Martin et al.) led to 2-dimensional complexes that consist of unions of 2-manifolds intersecting along a curve. These 2-complexes are special cases of 2-dimensional stratified spaces. In TDA, techniques have been developed (Bendich et al.) for organizing, visualizing and analyzing point cloud data that has been sampled from or near a 2-dimensional stratified space. There is no topological classification of these spaces. A systematic study of 2-dimensional stratified spaces without boundary curves or 0-dimensional singularities, the 2-stratifolds, was begun by W. Heil, F.J. González-Acuña and the speaker. In this talk, we will explore 2-stratifolds with trivial fundamental group. The study of 2-stratifolds as models for applications (Part 2) In physics, the morphological structure of granular samples in mechanical equilibrium has been modeled by graphs and analyzed by using the first Betti number (S. Ardanza-Trevijano et al.). In TDA, persistent homology is used to study graphs arising from sampling point clouds. Graphs can be viewed as 1-dimensional stratified spaces and possibly more information could be obtained by modeling with 2-dimensional stratified spaces, since these provide more topological invariants. For example, in Physics, the study of singularities of soap films (E. Goldstein et al.) and in Chemistry and Biology, the study of cyclo-octane energy landscapes (S. Martin et al.) led to 2-dimensional complexes that consist of unions of 2-manifolds intersecting along a curve. These 2-complexes are special cases of 2-dimensional stratified spaces. 
In TDA, techniques have been developed (Bendich et al.) for organizing, visualizing and analyzing point cloud data that has been sampled from or near a 2-dimensional stratified space. There is no topological classification of these spaces. A systematic study of 2-dimensional stratified spaces without boundary curves or 0-dimensional singularities, the 2-stratifolds, was begun by J.C. Gómez-Larrañaga, F.J. González-Acuña and the speaker. In this talk, we will describe an efficient algorithm on the labeled graph of a 2-stratifold that determines its homotopy type and an efficient algorithm that determines if its fundamental group is infinite cyclic. Also, we will discuss embeddings of 2-stratifolds as 3-manifold spines and talk about the solvability of the word problem for 2-stratifold groups. |
| 3:00pm - 5:00pm | MS199, part 1: Applications of topology in neuroscience |
| Unitobler, F011 | |
|
|
3:00pm - 5:00pm
Applications of topology in neuroscience Research at the interface of topology and neuroscience is growing rapidly and has produced many remarkable results in the past five years. In this minisymposium, speakers will present a wide and exciting array of current applications of topology in neuroscience, including classification and synthesis of neuron morphologies, analysis of synaptic plasticity, and diagnosis of traumatic brain injuries. (25 minutes for each presentation, including questions, followed by a 5-minute break; in case of x<4 talks, the first x slots are used unless indicated otherwise) Understanding neuronal shapes with algebraic topology The morphological diversity of neurons supports the complex information-processing capabilities of biological neuronal networks. A major challenge in neuroscience has been to reliably describe neuronal shapes with universal morphometrics that generalize across cell types and species. Inspired by algebraic topology, we have conceived a topological descriptor of trees that couples the topology of their complex arborization with the geometric features of their structure, retaining more information than traditional morphometrics. The topological morphology descriptor (TMD) has proved to be very powerful in categorizing neurons into concrete groups based on morphological grounds. The TMD algorithm has led to the discovery of two distinct classes of pyramidal cells in the human cortex, and the identification of robust groups for rodent cortical neurons. Computing homotopy types of directed flag complexes I will present some techniques from elementary algebraic topology that can in some cases be used to completely determine the homotopy types of directed flag complexes (or more generally, ordered simplicial complexes). These involve simplicial collapses, heuristic homology computations and coning operations. 
Then I will explain how to use these techniques to classify the homotopy types for certain large families of tournaments. Finally, I will show how to compute the homotopy type for the C. elegans connectome. Applications of persistent homology to stroke therapy Stroke has recently been called “the epidemic of the 21st century." Today, 1.5 million new strokes occur each year in Europe alone, and this number is expected to increase by a factor of 1.5 to 2 by the year 2050. Despite advances in treatment, only a small proportion of patients recover enough to re-enter normal life. The brain is a highly networked organ; accordingly, many brain diseases, including stroke, are increasingly understood as network disorders. Stroke lesions cause impairment by disabling nearby nodes and edges in the structural network, which in turn affects the functional network and the corresponding behavior. Likewise, it is hypothesized that recovery from stroke can be expressed in terms of reorganization of both functional and structural networks. The use of graph theory-based metrics to study brain networks is well established. Some metrics, such as degree or betweenness centrality, capture local characteristics; others, such as connection density or the small-world index, capture global characteristics of a network. In recent years, algebraic topology has become increasingly prominent for its ability to integrate local network characteristics into a global notion of shape. We will present evidence that in vivo structural brain networks possess significantly more cavities than random networks with the same degree distribution, building on in silico evidence from Hess et al. that information flow is organized by topological invariants. We will also attempt to use persistent homology to distinguish between two groups of patients: those who, within 3 months poststroke, recover roughly 70% of lost motor function (called “fitters") and those who do not (called “non-fitters"). 
Neural decoding using TDA Neural decoding is the process of determining which stimuli are driving the activity of neurons. For instance, head direction neurons fire depending on which direction the animal is looking. Determining this relationship, however, can be a tedious process, where the researcher would have to track and process all kinds of stimuli that might be relevant for the neural activity. |
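For the talk on directed flag complexes: the definition is concrete enough to code directly. An n-simplex of the directed flag complex of a digraph is an ordered (n+1)-tuple of distinct vertices with an edge from every earlier vertex to every later one. This is only a definition-level sketch on a tiny graph, not the collapsing machinery from the talk.

```python
from itertools import permutations, combinations

def directed_flag_simplices(vertices, edges, dim):
    """All dim-simplices of the directed flag complex: ordered (dim+1)-tuples
    of distinct vertices with an edge v_i -> v_j for every i < j."""
    E = set(edges)
    return [tup for tup in permutations(vertices, dim + 1)
            if all((tup[i], tup[j]) in E
                   for i, j in combinations(range(dim + 1), 2))]

# A directed 3-cycle alone has no totally ordered triangle, but adding a
# "source" vertex 3 with edges to every cycle vertex creates 2-simplices.
V = [0, 1, 2, 3]
E = [(0, 1), (1, 2), (2, 0), (3, 0), (3, 1), (3, 2)]
print(len(directed_flag_simplices(V, E, 1)))  # 6: the 1-simplices are the edges
print(directed_flag_simplices(V, E, 2))       # [(3, 0, 1), (3, 1, 2), (3, 2, 0)]
```

Note the asymmetry with ordinary flag complexes: the 3-cycle contributes no triangle because no ordering of its vertices is compatible with all edge directions.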
| 3:00pm - 5:00pm | MS160, part 2: Numerical methods for structured polynomial system solving |
| Unitobler, F012 | |
|
|
3:00pm - 5:00pm
Numerical methods for structured polynomial system solving Improvements in the understanding of numerical methods for dense polynomial system solving led to the complete solution of Smale's 17th problem. At this point, it remains an open challenge to achieve the same success in the solution of structured polynomial systems: explain the typical behavior of current algorithms and devise polynomial-time algorithms for computing roots of polynomial systems. In this minisymposium, researchers will present the current progress on applying numerical methods to structured polynomial systems. (25 minutes for each presentation, including questions, followed by a 5-minute break; in case of x<4 talks, the first x slots are used unless indicated otherwise) Polyhedral Real Homotopy Continuation We design a homotopy continuation algorithm to find real roots of sparse polynomial systems based on numerically tracking a well-known geometric deformation process called Viro’s patchworking. The main advantage of our algorithm is that it entirely operates over the field of real numbers and tracks the optimal number of solution paths. The price for this property is that the algorithm is not guaranteed to work universally, but requires solving and tracking polynomial systems located in an unbounded component of the complement of the A-discriminant. We provide a relative entropy programming based relaxation to certify this requirement on the input data. Root counts of structured algebraic systems We consider bounds on the number of complex roots of well-constrained algebraic systems, namely the mixed volume and the multi-homogeneous Bezout bound. We relate these bounds to permanent expressions, generalizing this relationship to the case of systems whose Newton polytopes are products of arbitrary polytopes in complementary subspaces. This improves the computational complexity of determining the bounds. 
We apply the bounds to obtaining new counts on the number of complex embeddings of minimally rigid graphs in the plane, in space, and on the sphere. We relate the bounds to certain combinatorial properties of the graphs, such as the number of ways to orient the edges according to certain rules. We focus on bounds for semi-mixed algebraic systems where equations are partitioned into subsets with common Newton polytopes. This is applied to counting the number of totally mixed Nash equilibria in games of several players. A local complexity theory I will describe advances on an on-going project of establishing a local complexity theory. This is inspired by the concept of smoothed analysis introduced some years ago by Spielman and Teng, which takes as cost the supremum of the average on small balls around points. Here we are interested in studying directly the average on small balls around points, without considering the supremum on the whole space. We believe that this would be a more accurate notion of cost for practical problems than the smoothed complexity. Our first analysis studies local complexity for real conic condition numbers, under uniform and Gaussian distributions.
Low-degree approximation of real singularities In this talk I will discuss some recent results that allow one to approximate a real singularity given by polynomial equations of degree d (e.g. the zero set of a polynomial, or the number of its critical points of a given Morse index) with a singularity which is diffeomorphic to the original one, but is given by polynomials of degree O(d^{1/2} log d). The approximation procedure is constructive (in the sense that one can read the approximating polynomial from a linear projection of the given one) and quantitative (in the sense that the approximating procedure will hold for a subset of the space of polynomials with measure increasing very quickly to full measure as the degree goes to infinity). I will also discuss the potential of this procedure for improving the average complexity of some algorithms. This is based on a combination of joint works with P. Breiding, D. N. Diatta and H. Keneshlou. |
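The mixed volume mentioned in the root-counting talk has a simple two-variable form that can be computed directly: for Newton polygons P and Q, the Bernstein bound on toric roots is area(P+Q) - area(P) - area(Q). The sketch below (our own illustration, not the permanent-based method from the talk) computes it exactly with rational arithmetic.

```python
from fractions import Fraction
from itertools import product

def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def hull(pts):
    """Convex hull via Andrew's monotone chain, counter-clockwise."""
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return pts
    def half(points):
        h = []
        for p in points:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h
    return half(pts)[:-1] + half(pts[::-1])[:-1]

def area(pts):
    """Euclidean area of the convex hull of a planar point set (shoelace formula)."""
    h = hull(pts)
    s = sum(h[i][0] * h[(i + 1) % len(h)][1] - h[(i + 1) % len(h)][0] * h[i][1]
            for i in range(len(h)))
    return Fraction(abs(s), 2)

def mixed_volume(P, Q):
    """Bernstein root bound for two Newton polygons in the plane:
    MV(P, Q) = area(P + Q) - area(P) - area(Q)."""
    return area([(p[0] + q[0], p[1] + q[1]) for p, q in product(P, Q)]) \
        - area(P) - area(Q)

# Supports of f = a + b*x + c*x*y and g = d + e*y + f*x*y: the Bezout bound
# predicts 2 * 2 = 4 roots, but the mixed volume certifies only 2.
print(mixed_volume([(0, 0), (1, 0), (1, 1)], [(0, 0), (0, 1), (1, 1)]))  # 2
```

This gap between the Bezout bound and the mixed volume is exactly the structure that polyhedral homotopies exploit: they only need to track as many paths as the mixed volume.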
| 3:00pm - 5:00pm | MS167, part 1: Computational tropical geometry |
| Unitobler, F013 | |
|
|
3:00pm - 5:00pm
Computational tropical geometry This session will highlight recent advances in tropical geometry, algebra, and combinatorics, focusing on computational aspects and applications. The area enjoys close interactions with max-plus algebra, polyhedral geometry, combinatorics, Groebner theory, and numerical algebraic geometry. (25 minutes for each presentation, including questions, followed by a 5-minute break; in case of x<4 talks, the first x slots are used unless indicated otherwise) The tropical geometry of shortest paths We study parameterized versions of classical algorithms for computing shortest paths. This is most easily expressed in terms of tropical geometry. Applications include the enumeration of polytropes, i.e., ordinary convex polytopes which are also tropically convex, as well as shortest paths in traffic networks with variable link travel times. Tropicalization of semialgebraic sets arising in convex optimization Linear programming (LP) is the simplest and most studied class of conic optimization problems. It consists in minimizing a linear function over a convex cone that is polyhedral. Important generalizations of LP include semidefinite and hyperbolic programming, in which we allow the underlying cone to have a more complicated structure. In the case of semidefinite programming, the cone is defined by linear matrix inequalities, while a hyperbolicity cone is defined by imposing a positivity condition on the eigenvalues of a hyperbolic polynomial. In all three cases, the underlying cones are semialgebraic, which implies that we can study them over arbitrary real closed fields, such as the nonarchimedean field of real Puiseux series. The tropicalization of such a cone is then defined as its image under the nonarchimedean valuation. In this talk, we discuss the structure of these tropicalizations and the related computational problems. 
In particular, we study how the structure of tropical spectrahedral cones is more restrictive in comparison to the structure of arbitrary tropical convex cones. To obtain these results, we study the structure of arbitrary tropical semialgebraic sets. We also show how tropical convex cones encode stochastic mean payoff games and how this can be used, in the case of generic tropical spectrahedra, to solve the associated feasibility problems. Linear algebra and convexity over symmetrized semirings, hyperfields and systems Rowen introduced a notion of algebraic structure, called systems, which unifies symmetrized tropical semirings, supertropical semirings, and hyperfields. We study linear algebra and convexity over systems. We identify cases in which the row rank, column rank, and submatrix rank of a matrix are equal. We also discuss Helly and Carathéodory numbers. Priority mechanisms are a key element of the management of emergency call centers. These mechanisms can be modeled by dynamical systems with a piecewise affine transition mapping, determined by a rational map in the tropical semifield. Performance indicators can be inferred from stationary regimes. The latter are determined by solving structured tropical polynomial systems: these are analogous to the non-linear eigenproblems associated to Markov decision processes, but priority rules lead to negative "probabilities". |
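The connection between shortest paths and tropical geometry invoked in the first talk is classical: in the min-plus (tropical) semiring, matrix multiplication with (+, min) in place of (*, +) turns powers of the weight matrix into shortest-path distances. A minimal sketch (assuming nonnegative weights and no negative cycles):

```python
INF = float('inf')

def tropical_matmul(A, B):
    """Matrix product in the min-plus (tropical) semiring:
    (A (x) B)[i][j] = min over k of A[i][k] + B[k][j]."""
    n = len(A)
    return [[min(A[i][k] + B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def shortest_paths(W):
    """All-pairs shortest paths as the (n-1)-st tropical power of the weight
    matrix (diagonal 0, INF for missing edges; no negative cycles assumed)."""
    n = len(W)
    D = W
    for _ in range(n - 1):
        D = tropical_matmul(D, W)
    return D

# Directed triangle: 0 -> 1 (weight 3), 1 -> 2 (weight 1), 2 -> 0 (weight 2)
W = [[0, 3, INF],
     [INF, 0, 1],
     [2, INF, 0]]
print(shortest_paths(W))  # [[0, 3, 4], [3, 0, 1], [2, 5, 0]]
```

Parameterizing the entries of W and asking how the shortest-path tree varies is precisely where the polyhedral structure studied in the talk enters.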
| 3:00pm - 5:00pm | MS191, part 2: Algebraic and geometric methods in optimization. |
| Unitobler, F021 | |
|
|
3:00pm - 5:00pm
Algebraic and geometric methods in optimization. Recently, advanced techniques from algebra and geometry have been used to prove remarkable results in optimization. Some examples of the techniques used are polynomial algebra for non-convex polynomial optimization problems, combinatorial tools like Helly's theorem from combinatorial geometry to analyze and solve stochastic programs through sampling, and the use of ideal bases to find optimality certificates. Test-set augmentation algorithms for integer programming, involving Graver sets for block-structured integer programs, come from concepts in commutative algebra. In these sessions, experts will present a wide range of results that illustrate the power of the above-mentioned methods and their connections to applied algebra and geometry. (25 minutes for each presentation, including questions, followed by a 5-minute break; in case of x<4 talks, the first x slots are used unless indicated otherwise) Convergence analysis of measure-based bounds for polynomial optimization on the box, ball and sphere We investigate the convergence rate of a hierarchy of measure-based upper bounds introduced by Lasserre (2011) for the minimization of a polynomial f over a compact set K. These bounds are obtained by searching for a degree 2r sum-of-squares density function h minimizing the expected value of f over K with respect to a given reference measure supported by K. For simple sets like the box [-1,1]^n, the unit ball and the unit sphere (and natural reference measures including the Lebesgue measure), we show that the convergence rate to the global minimum of f is in O(1/r^2) and that this bound is tight for the minimization of linear polynomials. Our analysis relies on an eigenvalue reformulation of the bounds and links to extremal roots of orthogonal polynomials, and the tightness result exploits a link to cubature rules. This is based on joint work with Etienne de Klerk and Lucas Slot. 
Dynamic programming algorithms for integer programming

In this talk, I survey recent progress on the complexity of integer programming in the setting in which it lends itself to dynamic programming approaches. Some of the results are tight under the exponential time hypothesis (ETH). I will also mention open problems; for example, tight results in the presence of explicit upper bounds on the variables are not yet known. The talk is based on joint work with Robert Weismantel.

The support of integer optimal solutions

The support of a vector is the number of its nonzero components. We show that given an integral m x n matrix A, the integer linear optimization problem max{ c^T x : Ax = b, x >= 0, x in Z^n } has an optimal solution whose support is bounded by 2m log(2 sqrt(m) ||A||), where ||A|| is the largest absolute value of an entry of A. Compared to previous bounds, the one presented here is independent of the objective function. We furthermore provide a nearly matching asymptotic lower bound on the support of optimal solutions.

New Fourier interpolation formulas and optimization in Euclidean space

Recently we have proven that a radial Schwartz function can be uniquely reconstructed from a certain discrete set of its values and values of its Fourier transform. Besides being an interesting phenomenon in its own right, this interpolation formula allowed us to obtain sharp linear programming bounds in dimensions 8 and 24 and to prove universal optimality of the E8 and Leech lattices. This is joint work with H. Cohn, A. Kumar, Stephen D. Miller, and Danylo Radchenko. |
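The support bound stated in the second abstract above is easy to sanity-check numerically. A minimal sketch (not the authors' method; the ILP instance below is hypothetical): brute-force a tiny integer program and compare the support of an optimal solution against the bound 2m log(2 sqrt(m) ||A||).

```python
import math
from itertools import product

def support_bound(m, A_max):
    """Upper bound on the support of some optimal solution,
    as stated in the abstract: 2 m log(2 sqrt(m) ||A||)."""
    return 2 * m * math.log(2 * math.sqrt(m) * A_max)

# Hypothetical toy instance: max c^T x s.t. Ax = b, x >= 0 integer.
A = [[1, 2, 1, 0],
     [0, 1, 1, 1]]          # m = 2 constraints, n = 4 variables
b = [4, 3]
c = [2, 3, 1, 1]
m, n = len(A), len(A[0])

def feasible(x):
    return all(sum(A[i][j] * x[j] for j in range(n)) == b[i] for i in range(m))

best_val, best_supp = None, None
for x in product(range(5), repeat=n):      # a small box suffices here
    if feasible(x):
        val = sum(c[j] * x[j] for j in range(n))
        supp = sum(1 for v in x if v != 0)
        # prefer larger value, then smaller support
        if best_val is None or (val, -supp) > (best_val, -best_supp):
            best_val, best_supp = val, supp

bound = support_bound(m, max(abs(e) for row in A for e in row))
print(best_val, best_supp, round(bound, 2))
```

For this instance the optimum is x = (4, 0, 0, 3) with value 11 and support 2, comfortably below the bound of about 6.93.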
| 3:00pm - 5:00pm | MS195, part 2: Algebraic methods for convex sets |
| Unitobler, F022 | |
|
|
3:00pm - 5:00pm
Algebraic methods for convex sets

Convex relaxations are extensively used to solve intractable optimization instances in a wide range of applications. For example, convex relaxations are prominently utilized to find solutions of combinatorial problems that are computationally hard. In addition, convexity-based regularization functions are employed in (potentially ill-posed) inverse problems, e.g., regression, to impose certain desirable structure on the solution. In this minisymposium, we discuss the use of convex relaxations and the study of convex sets from an algebraic perspective. In particular, the goal of this minisymposium is to bring together experts from algebraic geometry (real and classical), commutative algebra, optimization, statistics, functional analysis and control theory, as well as discrete geometry, to discuss recent connections and discoveries at the interfaces of these fields. (25 minutes for each presentation, including questions, followed by a 5-minute break; in case of x<4 talks, the first x slots are used unless indicated otherwise)

Determinantal representations of stable and hyperbolic polynomials

Positive self-adjoint determinantal representations of homogeneous hyperbolic polynomials certify the hyperbolicity and represent the corresponding hyperbolicity cone as a spectrahedron; they therefore play a key role in convex algebraic geometry. I will talk both about them and about their (non-homogeneous) complex cousins: complex polynomials that are stable with respect to the unit polydisc or the product of upper half-planes, and the determinantal representations thereof that certify the corresponding stability property.

Noncommutative polynomials describing convex sets

In their 2012 Annals paper, Helton and McCullough proved that every convex semialgebraic matrix set is described by a linear matrix inequality (LMI).
In this talk we first prove that every irreducible noncommutative polynomial $f$ with convex semialgebraic set $D_f = \{X : f(X) \succ 0\}$ must be of degree at most 2 and concave. Furthermore, for a matrix of noncommutative polynomials $F$ we present effective algorithms for checking whether $D_F$ is convex and for finding an LMI representation of a convex $D_F$. The derivation of these algorithms yields additional features of convex matrix sets that have no counterparts in the commutative theory. Techniques employed include realization theory, noncommutative algebra and semidefinite programming.

Semidefinite Programming and Nash Equilibria in Bimatrix Games

We explore the power of semidefinite programming (SDP) for finding additive ε-approximate Nash equilibria in bimatrix games. We introduce an SDP relaxation for a quadratic programming formulation of the Nash equilibrium (NE) problem and provide a number of valid inequalities to improve the quality of the relaxation. If a rank-1 solution to this SDP is found, then an exact NE can be recovered. We show that for a strictly competitive game, our SDP is guaranteed to return a rank-1 solution. We propose two algorithms based on iterative linearization of smooth nonconvex objective functions whose global minima by design coincide with rank-1 solutions. Empirically, we demonstrate that these algorithms often recover solutions of rank at most two and ε close to zero. Furthermore, we prove that if a rank-2 solution to our SDP is found, then a 5/11-NE can be recovered for any game, or a 1/3-NE for a symmetric game. We then show how our SDP approach can address two (NP-hard) problems of economic interest: finding the maximum welfare achievable under any NE, and testing whether there exists an NE in which a particular set of strategies is not played. Finally, we show the connection between our SDP and the first level of the Lasserre/sum-of-squares hierarchy.
Low Rank Tensor Methods in High Dimensional Data Analysis

Large amounts of multidimensional data in the form of multilinear arrays, or tensors, arise routinely in modern applications from such diverse fields as chemometrics, genomics, physics, psychology, and signal processing, among many others. At the moment, our ability to generate and acquire such data has far outpaced our ability to effectively extract useful information from it. There is a clear demand to develop novel statistical methods, efficient computational algorithms, and fundamental mathematical theory to analyze and exploit information in these types of data. In this talk, I will review some of the recent progress and discuss some of the present challenges. |
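The additive ε-approximate Nash equilibrium used in the SDP abstract above has a direct computational meaning: neither player can gain more than ε by deviating to any pure strategy. A minimal sketch (illustrative only; matching pennies is a hypothetical example game, not taken from the talk):

```python
def nash_eps(Ra, Rb, x, y):
    """Smallest eps for which mixed strategies x (row player) and y
    (column player) form an additive eps-approximate NE of the
    bimatrix game with payoff matrices (Ra, Rb)."""
    m, n = len(Ra), len(Ra[0])
    # expected payoffs under (x, y)
    pay_a = sum(x[i] * Ra[i][j] * y[j] for i in range(m) for j in range(n))
    pay_b = sum(x[i] * Rb[i][j] * y[j] for i in range(m) for j in range(n))
    # best pure-strategy deviations
    best_a = max(sum(Ra[i][j] * y[j] for j in range(n)) for i in range(m))
    best_b = max(sum(x[i] * Rb[i][j] for i in range(m)) for j in range(n))
    return max(best_a - pay_a, best_b - pay_b, 0.0)

# Matching pennies: strictly competitive; its unique NE is uniform mixing.
Ra = [[1, -1], [-1, 1]]
Rb = [[-1, 1], [1, -1]]
eps = nash_eps(Ra, Rb, [0.5, 0.5], [0.5, 0.5])
print(eps)   # 0.0 at the exact equilibrium
```

Deviating from the equilibrium makes ε positive: playing the first row deterministically, `nash_eps(Ra, Rb, [1.0, 0.0], [0.5, 0.5])` returns 1.0, since the column player can then gain 1 by switching.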
| 3:00pm - 5:00pm | MS187, part 2: Signature tensors of paths |
| Unitobler, F023 | |
|
|
3:00pm - 5:00pm
Signature tensors of paths

Given a path X in R^n, it is possible to naturally associate an infinite list of tensors, called the iterated-integral signature of X. These tensors were introduced in the 1950s by Kuo-Tsai Chen, who proved that every (smooth enough) path is uniquely determined by its signature. Over the years this topic became central in control theory, stochastic analysis and, lately, in time series analysis. In applications the following inverse problem appears: given a finite collection of tensors, can we find a path that yields them as its signature? One usually introduces additional requirements, like minimal length, or a parameterized class of functions (say, piecewise linear). It then becomes crucial to know when there are only finitely many paths having a given signature that satisfy the constraints. This problem, called identifiability, can be tackled with an algebraic-geometric approach. On the other hand, by fixing a class of paths (polynomial, piecewise linear, lattice paths, ...), one can look at the variety carved out by the signatures of those paths inside the tensor algebra. Besides identifiability, the geometry of these signature varieties can give a lot of information on paths of that class. One important class is that of rough paths. Apart from applications to stochastic analysis, its signature variety has a strong geometric significance and exhibits surprising similarities with the classical Veronese variety. In time series analysis, it is often necessary to extract features that are invariant under some group action on the ambient space. The iterated-integral signature of signals is a general way of extracting features; one can think of it as a kind of nonlinear Fourier transform. Understanding its invariant elements relates to classical invariant theory but poses new algebraic questions owing to the particularities of iterated integrals. Recent developments in these aspects will be explored in this minisymposium.
(25 minutes for each presentation, including questions, followed by a 5-minute break; in case of x<4 talks, the first x slots are used unless indicated otherwise)

Invariants of the iterated-integral signature

Recently the iterated-integral signature, known from stochastic analysis, has found applications in statistics and machine learning as a method for extracting features of time series. In many situations, there is a group acting on the data that one wants to "mod out". One example is accelerometer data coming from a mobile phone: the orientation of the phone in a user's pocket is unknown, so one usually tries to calculate features that are invariant under the action of SO(3). I describe how such invariant features can be found in the signature. This is joint work with Jeremy Reizenstein (University of Warwick).

The areas of areas problem

When introducing the iterated-integral signature of a path, we often give the following fact as an illuminating example: the information that level 2 adds beyond the total displacement given by level 1 is the "signed area" of each two-dimensional projection of the path. Given any two one-dimensional paths on the same interval, we can construct another as the cumulative signed area. It is natural to ask about all the paths we can get by starting with all projections of a path and iteratively taking signed areas, and in particular about the collection of their final values, which we call the areas-of-areas. What signature elements do they correspond to? Do they contain the same information as the signature? (Yes.) How might we find a minimal subset of them that determines the signature?

Persistence paths and signature features in topological data analysis

Persistent homology is a tool used to analyse topological features of data. In this talk, I will describe a new feature map for barcodes that arise in persistent homology computation.
The main idea is to first realize each barcode as a path in a convenient vector space, and to then compute its path signature, which takes values in the tensor algebra of that vector space. The composition of these two operations (barcode to path, path to tensor series) results in a feature map that has several desirable properties for statistical learning, such as universality and characteristicness, and achieves high performance on several classification benchmarks.

Character groups of Hopf algebras and their applications

Character groups of Hopf algebras arise naturally in a variety of applications. For example, they appear in numerical analysis, control theory and the theory of rough paths and stochastic analysis. In the talk we will review the geometry and some main examples of these (infinite-dimensional) groups. Then we will report on some recent progress for character groups associated to so-called combinatorial Hopf algebras. In the combinatorial setting, certain subgroups of the character group are closely connected to locally convergent Taylor-series-like expansions, which are of interest in the applications mentioned above. |
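The signed-area interpretation of level 2 of the signature, mentioned in the abstracts above, can be illustrated directly. A minimal sketch (assuming piecewise-linear paths in R^2 and using Chen's concatenation relation; not any speaker's code):

```python
# Levels 1 and 2 of the iterated-integral signature of a piecewise-linear
# path in R^2, via Chen's relation: the level-2 term of a concatenation is
# the sum of the pieces' level-2 terms plus the product of level-1 terms.
def signature_2(points):
    """Return (S1, S2): level-1 increments S1[i] and level-2 iterated
    integrals S2[i][j] of the polyline through the given points."""
    S1 = [0.0, 0.0]
    S2 = [[0.0, 0.0], [0.0, 0.0]]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        d = (x1 - x0, y1 - y0)
        for i in range(2):
            for j in range(2):
                # Chen: old S2 + (old S1 tensor d) + segment's own level 2
                S2[i][j] += S1[i] * d[j] + 0.5 * d[i] * d[j]
        S1[0] += d[0]
        S1[1] += d[1]
    return S1, S2

# Unit square traversed counterclockwise, back to the start.
square = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]
S1, S2 = signature_2(square)
signed_area = 0.5 * (S2[0][1] - S2[1][0])   # antisymmetric part = Levy area
print(S1, signed_area)   # increments are 0; enclosed signed area is 1.0
```

The symmetric part carries no new information at level 2 (the shuffle identity gives S2[i][i] = S1[i]^2 / 2), which is exactly why the signed area is the interesting remainder.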
| 3:00pm - 5:00pm | MS183, part 1: Polyhedral geometry methods for biochemical reaction networks |
| Unitobler, F-105 | |
|
|
3:00pm - 5:00pm
Polyhedral geometry methods for biochemical reaction networks

This minisymposium focuses on geometric objects arising in the study of parametrized polynomial ODEs given by biochemical reaction networks. In particular, we consider recent work that employs techniques from convex, polyhedral, and tropical geometry in order to extract properties of interest from the ODE system and to relate them to the choice of parameter values. Specific problems covered in the minisymposium include the analysis of forward-invariant regions of the ODE system, the determination of parameter regions for multistationarity or oscillations, the performance of model reduction close to metastable regimes, and the characterization of unique existence of equilibria using oriented matroids. (25 minutes for each presentation, including questions, followed by a 5-minute break; in case of x<4 talks, the first x slots are used unless indicated otherwise)

Endotactic Networks and Toric Differential Inclusions

An important dynamical property of biological interaction network models is persistence, which intuitively means that “no species goes extinct”. It has been conjectured that weakly reversible networks are persistent. The property of persistence is related to yet another conjecture called the Global Attractor Conjecture. Recently, Craciun has proposed a proof of the Global Attractor Conjecture. An important step in this proof is the embedding of weakly reversible dynamical systems into toric differential inclusions. We show that dynamical systems generated by the larger class of endotactic networks can be embedded into toric differential inclusions.

Approximating Convex Hulls of Curves by Polytopes

We study the convex hulls of trajectories of polynomial dynamical systems. Such trajectories also include real algebraic curves. The boundaries of the resulting convex bodies are stratified into families of faces. We approximate these convex hulls by a family of polytopes.
We present numerical algorithms to identify the patches of the convex hull by classifying the facets of the polytope. An implementation based on the software Bensolve Tools is given. This is based on joint work with Daniel Ciripoi, Andreas Löhne and Bernd Sturmfels.

Multistationarity conditions in a network motif describing ERK activation

ERK is an important signaling molecule that is activated by phosphorylation at two binding sites. In theory, phosphorylation is either distributive or processive. It has been shown, however, that ERK phosphorylation is neither purely distributive nor purely processive but rather a mixture of both. While purely distributive processes are known to be multistationary, processive ones are not. We study a network incorporating both mechanisms. By varying certain rate constants, the contribution of the distributive mechanism can be controlled. As rate constants are hard to determine experimentally, this network gives rise to a parametrized family of polynomials. In this context multistationarity refers to the existence of rate constants such that the polynomials have at least two positive solutions. Multistationarity is considered an important feature of this network, and we want to understand the contribution of the distributive mechanism to the occurrence of multistationarity. The corresponding variety admits a monomial parameterization, and the family belongs to the class of systems described by Feliu, Mincheva and Wiuf. Thus multistationarity can be decided by studying the sign of the determinant of the Jacobian evaluated at this parameterization. We establish multistationarity and study whether it persists as the contribution from the distributive mechanism goes to zero.

Oscillations in a mixed phosphorylation mechanism

We will discuss the existence of oscillations in a phosphorylation mechanism where the phosphorylation is processive and the dephosphorylation is distributive.
We show that in the three-dimensional space of total amounts, the border between systems with a stable versus unstable steady state is a surface that consists of points of Hopf bifurcations. The emergence of oscillations via a Hopf bifurcation is enabled by the catalytic and association constants of the distributive part of the mechanism: if these rate constants satisfy two inequalities, then the system admits a Hopf bifurcation.
This is joint work with C. Conradi and A. Shiu. |
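Multistationarity as used above means that the steady-state equations admit at least two positive solutions for some choice of rate constants. A toy illustration (the cubic below is hypothetical and unrelated to the ERK network): locate the positive roots of a univariate steady-state polynomial whose coefficient signs (+, -, +, -) permit up to three positive roots by Descartes' rule.

```python
def p(x):
    # hypothetical steady-state polynomial; roots 1, 2, 3
    return x**3 - 6*x**2 + 11*x - 6

def positive_roots(f, hi=10.0, steps=10000, tol=1e-10):
    """Locate positive roots of f on (0, hi] by scanning a grid for
    sign changes and refining each by bisection."""
    roots = []
    prev_x, prev_f = 1e-12, f(1e-12)
    for k in range(1, steps + 1):
        x = hi * k / steps
        fx = f(x)
        if prev_f == 0:
            roots.append(prev_x)           # root landed on a grid point
        elif prev_f * fx < 0:
            a, b = prev_x, x
            while b - a > tol:             # bisection
                m = (a + b) / 2
                if f(a) * f(m) <= 0:
                    b = m
                else:
                    a = m
            roots.append((a + b) / 2)
        prev_x, prev_f = x, fx
    return roots

roots = positive_roots(p)
print(len(roots), [round(r, 6) for r in roots])   # three positive steady states
```

The algebraic methods in the session decide the existence of such multiple positive solutions symbolically, in terms of the rate constants, rather than numerically for one fixed parameter choice.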
| 3:00pm - 5:00pm | MS154, part 2: New developments in matroid theory |
| Unitobler, F-106 | |
|
|
3:00pm - 5:00pm
New developments in matroid theory

The interactions between Matroid Theory, Algebra, Geometry, and Topology have long been deep and fruitful. Pertinent examples of such interactions include breakthrough results such as the g-Theorem of Billera, Lee and Stanley (1979); the proof that complements of finite complex reflection arrangements are aspherical by Bessis (2014); and, very recently, the proof of Rota's log-concavity conjecture by Adiprasito, Huh, and Katz (2015). This minisymposium will focus on exciting new developments in Matroid Theory, such as the role played by Bergman fans in tropical geometry, several results on matroids over a commutative ring and over a hyperfield, and recent advances concerning valuated matroids and toric arrangements. We plan to bring together researchers with diverse expertise, mostly from Europe but also from the US and Japan, and to include a number of postdocs and junior mathematicians. (25 minutes for each presentation, including questions, followed by a 5-minute break; in case of x<4 talks, the first x slots are used unless indicated otherwise)

Cohomology rings of projective models of toric arrangements

I will describe, by providing generators, relations and examples, the cohomology rings of projective models of toric arrangements (joint work with Corrado De Concini).

Arithmetic matroids, posets and cohomology of toric arrangements

Matroids are cryptomorphic to geometric lattices, and from an oriented matroid one can build an Orlik-Solomon algebra. This algebra is the cohomology algebra of the complement of an arrangement of hyperplanes or of pseudospheres; hence the Tutte polynomial specializes to the Poincaré polynomial of the complement. Recent works introduced arithmetic matroids and studied their relations with the cohomology algebra of toric arrangements. We will discuss the relation between arithmetic matroids and posets of layers of toric arrangements.
This study leads to a construction, from the poset of layers, of a "toric Orlik-Solomon algebra" isomorphic to the cohomology algebra of the complement of the toric arrangement. Indeed, the Poincaré polynomial of the complement is a specialization of the arithmetic Tutte polynomial.

Categories of matroids, Hopf algebras, and Hall algebras

In their recent paper, Baker and Bowler introduced the notion of matroids over partial hyperstructures, which unifies various generalizations (including oriented, valuated, and phase matroids). One can generalize the notion of minors and direct sums (of matroids) to the case of matroids over partial hyperstructures. In particular, this allows one to generalize the matroid-minor Hopf algebra to this setup. We then investigate the category of (ordinary) matroids, showing that the matroid-minor Hopf algebra is dual to the Hall algebra associated to the category of matroids. This is joint work with Chris Eppolito and Matt Szczesny. |
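The specialization from the Tutte polynomial to the Poincaré polynomial of an arrangement complement, mentioned in the abstract above, can be checked on a small example. A sketch (illustrative, for the graphic matroid of a triangle, i.e. the rank-2 braid arrangement): the characteristic polynomial is chi(t) = (-1)^r T(1-t, 0) and the Poincaré polynomial of the complement is (-t)^r chi(-1/t).

```python
# Tutte polynomial of a multigraph by deletion-contraction, evaluated
# numerically at a point (x, y).
def find(parent, a):
    while parent[a] != a:
        parent[a] = parent[parent[a]]
        a = parent[a]
    return a

def reachable(edges, u, v):
    """Are u and v connected in the graph given by edges? (union-find)"""
    verts = {u, v}
    for a, b in edges:
        verts |= {a, b}
    parent = {w: w for w in verts}
    for a, b in edges:
        parent[find(parent, a)] = find(parent, b)
    return find(parent, u) == find(parent, v)

def contract(edges, u, v):
    """Identify vertex v with u in the remaining edge list."""
    return [(u if a == v else a, u if b == v else b) for a, b in edges]

def tutte(edges, x, y):
    if not edges:
        return 1
    (u, v), rest = edges[0], edges[1:]
    if u == v:                                   # loop
        return y * tutte(rest, x, y)
    if not reachable(rest, u, v):                # bridge
        return x * tutte(contract(rest, u, v), x, y)
    return tutte(rest, x, y) + tutte(contract(rest, u, v), x, y)

triangle = [(0, 1), (1, 2), (0, 2)]   # graphic matroid U_{2,3}, rank 2
r = 2
chi = lambda t: (-1) ** r * tutte(triangle, 1 - t, 0)
poincare = lambda t: (-t) ** r * chi(-1 / t)
print(chi(4), poincare(1))   # chi(t) = (t-1)(t-2), so chi(4) = 6
```

Here T(x, y) = x^2 + x + y, chi(t) = t^2 - 3t + 2, and the Poincaré polynomial (1 + t)(1 + 2t) evaluates to 6 at t = 1, matching the number of chambers-style count T(2, 2) = 8 only up to the usual different specializations.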
| 3:00pm - 5:00pm | MS168, part 2: Riemann Surfaces |
| Unitobler, F-107 | |
|
|
3:00pm - 5:00pm
Riemann Surfaces

In the past decades, the central role played by Riemann surfaces in pure mathematics has been strengthened by their surprising appearance in string theory, cryptography and materials science. This minisymposium is intended for curve theorists and avant-garde applied mathematicians. Our emphasis will be on the computational aspects of Riemann surfaces that are prominent in pure mathematics but are not yet part of the canon of applied mathematics. Some of the subjects that will be touched upon by our speakers are integrable systems, Teichmüller curves, Arakelov geometry, tropical geometry, arithmetic geometry and the cryptography of curves. (25 minutes for each presentation, including questions, followed by a 5-minute break; in case of x<4 talks, the first x slots are used unless indicated otherwise)

Computing endomorphism rings of Jacobians

Let C be a curve over a number field, with Jacobian J, and let End(J) be the endomorphism ring of J. The ring End(J) is typically isomorphic to Z, but the cases where it is larger are interesting for many reasons, most of all because the corresponding curves can then often be matched with relatively simple modular forms. We give a provably correct algorithm to verify the existence of additional endomorphisms on a Jacobian, which to our knowledge is the first such algorithm. Conversely, we also describe how to get upper bounds on the rank of End(J). Together, these methods make it possible to completely and explicitly determine the endomorphism ring End(J) starting from an equation for C, with acceptable running time when the genus of C is small. This is joint work with Edgar Costa, Nicolas Mascot, and John Voight.

Inverse Jacobian problem for cyclic plane quintic curves

We consider the problem of computing the equation of a curve with given analytic Jacobian, that is, with a certain period matrix.
In the case of genus one, this can be done by using the classical Weierstrass function, and it is a key step if one wants to write down equations of elliptic curves with complex multiplication (CM). Also in higher genus, the theory of CM gives us all period matrices of principally polarized abelian varieties with CM, among which are the periods of the curves whose Jacobian has CM, and computing curve equations is the hardest part. Beyond the classical case of elliptic curves, efficient solutions to this problem are now known for both genus 2 and genus 3. In this talk I will give a method that deals with the case y^5 = a_5 x^5 + ... + a_1 x + a_0, inspired by some of the ideas present in the method for the genus-3 family of Picard curves y^3 = x(x-1)(x-λ)(x-μ).

Teichmüller curves, Kobayashi geodesics and Hilbert modular forms

Teichmüller curves are totally geodesic curves inside the moduli space of Riemann surfaces. By results of Möller, they can always be seen as Kobayashi geodesics inside a Hilbert modular variety parametrising abelian varieties with real multiplication. Our main objective is to cut Teichmüller curves out as the vanishing locus of a Hilbert modular form in order to calculate their Euler characteristics. The building blocks of these modular forms turn out to be certain theta functions and their derivatives, which can be made very precise.

Counting special points on Teichmüller curves

A flat surface is a Riemann surface together with the choice of a non-zero holomorphic differential. The moduli space of flat surfaces admits a natural SL2(R) action, and the closed orbits are Teichmüller curves in the moduli space of Riemann surfaces. While much of the original motivation stems from dynamical systems, the known examples of families of such Teichmüller curves carry a surprising amount of arithmetic information.
This permits explicit formulas for the genus, the number of cusps and the number and types of orbifold points as well as, in many cases, precise asymptotic behavior of these numbers. |
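Several abstracts in this session concern explicit period computations. In genus one these reduce to complete elliptic integrals, which can be evaluated to high precision with Gauss's arithmetic-geometric mean. A minimal sketch using standard identities (not any speaker's implementation):

```python
import math

def agm(a, b, tol=1e-15):
    """Arithmetic-geometric mean of a and b (quadratic convergence)."""
    while abs(a - b) > tol:
        a, b = (a + b) / 2, math.sqrt(a * b)
    return a

def K(k):
    """Complete elliptic integral of the first kind,
    K(k) = int_0^{pi/2} dt / sqrt(1 - k^2 sin^2 t),
    via Gauss: K(k) = pi / (2 * AGM(1, sqrt(1 - k^2)))."""
    return math.pi / (2 * agm(1.0, math.sqrt(1.0 - k * k)))

# Sanity checks: K(0) = pi/2, and the real period of y^2 = x^3 - x
# (the lemniscatic curve, CM by i) is 2*sqrt(2)*K(1/sqrt(2)).
print(K(0.0), 2 * math.sqrt(2) * K(1 / math.sqrt(2)))
```

The second printed value is twice the lemniscate constant, about 5.2441151; the same AGM idea underlies fast period-matrix computations in higher genus.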
| 3:00pm - 5:00pm | MS175, part 1: Algebraic geometry and combinatorics of jammed structures |
| Unitobler, F-111 | |
|
|
3:00pm - 5:00pm
Algebraic geometry and combinatorics of jammed structures

The minisymposium will combine the classical rigidity theory of linkages in discrete and computational geometry with the theory of circle packings and patterns on surfaces that arose from the study of 2- and 3-manifolds in geometry and topology. The aim is to facilitate interaction between these two areas. The classical theory of rigidity goes back to work by Euler and Cauchy on triangulated Euclidean polyhedra. The general area is concerned with the problem of determining the nature of the configuration space of geometric objects. In the modern theory the objects are geometric graphs (bar-joint structures), and a graph is rigid if its configuration space is finite (up to isometries). More generally one can consider tensegrity structures, where distance constraints between points can be replaced by inequality constraints. The theory of (circle, disk and sphere) packings is vast and well known, with numerous practical applications. Of particular relevance here are conditions that result in a packing being non-deformable (jammed), as well as recent work on inversive distance packings. These inversive distance circle packings generalise the much-studied tangency and overlapping packings by allowing "adjacent" circles to be disjoint, controlled by an inversive distance parameter that measures the separation of the circles. The potential for overlap between these areas can be seen by modelling a packing of disks in the plane by a tensegrity structure: each disk is replaced by a point at its centre, and the constraint that the disks cannot overlap becomes the constraint that the points cannot get closer together.
(25 minutes for each presentation, including questions, followed by a 5-minute break; in case of x<4 talks, the first x slots are used unless indicated otherwise)

Flexibility of graphs on the sphere: the case of K_{3,3}

We present a study of necessary conditions for the edge lengths of minimally rigid graphs (Laman graphs) that make them mobile on the sphere. This is made possible by interpreting realizations of a graph on the sphere as elements of the moduli space of rational stable curves with marked points. By analyzing how curves of realizations intersect the boundary of this moduli space, we obtain a combinatorial characterization, in terms of colorings, for the existence of edge lengths that allow flexibility. We then give a classification of the possible motions on the sphere of the bipartite graph with 3+3 vertices for which no two vertices coincide. This is joint work with Georg Grasegger, Jan Legerský, and Josef Schicho.

Algebraic Geometry for Counting Realizations of Minimally Rigid Graphs

Minimally rigid graphs (Laman graphs) have only finitely many realizations in the Euclidean plane, up to rotations and translations. It is known that the same graphs are also minimally rigid on the sphere. In this talk we present recent algorithms for counting the complex realizations both in the plane and on the sphere. Starting from systems of polynomial equations, we use two different approaches from algebraic geometry to prove the correctness of the algorithms. The necessary computations are, however, purely combinatorial and can be phrased in terms of graphs.

Pairing symmetry groups for spherical and Euclidean frameworks

In this talk we will discuss the effect of symmetry on the infinitesimal rigidity of spherical frameworks and of Euclidean bar-joint and point-hyperplane frameworks in general dimension.
In particular we show that, under forced or incidental symmetry, infinitesimal rigidity is equivalent for bar-joint frameworks with a set X of vertices collinear, for spherical frameworks with the vertices in X on the equator, and for point-hyperplane frameworks with the vertices in X representing hyperplanes. We then show, again under forced or incidental symmetry, that infinitesimal rigidity properties under certain symmetry groups can be paired, or clustered, under inversion on the sphere, so that infinitesimal rigidity with a given group is equivalent to infinitesimal rigidity under a paired group. The fundamental example is that mirror-symmetric rigidity is equivalent to half-turn-symmetric rigidity on the 2-sphere. With these results in hand we deduce some combinatorial consequences for the rigidity of both spherical and Euclidean frameworks. This is joint work with Katie Clinch, Anthony Nixon and Walter Whiteley. |
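Infinitesimal rigidity, as discussed throughout this session, is concretely a rank condition on the rigidity matrix. A minimal sketch for planar bar-joint frameworks (illustrative only; the talks treat far more general settings, including symmetry and the sphere):

```python
# A planar bar-joint framework on n >= 2 vertices is infinitesimally rigid
# iff its rigidity matrix has rank 2n - 3 (the trivial motions being two
# translations and one rotation).
def rigidity_matrix(points, edges):
    n = len(points)
    rows = []
    for (i, j) in edges:
        row = [0.0] * (2 * n)
        dx = points[i][0] - points[j][0]
        dy = points[i][1] - points[j][1]
        row[2*i], row[2*i+1] = dx, dy
        row[2*j], row[2*j+1] = -dx, -dy
        rows.append(row)
    return rows

def rank(mat, tol=1e-9):
    """Rank by Gauss-Jordan elimination with a numerical tolerance."""
    mat = [row[:] for row in mat]
    r = 0
    for col in range(len(mat[0]) if mat else 0):
        piv = next((i for i in range(r, len(mat)) if abs(mat[i][col]) > tol), None)
        if piv is None:
            continue
        mat[r], mat[piv] = mat[piv], mat[r]
        for i in range(len(mat)):
            if i != r and abs(mat[i][col]) > tol:
                f = mat[i][col] / mat[r][col]
                mat[i] = [a - f * b for a, b in zip(mat[i], mat[r])]
        r += 1
    return r

def infinitesimally_rigid(points, edges):
    return rank(rigidity_matrix(points, edges)) == 2 * len(points) - 3

triangle = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]
square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
print(infinitesimally_rigid(triangle, [(0, 1), (1, 2), (0, 2)]),      # True
      infinitesimally_rigid(square, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # False
```

The triangle is minimally rigid (a Laman graph), while the 4-cycle has only 4 bars against the required 2·4 - 3 = 5 independent constraints, so it flexes.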
| 3:00pm - 5:00pm | MS178: Geometric design for fabrication |
| Unitobler, F-112 | |
|
|
3:00pm - 5:00pm
Geometric design for fabrication

Geometric modeling in the early design phase typically consists of pure shape design with little or no consideration of material properties, functionality and fabrication. The separation of geometry from engineering and manufacturing results in a costly product development process with multiple feedback loops. This minisymposium presents recent research on computational design tools which respect material properties and constraints imposed by function and fabrication. To achieve high performance, the additional constraints are closely tied to an adapted geometric representation or even formulated in terms of geometry. (25 minutes for each presentation, including questions, followed by a 5-minute break; in case of x<4 talks, the first x slots are used unless indicated otherwise)

Geometric modeling of flank CNC machining

Geometric modeling is very closely related to manufacturing in situations when objects modeled in the digital realm are subsequently manufactured. The leading manufacturing technology is Computer Numerically Controlled (CNC) machining, and this talk will focus on the finishing stage called flank milling. At this stage of machining, an accuracy of a few micrometers is needed for objects tens of centimeters in size, and therefore the path-planning algorithms have to be carefully designed to respect the synergy between the geometry of the milling tool and the input object. I will discuss two recent projects that look for the best initialization of a conical milling tool and a sequential path-planning algorithm. Finally, I will discuss future research directions towards machining with custom-shaped milling tools.

Modeling developable surfaces through orthogonal geodesics

We present a discrete theory for modeling developable surfaces through quadrilateral meshes satisfying simple angle constraints, termed discrete orthogonal geodesic nets (DOGs).
Our model is simple, local, and, unlike previous works, it does not directly encode the surface rulings. We prove and experimentally demonstrate strong ties to smooth developable surfaces, including a set of convergence theorems. We show that the constrained shape space of DOGs is locally a manifold of a fixed dimension, apart from a set of singularities, implying that DOGs are generally continuously deformable. Smooth flows can then be constructed by a smooth choice of vectors on the manifold's tangent spaces, selected to minimize a desired objective function under a given metric. We show how to compute such vectors, and we use our findings to devise a geometrically meaningful way to handle singular points. We base our shape space metric on a novel DOG Laplacian operator, which is proved to converge under sampling of an analytical orthogonal geodesic net. We apply the developed tools in an editing system for developable surfaces that supports arbitrary bending, stretching, cutting, (curved) folds, as well as smoothing and subdivision operations.

Developability of triangle meshes

Developable surfaces can be fabricated by smoothly bending flat pieces of material without stretching or shearing. This enables a variety of fabrication methods, such as fabrication from flat material or 5-axis CNC milling. We introduce a discrete definition of developability for triangle meshes which exactly captures two key properties of smooth developable surfaces, namely flattenability and the presence of straight ruling lines, and we show the importance of both of these properties. This definition provides a starting point for algorithms in developable surface modeling: we consider a variational approach that drives a given mesh toward developable pieces separated by regular seam curves. Computation amounts to gradient-based optimization of an energy with support in the vertex star, without the need to explicitly cluster patches or identify seams.
We also explore applications of this energy to developable design and manufacturing.

Statics-aware design of freeform architecture

The design of 3D structures for architecture is not only geometric but also involves financial, legal and statics considerations. It would be very valuable if design tools could incorporate some of these aspects already at an early stage of design, in an interactive manner. In this presentation we show examples of how statics, both as a constraint and as an optimization target, can feature in the design of wide-span lightweight structures. We discuss a discretization of the Airy stress potential and its connection to self-supporting surfaces and weight optimization. |
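A necessary condition behind the flattenability discussed in the developability abstracts above is zero angular defect at interior vertices: the triangle angles around the vertex must sum to 2π. A toy sketch (this checks only the angle-defect condition on a single triangle fan, not the full discrete definitions used in the talks):

```python
import math

def angle(p, a, b):
    """Interior angle at p in the 3D triangle (p, a, b)."""
    ux, uy, uz = (a[i] - p[i] for i in range(3))
    vx, vy, vz = (b[i] - p[i] for i in range(3))
    dot = ux * vx + uy * vy + uz * vz
    nu = math.sqrt(ux*ux + uy*uy + uz*uz)
    nv = math.sqrt(vx*vx + vy*vy + vz*vz)
    return math.acos(max(-1.0, min(1.0, dot / (nu * nv))))

def angle_defect(vertex, ring):
    """2*pi minus the angle sum of the triangle fan around an interior
    vertex; zero defect is necessary for local flattenability."""
    total = sum(angle(vertex, ring[k], ring[(k + 1) % len(ring)])
                for k in range(len(ring)))
    return 2 * math.pi - total

# Regular hexagonal ring of neighbors in the plane z = 0.
flat_ring = [(math.cos(t), math.sin(t), 0.0)
             for t in (2 * math.pi * k / 6 for k in range(6))]
print(abs(angle_defect((0.0, 0.0, 0.0), flat_ring)) < 1e-9,   # flat: defect 0
      angle_defect((0.0, 0.0, 0.5), flat_ring) > 0)           # cone point: > 0
```

Lifting the center vertex off the plane creates a cone point with positive defect, which cannot be unrolled into the plane without tearing; the talks' definitions additionally demand straight rulings, which angle defect alone does not capture.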
| 3:00pm - 5:00pm | MS184, part 2: Algebraic geometry for kinematics, mechanism science, and rigidity |
| Unitobler, F-113 | |
|
|
3:00pm - 5:00pm
Algebraic geometry for kinematics, mechanism science, and rigidity

Mathematicians became interested in problems concerning the mobility and rigidity of mechanisms as soon as the study of the subject began. Algebraists and geometers among them, notably Clifford and Study, developed tools still used today to investigate pertinent questions in the field. Recent renewed interest in techniques of algebraic geometry applied to kinematics and rigidity has led to a modern classification of mechanisms, the discovery of new families, the development of algorithms for path planning, and an overall better understanding of rigid structures and configurations. A wide variety of techniques has been used in this regard, and it is reasonable to expect that further influence of algebraic geometry upon kinematics and rigidity will produce deeper understanding leading to useful advances in technology. We will focus on topics in algebraic geometry motivated by kinematics and rigidity, or algebraic geometry methodology with potential application in kinematics and rigidity. (25 minutes for each presentation, including questions, followed by a 5-minute break; in case of x<4 talks, the first x slots are used unless indicated otherwise)

Bond theory and linkages with joints of helical type

Linkages are rigid bodies assembled together by mechanical joints that allow for movement between the bodies when there is no physical constraint between them. These are arranged in 3-dimensional Euclidean space forming a closed loop. When the number of joints is not high enough, linkages are in general not mobile and are called overconstrained. However, mobile overconstrained linkages do exist and usually present very special geometric arrangements. A very recent algebraic tool used to retrieve what these arrangements might be is called bond theory, and it has been applied in the quest for understanding and classifying such linkages. 
In this talk we explore how one can deal with the particular class of linkages containing helical joints from an algebraic point of view.

Polygon spaces and other compactifications of M_{0,n}: Chow ring, \psi-classes and intersection numbers

The moduli space M_{0,n} of n-punctured rational curves and its compactifications form a classical subject, bringing together algebraic geometry, combinatorics, and topological robotics. Recently, D. I. Smyth classified all modular compactifications of M_{0,n}. In particular, an Alexander self-dual complex gives rise to a compactification of M_{0,n}, called an ASD compactification. ASD compactifications include (but are not exhausted by) the polygon spaces, i.e. the moduli spaces of flexible polygons. We make use of an interplay between different compactifications: we describe the Chow rings of the ASD compactifications, compute the associated Kontsevich psi-classes and their top monomials, and give a recurrence relation for the top monomials. Oversimplifying, the main approach is as follows. Some (but not all) ASD compactifications are the well-studied polygon spaces. A polygon space corresponds to a threshold Alexander self-dual complex. Its cohomology ring (which equals the Chow ring) is known due to J.-C. Hausmann and A. Knutson, and A. Klyachko. We shall use a computation-friendly presentation of the ring. Due to Smyth, all the modular compactifications correspond to preASD complexes, that is, to those complexes that are contained in an ASD complex. Removal of a facet of a preASD complex amounts to a blow-up of the associated compactification. Each ASD compactification is achievable from a threshold ASD compactification by a sequence of blow-ups and blow-downs. Since the changes in the Chow ring are controllable, one can start with a polygon space and then (by elementary steps) reach any of the ASD compactifications and describe its Chow ring. M. 
Kontsevich's psi-classes arise here in a standard way. Their computation is a mere modification of the Chern number count for the tangent bundle over S^2 (a classical exercise in a topology course). The recursion and the top monomial counts follow.

Distinguishing metal-organic frameworks

Metal-organic frameworks are nanoporous crystalline materials that consist of metal centres connected by organic linkers. We consider two metal-organic frameworks to be identical if they share the same bond network respecting the atom types. An algorithm is presented that decides whether two metal-organic frameworks are the same. It is based on distinguishing structures by comparing a set of invariants obtained from the bond network. We demonstrate our algorithm by analyzing the CoRE MOF database of DFT-optimized structures with DDEC partial atomic charges using the program package ToposPro. This work is joint with Zhenia Alexandrov, Davide Proserpio, and Berend Smit.

Degree Reduction of Rational Motions

A rational motion can be represented by a polynomial in one indeterminate with coefficients in SE(3). In the matrix model of SE(3), the degree of the trajectories and the degree of the motion itself coincide. This is not the case for the dual quaternion model. A rational motion represented by a polynomial p in DH[t] has in general trajectories of degree 2 deg(p). However, polynomials where the degree of the trajectories is less than 2 deg(p) exist. In this case we speak of degree reduction. A necessary condition for degree reduction is the existence of real polynomial factors in the primal part of p. In general, each such factor decreases the trajectory degree by the amount of its own degree. There are motions with trajectories of even lower degree. We call this phenomenon exceptional degree reduction. An example of such a motion is the Darboux motion, where deg p = 3 and the primal part of p has a real polynomial factor of degree 2, but the degree of the trajectories is only 2. 
The Darboux motion also exhibits the rather strange property that the inverse motion, given by the conjugate polynomial \bar{p}, has trajectory degree 4. Exceptional degree reduction can be explained in terms of one family of rulings on a certain quadric in the kinematic image space, a geometric entity which is not invariant with respect to conjugation. Moreover, our considerations yield a method to systematically construct rational motions with exceptional degree reduction. So far, the Darboux motion and its planar version were the only examples known to us. Further, we give a condition for rational motions to have a representation of lower degree in the extended kinematic image space. |
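For readers less familiar with the dual quaternion model referenced above, the degree count can be made explicit (notation ours; this is the standard action formula from the motion polynomial literature, not taken from the abstract). Writing p = p_1 + \epsilon p_2 with primal part p_1 and dual part p_2, the motion acts on a point x (a pure quaternion) by

```latex
x \;\longmapsto\; \frac{p_1 x \bar{p}_1 + p_1\bar{p}_2 - p_2\bar{p}_1}{p_1\bar{p}_1},
```

so trajectories generically have degree 2\deg p. If p_1 = f q with f a real polynomial, then f appears squared in p_1 x \bar{p}_1 and in p_1\bar{p}_1 but only once in the remaining numerator terms, so exactly one copy of f cancels between numerator and denominator, and the trajectory degree drops by \deg f, matching the count in the abstract.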
| 3:00pm - 5:00pm | MS157, part 2: Graphical models |
| Unitobler, F-121 | |
|
|
3:00pm - 5:00pm
Graphical models

Graphical models are used to express relationships between random variables. They have numerous applications in the natural sciences as well as in machine learning and big data. This minisymposium will feature talks on several different types of graphical models, including latent tree models, max-linear models, network models, Boltzmann machines, and non-Gaussian graphical models, each of which exploits intrinsic algebraic, geometric, and combinatorial structure. (25 minutes for each presentation, including questions, followed by a 5-minute break; in case of x<4 talks, the first x slots are used unless indicated otherwise)

Interventional Markov Equivalence for Mixed Graph Models

We will discuss the problem of characterizing Markov equivalence of graphical models under general interventions. Recently, Yang et al. (2018) gave a graphical characterization of interventional Markov equivalence for DAG models that relates to the global Markov properties of DAGs. Based on this, we extend the notion of interventional Markov equivalence using global Markov properties of loopless mixed graphs and generalize their graphical characterization to ancestral graphs. On the other hand, we also extend the notion of interventional Markov equivalence via modifications of factors of distributions that are Markov to acyclic directed mixed graphs. We prove that these two generalizations coincide at their intersection, i.e., for directed ancestral graphs. This yields a graphical characterization of interventional Markov equivalence for causal models that incorporate latent confounders and selection variables, under assumptions on the intervention targets that are reasonable for biological applications. 
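The interventional characterizations discussed above generalize the classical observational criterion of Verma and Pearl: two DAGs are Markov equivalent exactly when they have the same skeleton and the same v-structures (unshielded colliders). A sketch of that baseline check (graph representation and names are ours):

```python
from itertools import combinations

def skeleton(edges):
    """Undirected version of a DAG given as a list of (parent, child) pairs."""
    return {frozenset(e) for e in edges}

def v_structures(edges):
    """Unshielded colliders a -> c <- b with a and b non-adjacent."""
    parents = {}
    for a, b in edges:
        parents.setdefault(b, set()).add(a)
    skel = skeleton(edges)
    out = set()
    for c, ps in parents.items():
        for a, b in combinations(sorted(ps), 2):
            if frozenset((a, b)) not in skel:
                out.add((frozenset((a, b)), c))
    return out

def markov_equivalent(d1, d2):
    """Verma-Pearl criterion: same skeleton and same v-structures."""
    return skeleton(d1) == skeleton(d2) and v_structures(d1) == v_structures(d2)

chain    = [("X", "Y"), ("Y", "Z")]   # X -> Y -> Z
fork     = [("Y", "X"), ("Y", "Z")]   # X <- Y -> Z
collider = [("X", "Y"), ("Z", "Y")]   # X -> Y <- Z
print(markov_equivalent(chain, fork))      # True
print(markov_equivalent(chain, collider))  # False
```

The talk's setting replaces this with equivalence under general interventions and with mixed graphs allowing latent confounders, where the criterion is substantially more involved.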
Sequential Monte Carlo-based inference in decomposable graphical models

We shall discuss a sequential Monte Carlo-based approach to approximating probability distributions defined on spaces of decomposable graphs, or, more generally, spaces of junction (clique) trees associated with such graphs. In particular, we apply a particle Gibbs version of the algorithm to Bayesian structure learning in decomposable graphical models, where the target distribution is a junction tree posterior distribution. Moreover, we use the proposed algorithm to explore certain fundamental combinatorial properties of decomposable graphs, e.g. clique size distributions. Our approach requires the design of a family of proposal kernels, so-called junction tree expanders, which expand junction trees by randomly connecting new nodes to the underlying graphs. The performance of the estimators is illustrated through a collection of numerical examples demonstrating the feasibility of the suggested approach in high-dimensional domains.

CausalKinetiX: Learning stable structures in kinetic systems

Learning kinetic systems from data is one of the core challenges in many fields. Identifying stable models is essential for the generalization capabilities of data-driven inference. We introduce a computationally efficient framework, called CausalKinetiX, that identifies structure from discrete-time, noisy observations generated from heterogeneous experiments. The algorithm assumes the existence of an underlying, invariant kinetic model. The results on both simulated and real-world examples suggest that learning the structure of kinetic systems can indeed benefit from a causal perspective. The talk is based on joint work with Niklas Pfister and Stefan Bauer; it does not require prior knowledge of causality or kinetic systems. 
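The decomposable graphs underlying the junction tree construction in the sequential Monte Carlo abstract above are exactly the chordal graphs, and chordality can be recognized by greedy simplicial elimination: a graph is chordal iff one can repeatedly delete a vertex whose neighbourhood is a clique until nothing is left. A minimal sketch (representation and names are ours):

```python
def is_decomposable(adj):
    """A graph is decomposable (chordal) iff it admits a perfect elimination
    ordering: repeatedly delete a simplicial vertex, i.e. one whose
    neighbourhood is a clique.  `adj` maps each vertex to its neighbour set."""
    adj = {v: set(ns) for v, ns in adj.items()}  # work on a copy
    while adj:
        simplicial = next(
            (v for v, ns in adj.items()
             if all(b in adj[a] for a in ns for b in ns if a != b)),
            None)
        if simplicial is None:
            return False  # a chordless cycle of length >= 4 remains
        for n in adj[simplicial]:
            adj[n].discard(simplicial)
        del adj[simplicial]
    return True

# A 4-cycle is the smallest non-decomposable graph; adding a chord,
# or taking a triangle with a pendant vertex, restores decomposability.
c4  = {1: {2, 4}, 2: {1, 3}, 3: {2, 4}, 4: {1, 3}}
tri = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
print(is_decomposable(c4))   # False
print(is_decomposable(tri))  # True
```

The junction tree expanders in the talk operate on the clique trees that this elimination process implicitly produces.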
Autoencoders memorize training images

The ability of deep neural networks to generalize well in the overparameterized regime has become a subject of significant research interest. We show that overparameterized autoencoders exhibit memorization, a form of inductive bias that constrains the functions learned through the optimization process to concentrate around the training examples, although the network could in principle represent a much larger function class. In particular, we prove that single-layer fully-connected autoencoders project data onto the (nonlinear) span of the training examples. In addition, we show that deep fully-connected autoencoders learn a map that is locally contractive at the training examples, and hence iterating the autoencoder results in convergence to the training examples. |
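The projection behaviour claimed in the last abstract can be seen in closed form in the simplest, purely linear case (the abstract's result concerns the nonlinear span; the toy data and the minimum-norm assumption here are ours):

```python
import numpy as np

# Training data: two points spanning a 2-D subspace of R^3.
X = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])

# For a linear "autoencoder" x -> x W trained to reproduce X, the
# minimum-norm solution of X W = X is W = pinv(X) @ X, which is the
# orthogonal projection onto the span of the training examples.
W = np.linalg.pinv(X) @ X

z = np.array([0.3, -0.2, 5.0])     # test point with a large off-span part
out = z @ W                        # ≈ [0.3, -0.2, 0]: the off-span part is gone

# Iterating the map changes nothing: projections are idempotent,
# and the training examples themselves are fixed points.
print(out)
print(np.allclose(out @ W, out), np.allclose(X @ W, X))
```

The fixed-point property is the linear shadow of the contractivity result for deep autoencoders: iterating the learned map converges to training examples.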
| 3:00pm - 5:00pm | MS134, part 4: Coding theory and cryptography |
| Unitobler, F-122 | |
|
|
3:00pm - 5:00pm
Coding theory and cryptography

The focus of this proposal is on coding theory and cryptography, with emphasis on the algebraic aspects of these two research fields. Error-correcting codes are mathematical objects that allow reliable communication over noisy/lossy/adversarial channels. Constructing good codes and designing efficient decoding algorithms for them often reduces to solving algebra problems, such as counting rational points on curves, solving equations, and classifying finite rings and modules. Cryptosystems can be roughly defined as functions that are easy to evaluate, but whose inverse is difficult to compute in practice. These functions are in general constructed using algebraic objects and tools, such as polynomials, algebraic varieties, and groups. The security of the resulting cryptosystem heavily relies on the mathematical properties of these objects. The sessions we propose feature experts on algebraic methods in coding theory and cryptography. All levels of experience are represented, from junior to very experienced researchers. (25 minutes for each presentation, including questions, followed by a 5-minute break; in case of x<4 talks, the first x slots are used unless indicated otherwise)

Pairing-friendly curves in cryptography

Pairings on elliptic curves are involved in signatures, NIZK proofs, and recently in blockchains (ZK-SNARKs). These pairings take as input two points on an elliptic curve E over a finite field, and output a value in an extension of that finite field. Usually, for efficiency reasons, this extension degree is a product of powers of 2 and 3 (such as 12, 18, or 24), and moreover the characteristic of the finite field has a special form. The security relies on the hardness of computing discrete logarithms in the group of points of the curve and in the finite field extension. 
In 2013-2016, new variants of the function field sieve and the number field sieve algorithms turned out to be faster in certain finite fields related to pairing-based cryptography, in particular those with very efficient arithmetic. Small-characteristic settings are now discarded. The situation for GF(p^k), where p is prime and k is small, is still quite unclear. We refine the work of Menezes-Sarkar-Singh and Barbulescu-Duquesne to estimate the cost of a hypothetical implementation of the Special Tower NFS in GF(p^k) for small k, and deduce parameter sizes for cryptographic pairings.

On a question of F.R.K. Chung and its relevance to the discrete logarithm problem in extension fields

We consider a question possibly first raised by F.R.K. Chung in 1989 regarding the representation of elements of GF(q^n) as a product of linear elements, whose bearing on the discrete logarithm problem seems not to be well-known.

Using the ring structure to solve Ring-Learning-with-Errors

Ring-Learning-with-Errors is a lattice-based hard problem proposed for post-quantum cryptography. This problem has become very popular, due to its apparent quantum-safety and its adaptability to cryptographic applications, such as homomorphic encryption. It has security reductions to more familiar lattice problems. However, Ring-Learning-with-Errors is usually built over two-power cyclotomic rings, and it is natural to ask if there are attacks on these problems based on the ring structure. I will discuss the ring-theoretic structure and how to exploit it to obtain some potential speedups over generic lattice algorithms.

MDP convolutional codes

Maximum distance profile (MDP) convolutional codes have the property that their column distances are as large as possible. It has been shown that, when transmitting over an erasure channel, these codes have an optimal recovery rate for windows of a certain length. 
Additionally, the subclass of complete MDP convolutional codes has the ability to reduce the waiting time during decoding. Hence, it is possible to develop quite efficient decoding algorithms over the erasure channel for these codes. The existence of MDP and complete MDP convolutional codes for arbitrary rate and degree has been shown for sufficiently large field sizes. Moreover, there exist essentially two general construction techniques for these codes, which we present here. However, these constructions require very large field sizes, although the second of them works for arbitrary characteristic of the field. One goal is therefore to investigate which field sizes make it possible for MDP or complete MDP convolutional codes with given rate and degree to exist. Furthermore, we aim to construct such codes over fields of small size, starting with rather small values of the code parameters. |
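As a small aside on the pairing abstract earlier in this session: the extension degree in which pairing values live is the embedding degree of the curve's subgroup, the smallest k with r | p^k - 1, and it can be computed directly from that definition. A toy sketch (the numbers are illustrative only, far from cryptographic size):

```python
def embedding_degree(p, r):
    """Smallest k with r | p^k - 1: pairing values live in GF(p^k).
    Here p is the field characteristic and r the (prime) order of the
    pairing-friendly subgroup; k is the multiplicative order of p mod r."""
    k, x = 1, p % r
    while x != 1:
        x = (x * p) % r
        k += 1
    return k

# Toy example: a curve over GF(19) with a subgroup of order 5 would have
# pairing values in GF(19^2), since 19^2 - 1 = 360 is divisible by 5.
print(embedding_degree(19, 5))  # 2
```

Parameter selection for pairings balances the discrete logarithm hardness in the curve group against that in GF(p^k), which is exactly what the Tower NFS estimates in the talk refine.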
| 3:00pm - 5:00pm | MS132, part 3: Polynomial equations in coding theory and cryptography |
| Unitobler, F-123 | |
|
|
3:00pm - 5:00pm
Polynomial equations in coding theory and cryptography

Polynomial equations are central in algebraic geometry, since algebraic varieties are geometric manifestations of solutions of systems of polynomial equations. Indeed, modern algebraic geometry is based on techniques for studying and solving geometric problems about these sets of zeros. At the same time, polynomial equations have found interesting applications in coding theory and cryptography. The interplay between algebraic geometry and coding theory is old, going back to the first examples of algebraic codes defined with polynomials and codes coming from algebraic curves. More recently, polynomial equations have found important applications in cryptography as well. For example, in multivariate cryptography, one of the prominent candidates for post-quantum cryptosystems, the trapdoor one-way function takes the form of a multivariate quadratic polynomial map over a finite field. Furthermore, the efficiency of the index calculus attack for breaking an elliptic curve cryptosystem relies on the effectiveness of solving a system of multivariate polynomial equations. This session will feature recent progress in these and other applications of polynomial equations to coding theory and cryptography. (25 minutes for each presentation, including questions, followed by a 5-minute break; in case of x<4 talks, the first x slots are used unless indicated otherwise)

Classical and Quantum Evaluation Codes at the Trace Roots

We introduce a new class of evaluation linear codes by evaluating polynomials at the roots of a suitable trace function. We give conditions for self-orthogonality of these codes and their subfield-subcodes with respect to the Hermitian inner product. These allow us to construct stabilizer quantum codes over several finite fields which substantially improve the codes in the literature. For the binary case, we obtain records at http://codetables.de/. 
Moreover, we obtain several classical linear codes over the field with four elements which are records at http://codetables.de/. Joint work with C. Galindo and F. Hernando (Jaume I University).

Optimal curves and codes with locality

In some applications, it is desirable to have erasure codes with recovery algorithms for a relatively large number of missing pieces (erasures). To maintain data availability at all times, it is advantageous to recover information at one node, which may fail or be offline, by accessing a small number of other nodes. This leads to the notion of local recovery, meaning that for a code C of length n, a codeword symbol can be recovered by accessing at most r other coordinates of the codeword; the code C is then said to have locality r. Though there are tradeoffs in terms of the rate and minimum distance, one typically wants r small, so that communication of information from other locations is minimal, hence saving communication bandwidth. In addition, it is often desirable for each coordinate to have multiple recovery sets; such a code is said to have availability. In this talk, we consider codes with locality and availability constructed from optimal curves.

The Story of Solving Random Quadratic Multivariate Systems of Equations

Solving quadratic multivariate systems over finite fields is one of the fundamental problems in computer science and cryptography. In fact, Shannon is said to have remarked that breaking a good cipher should be as hard as solving a system of nonlinear equations. Exactly how hard that really is has been an interesting open problem. We discuss the interesting history and recent developments in solving multivariate quadratic systems, particularly over GF(2).

The Zeta Function for Generalized Rank Weights

The zeta function of a linear block code with the Hamming metric encodes its weight distribution in a convenient way. 
It is particularly useful for analyzing the structural properties of a family of codes that share the same weight enumerator. The definition of the zeta function is motivated by the properties of codes with the Hamming weight obtained from algebraic curves via Goppa's construction. The rank-metric analogue of the zeta function is defined as the generating function of the normalized q-binomial moments of a matrix code endowed with the rank distance. This algebraic object is a code invariant with respect to puncturing and shortening operations, and links the rank distribution of codes to a Riemann-type hypothesis in the context of coding theory. In the first part of the talk we present the main definitions and results of the theory of rank-metric zeta functions. We then extend this concept to generalized distributions of matrix codes, and discuss their duality theory. In particular, we present a generalized version of the MacWilliams identities for rank-metric codes, and prove some rigidity properties of extremal codes with respect to generalized distributions. (The new results in this talk are joint work with E. Byrne and A. Ravagnani.) |
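For the multivariate-systems talk above, the baseline against which every dedicated MQ solver is measured is exhaustive search over all 2^n assignments in GF(2)^n. A minimal sketch (the example system is ours, chosen for illustration):

```python
from itertools import product

def solve_mq_gf2(n, quads):
    """Exhaustive search for common zeros of quadratic equations over GF(2).
    Each equation is a function of the bit vector x returning an integer;
    it is satisfied when that value is 0 mod 2.  This 2^n brute force is
    the baseline that Groebner-basis and XL-type MQ solvers aim to beat."""
    return [x for x in product((0, 1), repeat=n)
            if all(eq(x) % 2 == 0 for eq in quads)]

# The system  x1*x2 + x3 = 0  and  x1 + x2 + x3 + 1 = 0  over GF(2):
eqs = [lambda x: x[0] * x[1] + x[2],
       lambda x: x[0] + x[1] + x[2] + 1]
print(solve_mq_gf2(3, eqs))  # -> [(0, 1, 0), (1, 0, 0), (1, 1, 1)]
```

Random dense systems over GF(2) are believed to require time exponential in n; the talk surveys how far structured algorithms improve on this brute-force exponent.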
| 3:00pm - 5:30pm | MS147, part 2: SC-square 2019 workshop on satisfiability checking and symbolic computation |
| Unitobler, F005 | |
|
|
3:00pm - 5:30pm

SC-square 2019 workshop on satisfiability checking and symbolic computation

Symbolic Computation is concerned with the algorithmic determination of exact solutions to complex mathematical problems; some recent developments in the area of Satisfiability Checking are starting to tackle similar problems, however with different algorithmic and technological solutions. The two communities share many central interests, but so far researchers from these two communities rarely interact. Furthermore, the lack of compatible interfaces for tools from the two areas is an obstacle to their fruitful combination. Bridges between the communities in the form of common platforms and road-maps are necessary to initiate a mutually beneficial exchange, and to support and direct their interaction. The aim of this workshop is to provide fertile ground to discuss, share knowledge and experience across both communities. The topics of interest include but are not limited to:
The 2016 and 2017 editions of the workshop were affiliated with conferences in Symbolic Computation. The 2018 edition was affiliated with FLoC, the international federated logic conference. More information at http://www.sc-square.org/workshops.html (25 minutes for each presentation, including questions, followed by a 5-minute break; in case of x<4 talks, the first x slots are used unless indicated otherwise)

Regular Paper 3 of SC-Square: Algorithmically generating new algebraic features of polynomial systems for machine learning

There are a variety of choices to be made in both computer algebra systems and satisfiability modulo theories (SMT) solvers which can impact performance without affecting the mathematical correctness of the end result. Such choices are candidates for machine learning (ML) approaches; however, there are difficulties in applying standard ML techniques, such as the efficient identification of ML features from input data, which is typically a polynomial system. Our focus is selecting the variable ordering for cylindrical algebraic decomposition (CAD), an important algorithm in Symbolic Computation which is also now used and adapted for SMT solvers. We studied prior ML work here and recognised a framework around the features used. Enumerating all options in this framework led to the automatic generation of many additional features. We validate the usefulness of these with an experiment which shows that an ML choice for CAD variable ordering is superior to those made by human-created heuristics, and is further improved by these additional features. We expect that this technique of feature generation could be useful for other choices related to CAD, or even choices for other algorithms with polynomial systems as input.

Extended Abstract 1 of SC-Square: On variable orderings in MCSAT for non-linear real arithmetic

Satisfiability-modulo-theories (SMT) is a technique for checking the satisfiability of logical formulas. 
In this context, a framework called model-constructing satisfiability calculus (MCSAT) was introduced, which allows the simultaneous construction of the Boolean and theory models, enabling more freedom in deciding on the model's variables. In this paper we report on implementation issues for non-linear real arithmetic and on our work in progress on heuristics for decision orderings on variables.

Extended Abstract 2 of SC-Square: On Benefits of Equality Constraints in Lex-Least Invariant CAD

McCallum was the first to show that it is possible to reduce the projection set for quantifier elimination problems which have equality constraints. Lazard provided a projection operator that reduces the projection set compared to Collins' original algorithm. In this paper, we aim to extend Lazard's work and provide a modification of his projection operator that reduces the projection set even further when there is an equality constraint in the quantifier elimination problem. This is similar to McCallum's modification and outputs a sign-invariant CAD: consequently, it cannot be used inductively, but only in the first step of the projection phase. In the further steps of the projection phase, we use Lazard's original projection operator. Nonetheless, reducing the output in the first step has a domino effect throughout the remaining steps, which significantly reduces the complexity.

Extended Abstract 3 of SC-Square: Evolutionary Virtual Term Substitution in a Quantifier Elimination System

Quantifier Elimination over real closed fields (QE) is a topic on the borderline of the Satisfiability Checking and Symbolic Computation communities, where quantified statements of polynomial constraints may be relevant to solving a Satisfiability Modulo Theories (SMT) problem. Feasible algorithms for QE date back to 1975 with Cylindrical Algebraic Decomposition (CAD) and to 1988 with Virtual Term Substitution (VTS). 
While implementations of these can be found in software such as QEPCAD and Redlog, they are not often found together, and especially not used concurrently as one poly-algorithm. This paper briefly explores the implications of such a poly-algorithm combining CAD and VTS, which the author is presently developing as part of a package in collaboration with Maplesoft, intended to make its way into a future version of Maple. One implication is that the system requires incremental CAD to be effective, which has already received some attention in this workshop series. This paper focuses in particular on proof-of-concept methods for incremental and decremental VTS that work for any multivariate problem previously solved by VTS. This may not only be desirable for QE when used in an SMT system; we also discuss its potential ramifications in the author's work-in-progress poly-algorithmic QE system, where users may be interested in incrementality and decrementality for stock QE.

Extended Abstract 4 of SC-Square: Lemmas for Satisfiability Modulo Transcendental Functions via Incremental Linearization

Incremental linearization is a conceptually simple, yet effective, technique that we have recently proposed for solving satisfiability problems over nonlinear real arithmetic constraints, including transcendental functions. A central step in the approach is the generation of linearization lemmas: constraints that are added to the SMT problem during search and that form a piecewise-linear approximation of the nonlinear functions in the input problem. It is crucial for both the soundness and the effectiveness of the technique that these constraints are valid (so as not to remove solutions) and as general as possible (to improve their pruning power). In this paper, we provide more details about how linearization lemmas are generated for transcendental functions, including proofs of their soundness. 
Such details, which were missing in previous publications, are necessary for an independent reimplementation of the method. |
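A minimal instance of the linearization lemmas discussed in the last abstract, for a convex transcendental function: the tangent to exp at any point is a globally valid linear lower bound, so adding it to the SMT problem can never remove a real solution. (The choice of exp and the helper name are ours, not from the paper.)

```python
import math, random

def tangent_lemma(a):
    """Since exp is convex, its tangent at a is a global lower bound:
    exp(x) >= exp(a) * (1 + x - a) for all real x.  Lemmas of this shape
    (tangent lower bounds, and secant upper bounds on an interval) build up
    the piecewise-linear approximation refined during search."""
    c = math.exp(a)
    return lambda x: c * (1.0 + x - a)

lower = tangent_lemma(0.0)      # tangent at 0: the line 1 + x
random.seed(1)
ok = all(math.exp(x) >= lower(x) for x in
         (random.uniform(-5, 5) for _ in range(1000)))
print(ok)  # True
```

Soundness of each such lemma is exactly the convexity inequality above; the pruning power grows as more tangent and secant points are added incrementally.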
| 5:15pm - 6:00pm | SI(AG)^2 Early Career Prize Lecture: Elina Robeva: Orthogonal Tensor Decomposition |
| vonRoll, Fabrikstr. 6, 001 | |
|
|
Orthogonal Tensor Decomposition

Massachusetts Institute of Technology, United States of America

Tensor decomposition has many applications; however, it is often a hard problem. In this talk we will discuss a family of tensors, called orthogonally decomposable, which retain some of the properties of matrices that general tensors lack. A symmetric tensor is orthogonally decomposable if it can be written as a linear combination of tensor powers of n orthonormal vectors. As opposed to general tensors, such tensors can be decomposed efficiently. We study the spectral properties of symmetric orthogonally decomposable tensors and give a formula for all of their eigenvectors. We also give polynomial equations defining the set of all such tensors. Analogously, we study nonsymmetric orthogonally decomposable tensors, describing their singular vector tuples and giving polynomial equations that define them. To extend the definition to a larger set of tensors, we define tight-frame decomposable tensors and study their properties. Finally, we conclude with some open questions and future research directions. |
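The efficient decomposability mentioned in the abstract is usually realized by tensor power iteration, which for orthogonally decomposable tensors converges to one of the orthonormal components. A small numerical sketch (the weights and vectors are ours, chosen for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# An orthogonally decomposable symmetric 3-tensor in R^2:
# T = 2 * v1^{(x)3} + 1 * v2^{(x)3} with v1, v2 orthonormal.
v1 = np.array([3.0, 4.0]) / 5.0
v2 = np.array([-4.0, 3.0]) / 5.0
T = 2.0 * np.einsum('i,j,k->ijk', v1, v1, v1) \
  + 1.0 * np.einsum('i,j,k->ijk', v2, v2, v2)

# Tensor power iteration x <- T(I, x, x) / ||.|| : in the (v1, v2)
# coordinates the map squares the coefficients, so from a generic start
# it converges double-exponentially fast to one of the components.
x = rng.normal(size=2)
for _ in range(50):
    x = np.einsum('ijk,j,k->i', T, x, x)
    x /= np.linalg.norm(x)

print(np.abs(x @ v1), np.abs(x @ v2))  # one is ~1, the other ~0
```

For general (non-odeco) tensors no such globally convergent, efficient scheme is known, which is one way to phrase why the odeco family is special.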
| 5:15pm - 6:00pm | SI(AG)^2 Early Career Prize Lecture streamed from 001: Elina Robeva: Orthogonal Tensor Decomposition |
| vonRoll, Fabrikstr. 6, 004 | |
