Conference Agenda

Overview and details of the sessions of this conference.

Session Overview
Session
MS157, part 2: Graphical models
Time: Wednesday, 10/Jul/2019, 3:00pm - 5:00pm

Location: Unitobler, F-121 (52 seats, 100 m²)

Presentations
3:00pm - 5:00pm

Graphical Models

Chair(s): Elina Robeva (Massachusetts Institute of Technology, United States of America)

Graphical models are used to express relationships between random variables. They have numerous applications in the natural sciences as well as in machine learning and big data. This minisymposium will feature talks on several different types of graphical models, including latent tree models, max-linear models, network models, Boltzmann machines, and non-Gaussian graphical models, with each talk exploiting the intrinsic algebraic, geometric, and combinatorial structure of its model class.
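
As a concrete illustration of how a graph encodes relationships between random variables, the following sketch shows a hypothetical three-variable Gaussian graphical model (the matrix and sample size are invented for the example) in which a zero entry of the precision matrix corresponds to a missing edge, i.e., a conditional independence:

import numpy as np

# Hypothetical 3-variable Gaussian graphical model: the zero pattern of
# the precision matrix K = Sigma^{-1} encodes the missing edges of the
# graph, i.e., conditional independences. Here there is no edge 1-3.
K = np.array([[2.0, 0.6, 0.0],
              [0.6, 2.0, 0.5],
              [0.0, 0.5, 2.0]])
Sigma = np.linalg.inv(K)

rng = np.random.default_rng(0)
X = rng.multivariate_normal(np.zeros(3), Sigma, size=100_000)

# The empirical precision matrix recovers the zero at position (1, 3):
K_hat = np.linalg.inv(np.cov(X, rowvar=False))
print(np.round(K_hat, 2))   # entry [0, 2] is close to 0: X1 indep. X3 given X2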


(Each presentation has 25 minutes, including questions, followed by a 5-minute break; if there are x < 4 talks, the first x slots are used unless indicated otherwise.)


Interventional Markov Equivalence for Mixed Graph Models

Liam Solus
KTH Royal Institute of Technology

We will discuss the problem of characterizing Markov equivalence of graphical models under general interventions. Recently, Yang et al. (2018) gave a graphical characterization of interventional Markov equivalence for DAG models that relates to the global Markov properties of DAGs. Building on this, we extend the notion of interventional Markov equivalence using global Markov properties of loopless mixed graphs and generalize their graphical characterization to ancestral graphs. On the other hand, we also extend the notion of interventional Markov equivalence via modifications of factors of distributions that are Markov with respect to acyclic directed mixed graphs. We prove that these two generalizations coincide at their intersection, i.e., for directed ancestral graphs. This yields a graphical characterization of interventional Markov equivalence for causal models that incorporate latent confounders and selection variables, under assumptions on the intervention targets that are reasonable for biological applications.
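
For orientation, recall the classical observational criterion of Verma and Pearl: two DAGs are Markov equivalent if and only if they have the same skeleton and the same v-structures. The sketch below implements only this baseline check (the example graphs and their encoding are illustrative), not the interventional or ancestral-graph generalizations discussed in the talk:

from itertools import combinations

def skeleton(edges):
    # Undirected edge set of a DAG given as (parent, child) pairs.
    return {frozenset(e) for e in edges}

def v_structures(edges):
    # Unshielded colliders a -> c <- b with a, b non-adjacent.
    skel = skeleton(edges)
    parents = {}
    for a, c in edges:
        parents.setdefault(c, set()).add(a)
    out = set()
    for c, pa in parents.items():
        for a, b in combinations(sorted(pa), 2):
            if frozenset((a, b)) not in skel:
                out.add((a, c, b))
    return out

def markov_equivalent(g1, g2):
    # Verma-Pearl: same skeleton and same v-structures.
    return skeleton(g1) == skeleton(g2) and v_structures(g1) == v_structures(g2)

chain    = {("X", "Y"), ("Y", "Z")}  # X -> Y -> Z
rchain   = {("Y", "X"), ("Z", "Y")}  # X <- Y <- Z
collider = {("X", "Y"), ("Z", "Y")}  # X -> Y <- Z
print(markov_equivalent(chain, rchain))    # True
print(markov_equivalent(chain, collider))  # False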


Sequential Monte Carlo-based inference in decomposable graphical models

Jimmy Olsson
KTH Royal Institute of Technology

We shall discuss a sequential Monte Carlo-based approach to approximating probability distributions defined on spaces of decomposable graphs or, more generally, on spaces of junction (clique) trees associated with such graphs. In particular, we apply a particle Gibbs version of the algorithm to Bayesian structure learning in decomposable graphical models, where the target distribution is a junction tree posterior distribution. Moreover, we use the proposed algorithm to explore certain fundamental combinatorial properties of decomposable graphs, e.g., clique size distributions. Our approach requires the design of a family of proposal kernels, so-called junction tree expanders, which expand junction trees by randomly connecting new nodes to the underlying graphs. The performance of the estimators is illustrated through a collection of numerical examples demonstrating the feasibility of the suggested approach in high-dimensional domains.
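
The junction-tree machinery is too involved for a short example, but the propose/weight/resample pattern shared by sequential Monte Carlo methods can be sketched on a toy one-dimensional state-space model. The model and all parameters below are invented for illustration; in the setting of the talk, the proposal kernel would instead be a junction tree expander:

import numpy as np

rng = np.random.default_rng(1)

# Toy 1-D state-space model (invented for illustration):
#   x_t = 0.9 * x_{t-1} + N(0, 1),   y_t = x_t + N(0, 0.5^2)
T, N = 50, 1000
x_true = np.zeros(T)
for t in range(1, T):
    x_true[t] = 0.9 * x_true[t - 1] + rng.normal()
y = x_true + 0.5 * rng.normal(size=T)

particles = rng.normal(size=N)
est = np.zeros(T)
for t in range(T):
    # Propose: move particles through the transition kernel (the role a
    # junction tree expander plays in the decomposable-graph setting).
    particles = 0.9 * particles + rng.normal(size=N)
    # Weight: observation likelihood of each particle.
    logw = -0.5 * ((y[t] - particles) / 0.5) ** 2
    w = np.exp(logw - logw.max())
    w /= w.sum()
    # Resample: multinomial resampling rebalances the particle set.
    particles = particles[rng.choice(N, size=N, p=w)]
    est[t] = particles.mean()

print(f"filtered-mean RMSE: {np.sqrt(np.mean((est - x_true) ** 2)):.3f}")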


CausalKinetiX: Learning stable structures in kinetic systems

Jonas Peters
University of Copenhagen

Learning kinetic systems from data is a core challenge in many fields. Identifying stable models is essential for the generalization capabilities of data-driven inference. We introduce a computationally efficient framework, called CausalKinetiX, that identifies structure from noisy, discrete-time observations generated from heterogeneous experiments. The algorithm assumes the existence of an underlying, invariant kinetic model. Results on both simulated and real-world examples suggest that learning the structure of kinetic systems can indeed benefit from a causal perspective. The talk is based on joint work with Niklas Pfister and Stefan Bauer and does not require prior knowledge of causality or kinetic systems.
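
To convey the flavor of invariance-based structure scoring (a loose caricature, not the actual CausalKinetiX procedure; the variables and data-generating model below are invented), one can rank candidate models for a derivative by their worst-case fit across heterogeneous experiments:

import numpy as np

rng = np.random.default_rng(2)

# The true dynamics are dx/dt = 2*A - B; the invented variable C tracks
# dx within each experiment, but only up to an environment-dependent offset.
def experiment(shift, n=300):
    A = rng.normal(loc=shift, size=n)
    B = rng.normal(size=n)
    dx = 2 * A - B + 0.1 * rng.normal(size=n)
    C = 2 * A - B + shift            # spurious, unstable predictor
    return np.column_stack([A, B, C]), dx

envs = [experiment(s) for s in (0.0, 1.0, -2.0)]
candidates = {"{A,B}": [0, 1], "{A,C}": [0, 2], "{C}": [2]}

for name, cols in candidates.items():
    # Fit one model on pooled data, score it by its worst per-experiment MSE:
    Xp = np.vstack([X[:, cols] for X, _ in envs])
    yp = np.concatenate([dx for _, dx in envs])
    beta, *_ = np.linalg.lstsq(Xp, yp, rcond=None)
    worst = max(np.mean((dx - X[:, cols] @ beta) ** 2) for X, dx in envs)
    print(f"model {name}: worst-case MSE = {worst:.2f}")
# The invariant (causal) model {A,B} attains the smallest worst-case error.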


Autoencoders memorize training images

Caroline Uhler
MIT

The ability of deep neural networks to generalize well in the overparameterized regime has become a subject of significant research interest. We show that overparameterized autoencoders exhibit memorization, a form of inductive bias that constrains the functions learned through the optimization process to concentrate around the training examples, although the network could in principle represent a much larger function class. In particular, we prove that single-layer fully-connected autoencoders project data onto the (nonlinear) span of the training examples. In addition, we show that deep fully-connected autoencoders learn a map that is locally contractive at the training examples, and hence iterating the autoencoder results in convergence to the training examples.
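
The linear special case is easy to check numerically. In the sketch below (dimensions, step size, and initialization are invented for illustration), a single-layer linear autoencoder trained on n < d examples approximately learns the projector onto the span of the training data, so iterating it drives a random input into that span:

import numpy as np

rng = np.random.default_rng(3)

# Linear caricature of the memorization result (hypothetical sizes):
# a single-layer linear autoencoder W trained by gradient descent on
# n < d examples learns approximately the projection onto span(X),
# so iterating it maps any input into the span of the training data.
d, n = 50, 3
X = rng.normal(size=(d, n))          # n training examples in R^d

W = 0.01 * rng.normal(size=(d, d))   # small random initialization
for _ in range(5000):
    grad = (W @ X - X) @ X.T         # gradient of 0.5 * ||W X - X||_F^2
    W -= 1e-3 * grad

P = X @ np.linalg.pinv(X)            # exact orthogonal projector onto span(X)
z = rng.normal(size=d)
for _ in range(30):                  # iterate the trained autoencoder
    z = W @ z
print(np.linalg.norm(z - P @ z))     # small: z has converged into span(X)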