Texas A&M University
Mathematics

Industrial and Applied Math

Spring 2021

 

Date: March 15, 2021
Time: 5:00pm
Location: Zoom
Speaker: Dr. Anne Shiu, Texas A&M University
Title: Identifiability of linear compartment models
Abstract: This talk focuses on the question of how identifiability of a mathematical model, that is, whether parameters can be recovered from data, is related to identifiability of its submodels. We look specifically at linear compartmental models -- which arise in many applications including epidemiology and pharmacokinetics -- and investigate whether identifiability is preserved after adding or removing parts of the model. In particular, we examine whether identifiability is preserved when an input, output, edge, or leak is added or deleted. Our results harness standard differential algebraic techniques, so that the question of whether a model is (generically, locally) identifiable becomes equivalent to asking whether the Jacobian matrix of a certain coefficient map, arising from input-output equations, is generically full rank. Along the way, we discover a new combinatorial formula for these input-output equations, and also pose several conjectures. Additionally, we highlight the contributions of four undergraduate and three graduate student co-authors.
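
The full-rank criterion above can be checked symbolically. Below is a minimal sketch, not drawn from the talk: the toy two-compartment model, its rate constants a01, a12, a21, and the stated input-output equation are illustrative assumptions, and the Jacobian of the resulting coefficient map is tested for generic full rank with sympy.

    # A minimal sketch (illustrative assumptions, not the speaker's model or code).
    # Toy 2-compartment model with input, output, and a leak a01 in compartment 1:
    #   x1' = -(a01 + a21) x1 + a12 x2 + u,   x2' = a21 x1 - a12 x2,   y = x1.
    # Eliminating x2 gives the input-output equation
    #   y'' + (a01 + a21 + a12) y' + a01 a12 y = u' + a12 u,
    # whose nontrivial coefficients define the coefficient map checked below.
    import sympy as sp

    a01, a12, a21 = sp.symbols('a01 a12 a21', positive=True)
    coeff_map = sp.Matrix([a01 + a21 + a12, a01 * a12, a12])
    params = sp.Matrix([a01, a12, a21])

    J = coeff_map.jacobian(params)     # 3 coefficients x 3 parameters
    print("det(J) =", J.det())         # a12, nonzero for generic parameters
    print("generic rank:", J.rank())   # 3 = number of parameters
    # A generically full-rank Jacobian means this toy model is generically locally identifiable.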

Date: March 29, 2021
Time: 6:30pm
Location: Zoom
Speaker: Dr. Chak Shing Lee, Lawrence Livermore National Lab
Title: A multilevel coarsening method for graph Laplacians in mixed form and its applications
Abstract: In this talk, we will discuss a multilevel coarsening method for weighted graph Laplacians formulated in a mixed saddle-point form. In this formulation, there are two sets of degrees of freedom (dofs), which are respectively associated with the vertices and edges of the underlying graph. We construct coarse spaces for both sets of dofs based on aggregation of vertices, exploiting local spectral decompositions. The coarse spaces have a provable approximation property, and are constructed such that the inf-sup stability on coarse levels is maintained. The coarsening procedure is local, with a high computation-to-communication ratio, the benefit of which is illustrated in a parallel scaling test. An efficient linear solver for the coarse problems is also developed based on the hybridization technique. For applications of the coarsening method, we focus on graph Laplacians coming from finite volume discretizations of problems in oil reservoir simulation. Numerical examples demonstrating its applications in constructing eigensolvers for graph Laplacians, multilevel samplers, multilevel Monte Carlo simulations, and nonlinear multigrid solvers will be presented.
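
As a rough illustration of the aggregation-plus-local-spectral-decomposition idea, here is a minimal sketch under strong simplifying assumptions (the graph, the aggregates, and the number of local modes are made up, and this is not the speaker's method or code): it coarsens a plain vertex-based graph Laplacian only, without the mixed saddle-point form, the inf-sup-stable edge spaces, or the hybridized coarse solver discussed in the talk.

    # A minimal sketch: spectral aggregation-based coarsening of a weighted graph Laplacian.
    import numpy as np

    def graph_laplacian(n_vertices, weighted_edges):
        """Assemble L = D - A from a list of (i, j, weight) edges."""
        L = np.zeros((n_vertices, n_vertices))
        for i, j, w in weighted_edges:
            L[i, i] += w; L[j, j] += w
            L[i, j] -= w; L[j, i] -= w
        return L

    def coarse_basis(L, aggregates, n_modes=2):
        """Per aggregate, keep the n_modes lowest eigenvectors of the restricted
        (local) Laplacian, extended by zero: a crude local spectral coarse space."""
        cols = []
        for agg in aggregates:
            idx = np.array(agg)
            _, vecs = np.linalg.eigh(L[np.ix_(idx, idx)])
            for k in range(min(n_modes, len(idx))):
                col = np.zeros(L.shape[0])
                col[idx] = vecs[:, k]
                cols.append(col)
        return np.column_stack(cols)

    # Toy example: a weighted 6-cycle split into two aggregates of 3 vertices each.
    edges = [(0, 1, 1.0), (1, 2, 2.0), (2, 3, 1.0), (3, 4, 2.0), (4, 5, 1.0), (5, 0, 2.0)]
    L = graph_laplacian(6, edges)
    P = coarse_basis(L, aggregates=[[0, 1, 2], [3, 4, 5]], n_modes=2)
    L_coarse = P.T @ L @ P        # Galerkin coarse operator
    print(L_coarse.shape)         # (4, 4): two local modes per aggregate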

Date: April 19, 2021
Time: 3:00pm
Location: Zoom
Speaker: Yufei Zhang, University of Oxford
Title: Deep Neural Networks for High-Dimensional PDEs in Stochastic Control and Games
Abstract: In this talk, we discuss the feasibility of algorithms based on deep artificial neural networks (DNNs) for the solution of high-dimensional PDEs, such as those arising from stochastic control and games. In the first part, we show that in certain cases, DNNs can break the curse of dimensionality in representing high-dimensional value functions of stochastic control problems. We then exploit policy iteration to reduce the nonlinear PDEs to a sequence of linear PDEs, which are then further approximated via a multilayer feedforward neural network ansatz. We establish that in suitable settings the numerical solutions and their derivatives converge globally, and further demonstrate that this convergence is superlinear, by interpreting the algorithm as an inexact Newton iteration. Moreover, we construct the optimal feedback controls based on the superlinear convergence of the numerical solutions. Numerical experiments are presented to illustrate the theoretical results and to demonstrate the effectiveness of the method. This is joint work with Christoph Reisinger and Kazufumi Ito.
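
To make the "nonlinear problem becomes a sequence of linear problems" step concrete, here is a minimal sketch under heavy simplifications (all names and sizes are illustrative, and this is not the speaker's algorithm): a tiny discrete-state, discounted control problem stands in for the PDE, and exact linear solves replace the neural-network ansatz used in the talk, so only the structure of policy iteration survives.

    # A minimal sketch of policy iteration on a discrete-state discounted control problem:
    # each policy-evaluation step is a LINEAR solve; the nonlinearity enters only through
    # the pointwise minimization in the policy-improvement step.
    import numpy as np

    rng = np.random.default_rng(0)
    n_states, n_actions, gamma = 5, 3, 0.9
    # Random transition matrices P[a] (rows sum to 1) and running costs c[a] (made up).
    P = rng.random((n_actions, n_states, n_states)); P /= P.sum(axis=2, keepdims=True)
    c = rng.random((n_actions, n_states))

    policy = np.zeros(n_states, dtype=int)
    for it in range(50):
        # Policy evaluation: solve (I - gamma * P_pi) v = c_pi for the current policy.
        P_pi = P[policy, np.arange(n_states)]
        c_pi = c[policy, np.arange(n_states)]
        v = np.linalg.solve(np.eye(n_states) - gamma * P_pi, c_pi)
        # Policy improvement: minimize over actions (the nonlinear step).
        q = c + gamma * P @ v              # shape (n_actions, n_states)
        new_policy = q.argmin(axis=0)
        if np.array_equal(new_policy, policy):
            break
        policy = new_policy
    print("policy:", policy, "found after", it + 1, "iterations")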

Date: April 26, 2021
Time: 5:00pm
Location: Zoom
Speaker: Dr. Boris Hanin, Princeton University
Title: Optimization and Generalization in Overparameterized Models
Abstract: Modern machine learning models, such as neural networks, have a number of theoretically puzzling but empirically robust properties. Chief among them are: (a) neural networks are trained on datasets whose size is much smaller than the total number of model parameters; (b) training proceeds by empirical risk minimization via a first-order method from a random starting point and, despite the non-convexity of the risk, typically returns a global minimizer; (c) this minimizer of the risk not only interpolates the data precisely but also performs well on unseen data (i.e. generalizes). The purpose of this talk is to introduce these fascinating properties and give some basic intuitions for why they might be possible. The emphasis will be on heuristics rather than on precise theorems.
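
Properties (a) and (c) can be seen in a toy linear-in-parameters model. The sketch below uses illustrative assumptions only (it is not material from the talk): a random-feature model with many more parameters than data points is fitted by a minimum-norm least-squares solve, interpolates the training data exactly, and often still tracks the smooth target at unseen points.

    # A minimal sketch: overparameterized random-feature regression (500 parameters, 10 data points).
    import numpy as np

    rng = np.random.default_rng(1)
    n_train, n_features = 10, 500                    # parameters >> data points
    target = lambda x: np.sin(2 * np.pi * x)

    # Random Fourier features phi_k(x) = cos(w_k x + b_k); the frequency scale is an assumption.
    w = rng.normal(0, 2 * np.pi, n_features)
    b = rng.uniform(0, 2 * np.pi, n_features)
    features = lambda x: np.cos(np.outer(x, w) + b)

    x_train = rng.uniform(0, 1, n_train)
    theta, *_ = np.linalg.lstsq(features(x_train), target(x_train), rcond=None)  # min-norm fit

    x_test = np.linspace(0, 1, 200)
    print("max train error:", np.abs(features(x_train) @ theta - target(x_train)).max())  # ~ 0
    print("mean test error:", np.abs(features(x_test) @ theta - target(x_test)).mean())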