Industrial and Applied Math

Date Time 
Location / Speaker 
Title and abstract 

03/15 5:00pm 
Zoom 
Dr. Anne Shiu, Texas A&M University 
Identifiability of linear compartment models
This talk focuses on the question of how identifiability of a mathematical
model, that is, whether its parameters can be recovered from data, is related
to identifiability of its submodels. We look specifically at linear
compartmental models, which arise in many applications including
epidemiology and pharmacokinetics, and investigate whether
identifiability is preserved after adding or removing parts of the model.
In particular, we examine whether identifiability is preserved when an
input, output, edge, or leak is added or deleted. Our results harness
standard differential algebraic techniques, so that the question of
whether a model is (generically, locally) identifiable becomes equivalent
to asking whether the Jacobian matrix of a certain coefficient map,
arising from input-output equations, is generically full rank. Along the
way, we discover a new combinatorial formula for these input-output
equations and also pose several conjectures. Additionally, we highlight
the contributions of four undergraduate and three graduate student
coauthors. 
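As a toy illustration of the rank criterion mentioned in the abstract (our own example, not taken from the talk), consider a hypothetical two-compartment model with leak rate a01 and exchange rates a21, a12, with input and output in compartment 1. Its input-output equation has coefficients that are polynomials in the parameters, and generic local identifiability can be checked by computing the rank of the Jacobian of that coefficient map:

```python
import sympy as sp

# Hypothetical 2-compartment model (illustrative, not from the talk):
#   x1' = -(a01 + a21) x1 + a12 x2 + u,   x2' = a21 x1 - a12 x2,   y = x1
a01, a21, a12 = sp.symbols('a01 a21 a12', positive=True)

# Coefficients of the input-output equation
#   y'' + (a01 + a21 + a12) y' + a01*a12 y = u' + a12 u
coeffs = sp.Matrix([a01 + a21 + a12, a01 * a12, a12])
params = sp.Matrix([a01, a21, a12])

# Generic local identifiability <=> the Jacobian of the coefficient map
# with respect to the parameters has full rank at generic parameter values.
J = coeffs.jacobian(params)
print(J.rank())  # 3 -> generically locally identifiable
```

Here all three parameters can be recovered locally from the input-output coefficients; a rank deficit would instead signal unidentifiability.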

03/29 6:30pm 
Zoom 
Dr. Chak Shing Lee, Lawrence Livermore National Lab 
A multilevel coarsening method for graph Laplacians in mixed form and its applications
In this talk, we will discuss a multilevel coarsening method for weighted graph Laplacians formulated in a mixed saddle-point form. In this formulation, there are two sets of degrees of freedom (dofs), associated respectively with the vertices and edges of the underlying graph. We construct coarse spaces for both sets of dofs based on aggregation of vertices, exploiting local spectral decompositions. The coarse spaces have a provable approximation property and are constructed so that inf-sup stability is maintained on coarse levels. The coarsening procedure is local with a high computation-to-communication ratio, a benefit illustrated in a parallel scaling test. An efficient linear solver for the coarse problems is also developed based on the hybridization technique. For applications of the coarsening method, we focus on graph Laplacians arising from finite volume discretizations in oil reservoir simulations. Numerical examples will be presented demonstrating applications in constructing eigensolvers for graph Laplacians, multilevel samplers, multilevel Monte Carlo simulations, and nonlinear multigrid solvers. 
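To make the two-dof setup concrete, here is a minimal sketch (our illustration, not from the talk) of a weighted graph Laplacian and its mixed saddle-point form on a small path graph, with edge dofs (fluxes) and vertex dofs (potentials); eliminating the edge dofs by a Schur complement recovers the usual Laplacian:

```python
import numpy as np

# Hypothetical 4-vertex path graph: edges (0,1), (1,2), (2,3) with weights w.
edges = [(0, 1), (1, 2), (2, 3)]
w = np.array([2.0, 1.0, 3.0])
n, m = 4, len(edges)

# Edge-by-vertex incidence matrix D (edge dofs x vertex dofs).
D = np.zeros((m, n))
for e, (i, j) in enumerate(edges):
    D[e, i], D[e, j] = 1.0, -1.0

W = np.diag(w)
L = D.T @ W @ D                      # weighted graph Laplacian

# Mixed saddle-point form: unknowns are edge fluxes and vertex potentials.
A = np.block([[np.linalg.inv(W), D],
              [D.T, np.zeros((n, n))]])

# Eliminating the edge dofs (Schur complement) recovers -L.
S = -D.T @ W @ D
print(np.allclose(S, -L))            # True
print(np.allclose(L @ np.ones(n), 0))  # constants lie in the kernel -> True
```

Coarsening in the talk's setting builds reduced versions of both blocks of A so that the coarse saddle-point problem stays inf-sup stable; this sketch only fixes the fine-level objects and notation.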

04/19 3:00pm 
Zoom 
Yufei Zhang, University of Oxford 
Deep Neural Networks for High-Dimensional PDEs in Stochastic Control and Games
In this talk, we discuss the feasibility of algorithms based on deep artificial neural networks (DNNs) for solving high-dimensional PDEs, such as those arising from stochastic control and games. In the first part, we show that in certain cases, DNNs can break the curse of dimensionality in representing high-dimensional value functions of stochastic control problems. We then exploit policy iteration to reduce the nonlinear PDEs to a sequence of linear PDEs, which are then further approximated via a multilayer feedforward neural network ansatz. We establish that in suitable settings the numerical solutions and their derivatives converge globally, and further demonstrate that this convergence is superlinear by interpreting the algorithm as an inexact Newton iteration. Moreover, we construct optimal feedback controls based on the superlinear convergence of the numerical solutions. Numerical experiments are presented to illustrate the theoretical results and to demonstrate the effectiveness of the method.
This is joint work with Christoph Reisinger and Kazufumi Ito. 
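The key mechanism, policy iteration turning a nonlinear equation into a sequence of linear solves, can be seen already in a discrete-state toy problem (our own sketch, standing in for the HJB setting of the talk): each iteration solves one linear system for the current policy and then takes a greedy improvement step.

```python
import numpy as np

# Minimal policy-iteration sketch on a discrete-state control problem
# (a stand-in for the continuous HJB setting): the nonlinear Bellman
# equation is solved via a sequence of *linear* systems, one per policy.
rng = np.random.default_rng(0)
n, gamma = 5, 0.9
P = rng.random((2, n, n)); P /= P.sum(axis=2, keepdims=True)  # transitions per action
c = rng.random((2, n))                                        # running costs per action

policy = np.zeros(n, dtype=int)
for _ in range(50):
    # Policy evaluation: linear solve (I - gamma * P_pi) V = c_pi
    P_pi = P[policy, np.arange(n)]
    c_pi = c[policy, np.arange(n)]
    V = np.linalg.solve(np.eye(n) - gamma * P_pi, c_pi)
    # Policy improvement: the greedy step carries all the nonlinearity
    new_policy = np.argmin(c + gamma * P @ V, axis=0)
    if np.array_equal(new_policy, policy):
        break
    policy = new_policy

# V now satisfies the nonlinear Bellman equation V = min_a (c_a + gamma * P_a V)
print(np.allclose(V, np.min(c + gamma * P @ V, axis=0)))  # True
```

In the talk's PDE setting, the linear solve is replaced by training a neural network on a linear PDE, and the superlinear convergence comes from viewing the loop as an inexact Newton iteration.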

04/26 5:00pm 
Zoom 
Dr. Boris Hanin, Princeton University 
Optimization and Generalization in Overparameterized Models
Modern machine learning models, such as neural networks, have a number of theoretically puzzling but empirically robust properties. Chief among them are: (a) neural networks are trained on datasets containing far fewer examples than the total number of model parameters; (b) training proceeds by empirical risk minimization via a first-order method from a random starting point and, despite the non-convexity of the risk, typically returns a global minimizer; (c) this minimizer of the risk not only interpolates the data precisely but also performs well on unseen data (i.e. generalizes). The purpose of this talk is to introduce these fascinating properties and give some basic intuitions for why they might be possible. The emphasis will be on heuristics rather than on precise theorems. 
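Properties (a)-(c) can be glimpsed in the simplest overparameterized setting, linear regression with more parameters than data points (our own toy sketch, not from the talk): gradient descent started from zero interpolates the data exactly and converges to the minimum-norm interpolator, one elementary instance of implicit bias.

```python
import numpy as np

# Toy sketch: overparameterized linear least squares, 10 samples vs 100
# parameters. Gradient descent from zero (i) fits the data exactly and
# (ii) lands on the minimum-norm interpolating solution.
rng = np.random.default_rng(0)
n, d = 10, 100
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

w = np.zeros(d)
lr = 0.01
for _ in range(20000):
    w -= lr * X.T @ (X @ w - y) / n   # gradient of the mean squared error

w_min_norm = np.linalg.pinv(X) @ y    # minimum-norm interpolator
print(np.allclose(X @ w, y, atol=1e-6))       # interpolates all n points
print(np.allclose(w, w_min_norm, atol=1e-6))  # implicit bias toward min norm
```

For neural networks the loss is non-convex and the analogous statements are far subtler, which is precisely the gap between these heuristics and the puzzles the talk addresses.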
The organizer for this seminar is Peter Kuchment.