Abstracts
Florent Baudier, Texas A&M University
Nonlinear rigidity of classes of Banach spaces
A class C of Banach spaces is said to be rigid in a certain nonlinear category (e.g. isometric, Lipschitz,
coarse Lipschitz, coarse...) if a Banach space X must be a member of C whenever it
nonlinearly embeds into a member of C. We will first recall classical rigidity results in the isometric
and Lipschitz categories, as well as more recent coarse Lipschitz rigidity results.
Then we will discuss very recent joint work with P. Motakis, G. Lancien, and Th. Schlumprecht where it is
shown that the class of reflexive asymptotic-c0 Banach spaces is coarsely rigid.
The rigidity result is obtained by metrically characterizing the class under scrutiny in terms of a concentration
inequality for Lipschitz maps on the Hamming graphs.
The talk will be made accessible to non-specialists.
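For orientation, the type of concentration inequality involved can be sketched as follows (a hedged paraphrase; the exact constants and formulation in the talk may differ). Writing [M]^k for the set of k-element subsets of an infinite set M ⊆ ℕ, equipped with the Hamming distance d_H, the concentration property of a Banach space X asserts the existence of a constant C ≥ 1 such that

\[
\forall k \in \mathbb{N},\ \forall\, f : ([\mathbb{N}]^k, d_{\mathrm{H}}) \to X \ \text{Lipschitz},\ \exists\, M \subseteq \mathbb{N} \text{ infinite}: \quad \operatorname{diam} f\big([M]^k\big) \le C\, \operatorname{Lip}(f).
\]

Since the Hamming graph [ℕ]^k has diameter k, such concentration is a strong obstruction to coarse embeddings of spaces that lack it.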
Boris Hanin, Texas A&M University
Which neural net architectures give rise to exploding and vanishing gradients?
Due to its compositional nature, the function computed by a deep neural net often suffers from the so-called
exploding and vanishing gradient problem (EVGP), which occurs when the derivatives of the function computed by the
neural net vary wildly depending on the parameter with respect to which one takes the derivative. The EVGP happens
especially at the start of training when the weights and biases of the net are random. The purpose of this talk is
to present several rigorous probabilistic results which answer the question of which neural net architectures fall
prey to the EVGP in the simple case of fully connected ReLU nets.
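As a toy illustration of the phenomenon (this is an invented numerical sketch, not the setting or the results of the talk), the following script contrasts input-gradient norms of a random fully connected ReLU net under different weight scalings; all function names and parameters here are hypothetical.

```python
import numpy as np

def jacobian_row_norm(depth, width, weight_std, seed=0):
    """Norm of the gradient of a scalar output of a random fully connected
    ReLU net with respect to its input (toy illustration)."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(width)
    layers = []
    for _ in range(depth):
        W = rng.normal(0.0, weight_std, size=(width, width))
        pre = W @ x
        mask = (pre > 0).astype(float)   # ReLU derivative
        layers.append((W, mask))
        x = pre * mask                   # ReLU activation
    g = np.ones(width)                   # d(sum of outputs)/d(last layer)
    for W, mask in reversed(layers):     # backpropagate through the net
        g = W.T @ (g * mask)
    return float(np.linalg.norm(g))

# The "He" scaling std = sqrt(2/width) keeps ReLU gradients at a stable
# scale; smaller or larger scalings make them shrink or grow
# exponentially in the depth.
depth, width = 30, 200
stable  = jacobian_row_norm(depth, width, np.sqrt(2.0 / width), seed=1)
vanish  = jacobian_row_norm(depth, width, np.sqrt(0.5 / width), seed=1)
explode = jacobian_row_norm(depth, width, np.sqrt(8.0 / width), seed=1)
```

Each layer multiplies the expected squared gradient norm by weight_std² · width / 2, so over 30 layers the three scalings above separate by many orders of magnitude.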
Galyna Livshyts, Georgia Tech
Are log-concave measures better than log-concave when symmetry is added?
In recent years a number of conjectures have appeared concerning the improvement of
inequalities of Brunn-Minkowski type under the additional assumptions of symmetry;
this includes the B-conjecture, the Gardner-Zvavitch conjecture of 2008, the Log-Brunn-Minkowski conjecture
of 2012, and some variants. The conjecture of Gardner and Zvavitch, also known as the dimensional Brunn-Minkowski conjecture,
states that even log-concave measures in ℝⁿ
are in fact 1⁄n-concave with respect to the addition
of symmetric convex sets. In this talk we shall prove that the standard Gaussian measure enjoys
1⁄(2n)-concavity with respect to centered convex sets.
The improvements to the case of general
log-concave measures shall be discussed as well: under a certain compactness assumption, we show that
any symmetric log-concave measure is indeed better than log-concave with respect to the addition of
symmetric convex sets. This is a joint work with A. Kolesnikov.
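In this notation (a standard formulation; the talk's precise hypotheses may differ), p-concavity of a measure μ with respect to the addition of sets in a given class means

\[
\mu\big(\lambda A + (1-\lambda) B\big)^{p} \;\ge\; \lambda\, \mu(A)^{p} + (1-\lambda)\, \mu(B)^{p}
\]

for all λ ∈ [0,1] and all A, B in the class. The Gardner-Zvavitch conjecture asks for p = 1/n when μ is an even log-concave measure on ℝⁿ and A, B are symmetric convex sets; the result above establishes p = 1/(2n) for the standard Gaussian measure and centered convex sets.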
Deanna Needell, UCLA
Simple classification from binary data
Binary, or one-bit, representations of data arise naturally in many applications, and are appealing in both
hardware implementations and algorithm design. In this talk, we provide a brief background to sparsity and
1-bit measurements, and then present new results on the problem of data classification from binary data,
proposing a framework with low computation and resource costs. We illustrate the utility of the proposed
approach through stylized and realistic numerical experiments, provide a theoretical analysis for a simple case,
and discuss extensions including a hierarchical approach.
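A minimal, invented sketch of classification from one-bit data (this is not the framework of the talk, only an illustration of the idea that the binary codes sign(Ax) retain enough geometry to classify): two synthetic classes are represented by the average of their training codes, and test points are assigned to the nearer "prototype".

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, n_train, n_test = 20, 100, 50, 50       # data dim, bits, sample sizes
A = rng.standard_normal((m, d))                # random measurement matrix
c0, c1 = np.ones(d), -np.ones(d)               # two synthetic class centers

def sample(center, n):
    return center + 0.5 * rng.standard_normal((n, d))

# one-bit representations: keep only the signs of the measurements Ax
codes0 = np.sign(sample(c0, n_train) @ A.T)
codes1 = np.sign(sample(c1, n_train) @ A.T)
proto0, proto1 = codes0.mean(axis=0), codes1.mean(axis=0)  # class prototypes

def predict(X):
    B = np.sign(X @ A.T)
    return (B @ proto1 > B @ proto0).astype(int)  # higher correlation wins

X_test = np.vstack([sample(c0, n_test), sample(c1, n_test)])
y_test = np.array([0] * n_test + [1] * n_test)
accuracy = float((predict(X_test) == y_test).mean())
```

Despite discarding all magnitude information, the sign patterns of well-separated clusters remain highly correlated within a class, so this crude rule classifies the test points accurately.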
Liza Rebrova, University of Michigan
Constructive regularization of random matrix norm
In our work with R. Vershynin we have shown that as long as i.i.d. entries of an n x n random matrix A have
zero mean and finite second moment, the operator norm of A can be regularized to the optimal order
O(√n) locally (i.e., by zeroing out a small fraction of the entries of A). A natural question arising from this
result is how to find this small subset of the entries to be deleted.
In this talk I will discuss challenges of the constructive local regularization of ∥A∥,
some known strategies that work in special cases (e.g., for Bernoulli matrices), and a recent result
providing a simple constructive way to regularize the norm of a general matrix to the almost optimal order
O(√(log log n) · √n).
All the results mentioned are non-asymptotic: they hold with high probability (that we find explicitly) for all matrices of large enough size n.
Christopher Schafhauser, University of Waterloo
Quasidiagonality and classification
A concrete C*-algebra A of operators on a Hilbert space H is quasidiagonal if there are
arbitrarily large finite-dimensional subspaces of H almost reducing A. Through a series of deep
results starting with Blackadar and Kirchberg in the mid 90's, it is known that simple, nuclear, quasidiagonal
C*-algebras have a very rigid structure and admit certain internal approximations relevant to Elliott's
classification programme for simple, nuclear C*-algebras. A long-standing question of Blackadar and Kirchberg
asks if all nuclear C*-algebras with a faithful trace are quasidiagonal.
In 2015, substantial progress on the quasidiagonality problem was made by Tikuisis, White, and Winter, and
combined with the work of Elliott, Gong, Lin, and Niu earlier that year, this partial solution to the
quasidiagonality question led to a positive solution to Elliott's classification conjecture.
I will discuss the quasidiagonality and classification problems and how they are related and will also
discuss how a self-contained proof of the quasidiagonality theorem has led to new methods in the classification
and structure theory for simple, nuclear C*-algebras. This is partly based on joint work with
José Carrión, James Gabe, Aaron Tikuisis, and Stuart White.
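The quasidiagonality condition in the first paragraph can be phrased concretely as follows (a standard formulation): a concrete C*-algebra A ⊆ B(H) is quasidiagonal if there is a net of finite-rank projections (P_i) on H with

\[
P_i \to 1_H \ \text{strongly}, \qquad \|P_i a - a P_i\| \to 0 \ \text{for every } a \in A,
\]

so that the compressions P_i A P_i give finite-dimensional approximations of A that are asymptotically multiplicative.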
Konstantin Tikhomirov, Princeton University
On the Banach-Mazur distance to the cross-polytope
We show that for any n > 2 there is a symmetric convex body B in ℝⁿ
whose Banach-Mazur distance to the cross-polytope P is bounded below by
n^(5/9)/polylog(n).
This improves upon an earlier result of S. Szarek, who showed that there exists B' in ℝⁿ
with d(P, B') > n^(1/2) log(n). As with Szarek's result, our proof is based
on studying the geometry of random Gaussian polytopes.
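For reference, the Banach-Mazur distance between symmetric convex bodies K, L ⊂ ℝⁿ used above is (in its standard geometric form)

\[
d(K, L) = \inf \big\{ \lambda \ge 1 \,:\, L \subseteq T(K) \subseteq \lambda L, \ T \in GL_n(\mathbb{R}) \big\},
\]

and it is classical that d(K, L) ≤ n for any two such bodies, which is why lower bounds of polynomial order in n are of interest.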
Zhizhang Xie, Texas A&M University
Secondary invariants and the Baum-Connes conjecture
Secondary index theoretic invariants naturally arise in geometry and topology. A good understanding of these
secondary invariants has important implications in geometry and topology. For example, in my joint work with
Shmuel Weinberger and Guoliang Yu, we use secondary invariants to study the structure group of a topological
manifold and the space of positive scalar curvature metrics on a given spin manifold. In this talk, I will
consider two different notions of secondary invariants, one K-theoretic and one numerical. I will explain the
connection between the two, and some of their consequences. The talk is based on my joint work with
Guoliang Yu and Jinmin Wang, and the work of Sheagan John.