SUMIRFAS 2022
Abstracts
Simone Cecchini, University of Goettingen
A long neck principle for Riemannian spin manifolds with positive scalar curvature
I will present results in index theory on compact Riemannian spin manifolds with boundary in the case when the topological information is encoded by bundles which are supported away from the boundary. As a first application, we establish a "long neck principle" for a compact Riemannian spin n-manifold X with boundary, stating that if scal(X) ≥ n(n-1) and there is a nonzero degree map f into the n-sphere which is area decreasing, then the distance between the support of the differential of f and the boundary of X is at most π/n. This answers, in the spin setting, a question asked by Gromov. As a second application, we consider a manifold X obtained by removing k pairwise disjoint embedded n-balls from a closed spin n-manifold Y. We show that if scal(X) > σ > 0 and Y satisfies a certain condition expressed in terms of higher index theory, then the width of a geodesic collar neighborhood of ∂X is at most π√((n-1)/(nσ)).
Ken Dykema, Texas A&M University
Spectral and decomposable operators in finite von Neumann algebras
We give criteria for the classical notions of spectrality (due to Dunford) and decomposability (due to Foias) for operators belonging to finite von Neumann algebras, in terms of Haagerup-Schultz invariant projections. These allow us to prove that certain operators, including Voiculescu's circular operator, are not spectral.
Piotr Indyk, Massachusetts Institute of Technology
Learning-Based Algorithms: Replacing "Random" with "Learned"
Many of the classic results in geometric functional analysis, such as the Johnson-Lindenstrauss theorem, are proved using random structures (e.g., matrices). These results often translate into efficient randomized algorithms for a variety of computational problems. Such algorithms are highly efficient for worst-case inputs, but can be suboptimal for the specific inputs arising in particular applications. A recent line of work aims to address this issue by developing algorithms that use machine learning methods to improve their performance. Many of them can be viewed as using "learned" matrices (optimized for a particular class of inputs) instead of random ones. In this talk I will present some examples of this type, in the context of streaming and sketching algorithms.
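As a toy illustration of the "random" side of this picture (my own Python sketch, not taken from the talk), the snippet below builds a Johnson-Lindenstrauss-style Gaussian sketching matrix S and checks how well it preserves pairwise distances on synthetic data; a "learned" variant would replace S by a matrix optimized on representative inputs.

    import numpy as np

    rng = np.random.default_rng(0)
    n, d, k = 100, 1000, 200                   # number of points, ambient dimension, sketch dimension

    X = rng.normal(size=(n, d))                # synthetic inputs; a learned method would train on such data
    S = rng.normal(size=(k, d)) / np.sqrt(k)   # random Gaussian sketching matrix
    Y = X @ S.T                                # sketched points in R^k

    orig = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)   # pairwise distances before
    sket = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=-1)   # and after sketching
    mask = ~np.eye(n, dtype=bool)
    ratios = sket[mask] / orig[mask]
    print("distance distortion ratios lie in [%.3f, %.3f]" % (ratios.min(), ratios.max()))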
Wencai Liu, Texas A&M University
Quasi-periodic in time solutions of nonlinear random Schrödinger equations
For the Anderson model H_λ = Δ + λV, where Δ is the discrete Laplacian on the lattice ℤ^d, λ > 0 is the strength of the disorder, and V = {v_n}_{n∈ℤ^d} is a family of independent, identically distributed random variables, it is known that in one dimension H_λ has Anderson localization at arbitrary disorder λ, and in dimension d ≥ 2 it has Anderson localization at large disorder λ. With Weimin Wang, we proved a nonlinear version: namely, we established the persistence of quasi-periodic in time solutions of nonlinear random Schrödinger equations bifurcating from the linear ones.
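For readers who want to experiment, here is a small numerical sketch (my illustration, not part of the talk) of the linear Anderson model H_λ = Δ + λV on a finite interval of ℤ; at large disorder the eigenvectors are exponentially localized, which the inverse participation ratio makes visible. The lattice size, disorder strength, and uniform potential are arbitrary choices.

    import numpy as np

    rng = np.random.default_rng(1)
    N, lam = 500, 3.0                     # lattice size and disorder strength lambda

    # hopping part (discrete Laplacian, diagonal shift omitted) plus i.i.d. random potential
    H = np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)
    H += lam * np.diag(rng.uniform(-1.0, 1.0, size=N))

    vals, vecs = np.linalg.eigh(H)
    ipr = np.sum(vecs**4, axis=0)         # inverse participation ratio of each normalized eigenvector
    print("typical localization volume: about %.1f sites" % (1.0 / np.median(ipr)))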
Chunyang Liao, Texas A&M University
Optimal Recovery in Hilbert spaces from observational data
In Optimal Recovery, the task of learning a function from observational data is tackled deterministically by adopting a worst-case perspective. This talk focuses on the numerical construction of the best approximant and the computation of the worst-case error from exact or noisy observational data. We will also see how the optimal recovery framework relates to machine learning and regularization techniques. The talk is based on joint work with Simon Foucart.
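As a concrete point of contact with regularization (a minimal sketch under standard reproducing-kernel assumptions, not the construction from the talk), the snippet below recovers a function from point observations by minimum-norm interpolation in an RKHS with a Gaussian kernel, together with the Tikhonov/ridge-regularized analogue one would use with noisy data; the kernel, scale, and regularization parameter are illustrative choices.

    import numpy as np

    def gauss_kernel(s, t, scale=0.1):
        # Gaussian reproducing kernel on [0, 1]; the scale is an arbitrary choice
        return np.exp(-(s[:, None] - t[None, :])**2 / (2 * scale**2))

    f = lambda t: np.sin(2 * np.pi * t)   # "unknown" function, used only to generate data
    x = np.linspace(0.0, 1.0, 12)         # observation points
    y = f(x)                              # exact observations
    y_noisy = y + 0.05 * np.random.default_rng(2).normal(size=x.size)

    K = gauss_kernel(x, x)
    t = np.linspace(0.0, 1.0, 200)

    # minimum-norm interpolant for exact data (tiny jitter only for numerical stability),
    # and a Tikhonov/ridge-regularized analogue for noisy data
    c_interp = np.linalg.solve(K + 1e-10 * np.eye(x.size), y)
    c_ridge  = np.linalg.solve(K + 1e-3 * np.eye(x.size), y_noisy)

    f_interp = gauss_kernel(t, x) @ c_interp
    f_ridge  = gauss_kernel(t, x) @ c_ridge
    print("max error, exact data: %.3f" % np.max(np.abs(f_interp - f(t))))
    print("max error, noisy data: %.3f" % np.max(np.abs(f_ridge - f(t))))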
Nikolaos Panagopoulos, University of Houston
On invariant subalgebras of group C* and von Neumann algebras
Given an irreducible lattice Γ in the product of higher rank simple Lie groups, we prove that: (i) every Γ-invariant von Neumann subalgebra of ℒ(Γ) is generated by a normal subgroup; and (ii) given a non-amenable unitary representation π of Γ, every Γ-equivariant conditional expectation on C*_π(Γ) is the canonical conditional expectation onto the C*-subalgebra generated by a normal subgroup.
Joe Rosenblatt, University of Illinois at Urbana-Champaign
Good Functions for Bad Processes
In many cases in harmonic analysis and ergodic theory, for a stochastic process (T_t), a generic set 𝒢 of functions f has (T_t f) not convergent. Exploring the complement of 𝒢, where in fact (T_t f) does converge, can lead to worthwhile and interesting insights. Examples, results, and open questions of this type are the focus of this talk.
Mrinal Kanti Roychowdhury, The University of Texas-Rio Grande Valley
Quantization for probability distributions
Quantization for probability distributions refers to the idea of estimating a given probability distribution by a discrete probability distribution supported on a set of no more than n points. It has broad applications in signal processing and data compression. The quantization dimension measures how fast the specified error goes to zero as n approaches infinity. The quantization dimension is also connected with other dimensions of dynamical systems, such as the Hausdorff, packing, and box-counting dimensions; I will talk about this connection.
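To make the definition concrete, here is a small sketch (my own illustration in Python, not from the talk) that computes near-optimal n-point quantizers of a standard normal distribution with Lloyd's algorithm and prints the quantization error (here the expected squared distance to the nearest codebook point) as n grows; the rate at which this error decays is what the quantization dimension captures.

    import numpy as np

    rng = np.random.default_rng(3)
    sample = rng.normal(size=20000)       # samples from the distribution being quantized

    def quantization_error(sample, n, iters=60):
        # Lloyd's algorithm: alternate nearest-point assignment and centroid updates
        pts = np.quantile(sample, (np.arange(n) + 0.5) / n)       # initial n-point codebook
        for _ in range(iters):
            idx = np.argmin(np.abs(sample[:, None] - pts[None, :]), axis=1)
            for j in range(n):
                if np.any(idx == j):
                    pts[j] = sample[idx == j].mean()
        dist = np.min(np.abs(sample[:, None] - pts[None, :]), axis=1)
        return np.mean(dist**2)           # expected squared distance to the nearest point

    for n in (2, 4, 8, 16, 32):
        print("n = %2d   error = %.5f" % (n, quantization_error(sample, n)))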
Ramon van Handel, Princeton University
Vector concentration inequalities
How does the concentration of measure phenomenon extend to functions taking values in normed spaces? This question arises independently in functional analysis, metric geometry, and probability theory. For Gaussian measures, Pisier discovered a powerful principle that explains in complete generality how vector-valued functions concentrate. However, Pisier's argument appears to be very special to the Gaussian measure, and until recently this phenomenon was not known to hold in any other situation. A couple of years ago, the author discovered (with Paata Ivanisvili) a vector concentration principle on the discrete cube, which enabled the resolution of an old question of Enflo in the metric theory of Banach spaces. The situation on the discrete cube remains somewhat special, however, and the core of the argument is based on an explicit computation. Nonetheless, it appears that there is a more fundamental principle underlying all known results that could open the door to a general theory. As a proof of concept, I will describe in this talk a general vector concentration theory that holds under the classical Bakry-Emery condition, which captures many new examples (e.g., all Riemannian manifolds with positive Ricci curvature). I will also discuss various open questions that arise from this approach and potential applications in geometry.
Tianyi Zheng, University of California-San Diego
Some examples of limit theorems for random walks on groups
We discuss two examples of limit theorems on groups. The first is a strong law of large numbers on the wreath product over the two-dimensional lattice: under a sublinear scaling, the displacement of the random walk converges to a constant almost surely. The second concerns functional central limit theorems for a natural class of long-range random walks on nilpotent groups. The limiting process lives on a nilpotent Lie group carrying an adapted dilation structure and is a stable-like process that arises as the limit of a rescaled version of the random walk.