Texas A&M University
Mathematics

Seminar in Random Tensors

Summer 2020


Date: May 28, 2020
Time: 4:00pm
Location: Zoom
Speaker: Ryan O'Donnell, CMU
Title: Quantum Tomography

Date: June 11, 2020
Time: 4:00pm
Location: Zoom
Speaker: Alperen Ali Ergür, CMU
Title: Counting Zeros of Random Sparse Polynomials
Abstract: We begin with a brief survey of the decades-long effort to find a multivariate generalization of Descartes' rule of signs, and then present our probabilistic contribution to the quest. Our goal is to be boring for algebraic geometry experts (sorry!) and accessible to a wider audience: anyone with a taste for combinatorics and probability is welcome to enjoy a humble contribution and many open problems. This is joint work with Peter Bürgisser and Josué Tonelli-Cueto; the published paper is titled "On the number of real zeros of random fewnomials".
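As background on the univariate starting point (an illustrative aside, not part of the abstract): Descartes' rule of signs says that the number of positive real roots of a real polynomial, counted with multiplicity, is at most the number of sign changes in its nonzero coefficient sequence, and has the same parity. A minimal Python sketch:

    def descartes_bound(coeffs):
        """Upper bound on positive real roots; coeffs run from the leading
        coefficient down, e.g. [1, 0, -3, 2] encodes x^3 - 3x + 2."""
        signs = [c > 0 for c in coeffs if c != 0]
        return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

    # x^3 - 3x + 2 = (x - 1)^2 (x + 2): two sign changes, two positive roots.
    print(descartes_bound([1, 0, -3, 2]))  # -> 2

The multivariate question surveyed in the talk asks for analogous bounds that depend on the number of monomials (the fewnomial structure) rather than on the degree.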

Date: June 18, 2020
Time: 2:00pm
Location: Zoom
Speaker: Guillaume Aubrun, Lyon 1
Title: Entangleability of cones
Abstract: Given two finite-dimensional cones, one can naturally define their minimal and maximal tensor products. We show that both coincide if and only if one of the cones is simplex-based, as was conjectured by Barker (1976). Our proof involves a mix of convex geometry, elementary algebraic topology, and computations inspired by quantum information theory (arXiv:1911.09663, joint with Ludovico Lami, Carlos Palazuelos and Martin Plávala).
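For orientation (these standard definitions are not spelled out in the abstract): the minimal tensor product of cones $C_1 \subseteq V_1$ and $C_2 \subseteq V_2$ is $C_1 \otimes_{\min} C_2 = \operatorname{conv}\{x \otimes y : x \in C_1, y \in C_2\}$, while the maximal one is $C_1 \otimes_{\max} C_2 = \{z \in V_1 \otimes V_2 : (\varphi \otimes \psi)(z) \ge 0 \text{ for all } \varphi \in C_1^*, \psi \in C_2^*\}$. One always has $C_1 \otimes_{\min} C_2 \subseteq C_1 \otimes_{\max} C_2$, and the pair is called entangleable when the inclusion is strict, in analogy with entangled quantum states, which lie in the maximal but not the minimal tensor product of cones of positive semidefinite matrices.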

Date: July 2, 2020
Time: 4:00pm
Location: Zoom
Speaker: Pravesh Kothari, CMU
Title: The Sum-of-Squares Approach to Clustering Gaussian Mixtures
Abstract: Sum-of-Squares is a systematic proof system for reasoning about solutions to systems of multivariate polynomial inequalities over the reals. In the last few years, this proof system has been used to develop a principled method for designing efficient algorithms for average-case algorithmic problems: problems where the inputs are chosen according to some natural probability distribution. By relying on the relationship between sum-of-squares proofs and semidefinite programming, this method reduces designing an efficient algorithm for a given problem to giving a low-degree sum-of-squares certificate of the correctness of a purported solution. In this talk, I'll give an overview of this method by focusing on two recent works on designing algorithms for the problem of clustering mixtures of k Gaussians in d dimensions:
1. A $d^{O(\log^2 k)}$-time algorithm for clustering mixtures of k Gaussians with spherical covariances that succeeds whenever the mixture is information-theoretically clusterable. This algorithm relies on sum-of-squares certificates of subgaussian moments.
2. A $d^{O(k^{O(k)})}$-time algorithm for clustering mixtures of k arbitrary Gaussians that succeeds whenever the mixture is clusterable. This algorithm relies on sum-of-squares certificates of hypercontractivity of degree-2 polynomials and anti-concentration of linear projections of Gaussians.
The talk should be accessible to a broad audience and should not require any prior knowledge of the problems/techniques. It is based on joint works with Jacob Steinhardt and Ainesh Bakshi.
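To make the proofs-to-algorithms connection concrete: finding a low-degree sum-of-squares certificate is a semidefinite feasibility problem for a Gram matrix, which is how such proofs become algorithms. A minimal Python sketch using the generic SDP modeling library cvxpy, on a toy univariate example (illustrative only; this is not the clustering algorithm from the talk):

    import cvxpy as cp

    # Certify p(x) = x^4 + 2x^2 + 1 >= 0 by writing p(x) = m(x)^T Q m(x)
    # with monomial basis m(x) = [1, x, x^2] and Q positive semidefinite.
    Q = cp.Variable((3, 3), PSD=True)
    constraints = [
        Q[0, 0] == 1,                # constant term
        2 * Q[0, 1] == 0,            # coefficient of x
        2 * Q[0, 2] + Q[1, 1] == 2,  # coefficient of x^2
        2 * Q[1, 2] == 0,            # coefficient of x^3
        Q[2, 2] == 1,                # coefficient of x^4
    ]
    cp.Problem(cp.Minimize(0), constraints).solve()
    print(Q.value)

Any feasible Q factors as R^T R, so p = ||R m||^2 is an explicit sum of squares; the algorithms in the talk solve much larger relaxations of the same shape to certify statements such as subgaussianity of moments.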

Date: July 9, 2020
Time: 4:00pm
Location: Zoom
Speaker: Shmuel Friedland, UIC
Title: Tensor optimal transport, distance between sets of measures and tensor scaling
Abstract: We introduce the notion of Tensor Optimal Transport (TOT) on products of $d \ge 3$ discrete probability measures; the classical case is $d = 2$, which corresponds to matrices. This gives a way to define a metric on the space of products of $\ell > 1$ probability measures. We discuss an entropic relaxation of TOT and show that the optimal solution of this relaxation is obtained by diagonal scaling of a corresponding positive tensor. We prove geometric convergence of the Sinkhorn algorithm via partial minimization of strictly convex functions, and we give polynomial estimates of the errors for the TOT problem. A full account of our results is in S. Friedland, Tensor optimal transport, distance between sets of measures and tensor scaling, arXiv:2005.00945.
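For reference, the familiar $d = 2$ Sinkhorn iteration extends to higher $d$ by cycling through the marginals. A minimal Python/NumPy sketch for $d = 3$ (a toy with hypothetical names, not the paper's exact formulation or its error analysis):

    import numpy as np

    def sinkhorn_3(C, r1, r2, r3, eps=0.1, iters=500):
        """Entropic relaxation of 3-marginal optimal transport: diagonally
        scale K = exp(-C/eps) by positive vectors u1, u2, u3 so that
        T[i,j,k] = u1[i]*u2[j]*u3[k]*K[i,j,k] has marginals r1, r2, r3."""
        K = np.exp(-C / eps)
        u1 = np.ones(C.shape[0])
        u2 = np.ones(C.shape[1])
        u3 = np.ones(C.shape[2])
        for _ in range(iters):
            u1 = r1 / np.einsum('ijk,j,k->i', K, u2, u3)
            u2 = r2 / np.einsum('ijk,i,k->j', K, u1, u3)
            u3 = r3 / np.einsum('ijk,i,j->k', K, u1, u2)
        return np.einsum('i,j,k,ijk->ijk', u1, u2, u3, K)

    # Toy usage: random costs on a 4 x 5 x 6 grid, uniform marginals.
    rng = np.random.default_rng(0)
    T = sinkhorn_3(rng.random((4, 5, 6)), np.full(4, 0.25), np.full(5, 0.2), np.full(6, 1/6))
    print(T.sum(axis=(1, 2)))  # approx. [0.25, 0.25, 0.25, 0.25]

The geometric convergence established in the paper concerns exactly this kind of alternating scaling iteration.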

Date: July 30, 2020
Time: 4:00pm
Location: Zoom
Speaker: Visu Makam, IAS
Title: Maximum likelihood estimation for tensor normal models
Abstract: In statistics, maximum likelihood estimation is a method of estimating the parameters of a probability distribution by maximizing a likelihood function, so that under the assumed statistical model the observed data is most probable. The point in the parameter space that maximizes the likelihood function is called a maximum likelihood estimate (MLE). One important question is to understand whether a given number of samples suffices to expect (almost surely) 1. a bounded likelihood function, 2. existence of an MLE, and 3. existence of a unique MLE. For a collection of statistical models called Gaussian group models, connections to invariant theory, in particular the notions of stability, were discovered by Améndola, Kohn, Reichenbach and Seigal. In this talk, we will study tensor normal models, which correspond to tensor actions. Using castling transforms and results on stabilizers in general position, we completely determine, for a given number of samples, whether we have almost-sure boundedness of the likelihood function, existence of an MLE, and uniqueness of the MLE. This is joint work with Harm Derksen and Michael Walter.
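For concreteness (a simplified illustration, not the talk's invariant-theoretic analysis): in the $d = 2$ case the tensor normal model is the matrix normal model, and the classical way to search for the MLE of the two Kronecker factors is the alternating "flip-flop" iteration, which solves for each factor in closed form while the other is held fixed. A Python/NumPy sketch, assuming enough samples for the updates to be well defined:

    import numpy as np

    def flip_flop(X, iters=50):
        """MLE search for the matrix normal model vec(X_i) ~ N(0, B kron A).
        X has shape (n, p, q); returns the row covariance A (p x p) and
        column covariance B (q x q), identifiable only up to scale."""
        n, p, q = X.shape
        A, B = np.eye(p), np.eye(q)
        for _ in range(iters):
            Binv = np.linalg.inv(B)
            A = np.einsum('nij,jk,nlk->il', X, Binv, X) / (n * q)  # avg of X_i B^{-1} X_i^T
            Ainv = np.linalg.inv(A)
            B = np.einsum('nji,jk,nkl->il', X, Ainv, X) / (n * p)  # avg of X_i^T A^{-1} X_i
        return A, B

The results of the talk determine, for general $d$ and any sample size, exactly when such a likelihood is almost surely bounded and when an MLE exists and is unique.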