# Events for November 15, 2017 from General and Seminar calendars

## Student Working Seminar in Groups and Dynamics

**Time:** 1:00PM - 2:00PM

**Location:** BLOC 628

**Speaker:** Krzysztof Święcicki

**Title:** *Are L^p and l^p coarsely equivalent? II*

**Abstract:** This time I'll prove that there is no equivariant coarse embedding of L^p into l^p and mention how this can be useful for attacking the general problem of coarse equivalence between the two spaces.

## Number Theory Seminar

**Time:** 1:45PM - 2:45PM

**Location:** BLOC 220

**Speaker:** Riad Masri, Texas A&M University

**Title:** *Inequalities satisfied by the Andrews smallest parts function*

**Abstract:** I will discuss a proof of recent conjectures of Chen concerning inequalities satisfied by the Andrews smallest parts function. The proof relies on a new method for bounding coefficients of weak Maass forms which are given as finite sums of singular moduli. This is joint work with Maddie Locus.


## Numerical Analysis Seminar

**Time:** 3:00PM - 4:00PM

**Location:** BLOC 628

**Speaker:** Paul Hand, Rice University

**Title:** *Deep Compressed Sensing*

**Abstract:** Combining principles of compressed sensing with deep neural network-based generative image priors has recently been empirically shown to require 10X fewer measurements than traditional compressed sensing in certain scenarios. As deep generative priors (such as those obtained via generative adversarial training) improve, analogous improvements in the performance of compressed sensing and other inverse problems may be realized across the imaging sciences. In joint work with Vladislav Voroninski, we provide a theoretical framework for studying inverse problems subject to deep generative priors. In particular, we prove that with high probability, the non-convex empirical risk objective for enforcing random deep generative priors subject to compressive random linear observations of the last layer of the generator has no spurious local minima, and that for a fixed network depth, these guarantees hold at order-optimal sample complexity.

## Postdoc Colloquium Series

**Time:** 4:00PM - 5:00PM

**Location:** BLOC 220

**Speaker:** Julia Plavnik, Texas A&M University

**Title:** *On the classification of modular tensor categories*

**Abstract:** The problem of classifying modular tensor categories is motivated by applications to topological quantum computation, where these categories serve as algebraic models for topological phases of matter. These categories also have applications in different areas of mathematics, such as topological quantum field theory, von Neumann algebras, and representation theory. In this talk, we will start by introducing some of the basic definitions and properties of fusion, braided, and modular tensor categories, and we will also give some concrete examples to better understand their structures. The aim of the talk is to give an overview of the current state of the classification program for modular categories. We will explain some of the techniques that we have found useful for pushing the classification further, with a focus on new constructions of modular tensor categories. If time allows, we will mention some results for the super-modular case.

## AMUSE

**Time:** 6:00PM - 7:00PM

**Location:** BLOC 220

**Speaker:** Amin Zeiaee, PhD Student, Texas A&M University, Department of Mechanical Engineering

**Title:** *Design and Control of a Novel Rehabilitation Exoskeleton*

**Abstract:** Mathematical techniques are widely used in the design and control of robotic systems. This talk will provide a brief overview of some of these methods and will show how they are used in the development of a novel rehabilitation exoskeleton, named CLEVERarm, designed in the Laboratory for Control, Robotics and Automation at Texas A&M University.

## AMUSE

**Time:** 7:00PM - 7:15PM

**Location:** BLOC 220

**Speaker:** Angelique Morvant, Undergraduate Student, Texas A&M University, Department of Mathematics

**Title:** *Neural Ideals and Receptive Fields*

**Abstract:** Place cells are neurons in the brain that fire when an animal is in a given location, allowing the animal to navigate its environment. The area in which a particular cell fires is called its receptive field, and current research focuses on determining how the firing of place cells encodes spatial information about receptive fields. One way to represent the firing patterns of neurons is by a mathematical object called a neural ideal; the presence of certain polynomials in this ideal gives information about the relationships among the receptive fields. However, it may also be useful to know when the converse is true; that is, when do receptive field relationships tell us which polynomials are in the corresponding neural ideal? The answer to that question is the subject of this talk.