Texas A&M University
Mathematics

Mathematical Physics and Harmonic Analysis Seminar

Date: September 8, 2017

Time: 1:50PM - 2:50PM

Location: BLOC 628

Speaker: Boris Hanin, Texas A&M University


Title: Deep Learning: Approximation Theory, Convexity, and the Expressivity of Depth in Neural Networks

Abstract: Deep learning (DL) is the analysis and application of a class of algorithms that in the past few years have become state of the art in a huge number of machine learning problems: playing Go, image recognition/segmentation, and machine transcription/translation, to name a few. While DL works incredibly well in practice, a robust mathematical theory of why it works is still in its infancy. The purpose of this talk is to introduce one aspect of this subject: namely, the expressive power of deep neural nets, i.e., their ability to approximate a rich class of functions. Some of the first theorems on this topic were proved in the late 1980s and early 1990s. I will review them, discuss some more recent work, and point to a few open questions. This talk is based in part on ongoing joint work with Mark Sellke.
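As a minimal sketch of the "expressive power" idea mentioned in the abstract (not from the talk itself), the snippet below shows a one-hidden-layer ReLU network approximating a smooth 1-D function. The hidden weights are drawn at random and only the output layer is fit by least squares; all sizes and parameter choices here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target function to approximate on [-pi, pi]
x = np.linspace(-np.pi, np.pi, 400)
y = np.sin(x)

# Hidden layer: 200 random ReLU units, relu(w*x + b).
# The resulting approximant is piecewise linear with ~200 kinks.
n_hidden = 200
w = rng.normal(size=n_hidden)
b = rng.uniform(-np.pi, np.pi, size=n_hidden)
features = np.maximum(0.0, np.outer(x, w) + b)  # shape (400, 200)

# Fit only the output weights by least squares
coef, *_ = np.linalg.lstsq(features, y, rcond=None)
approx = features @ coef

max_err = np.max(np.abs(approx - y))
print(f"max abs error: {max_err:.4f}")
```

Even with random (untrained) hidden weights, a wide enough layer drives the error down, which is the flavor of the classical universal approximation results the talk reviews; the role of depth, the talk's focus, is about achieving such accuracy far more efficiently.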