Texas A&M University
Mathematics

Center for Approximation and Mathematical Data Analytics

Date: November 27, 2018

Time: 1:00PM - 2:00PM

Location: BLOC 220

Speaker: David Rolnick, University of Pennsylvania


Title: Reducing catastrophic forgetting in reinforcement learning

Abstract: Deep learning approaches often fail when the data changes over time, because the weights of a neural network overwrite what was learned in the past with what was learned most recently. This phenomenon, known as catastrophic forgetting, is a major impediment to learning in real-world situations. We examine this issue in the context of reinforcement learning, where the goal is to learn one game, then another, and then return to the first game without forgetting it. While many methods to counteract catastrophic forgetting have been proposed, we find that a simple approach based on experience replay performs as well as or better than these methods while requiring less task information. This work is part of a collaboration with Google DeepMind. No background knowledge of reinforcement learning is assumed.
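To illustrate the idea of experience replay mentioned in the abstract, the sketch below shows a minimal replay buffer. This is a generic illustration, not the speaker's implementation: the class name, capacity, and sampling scheme are assumptions. The key point is that transitions from earlier tasks remain in the buffer, so later training batches still revisit them instead of overwriting old knowledge.

```python
import random
from collections import deque

class ReplayBuffer:
    """Minimal experience-replay buffer (illustrative sketch only).

    Transitions from all tasks are stored together, so a batch sampled
    during later training still contains experience from earlier tasks,
    which counteracts catastrophic forgetting."""

    def __init__(self, capacity):
        # Oldest transitions are evicted first once capacity is reached.
        self.buffer = deque(maxlen=capacity)

    def add(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # Uniform sampling mixes old and new experience in every batch.
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))

    def __len__(self):
        return len(self.buffer)
```

In a multi-task setting, transitions from "game one" and "game two" would be added to the same buffer, and each gradient update would be computed on a sampled batch that mixes both.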