Texas A&M University
Mathematics

Inverse Problems and Machine Learning

Date: November 20, 2019

Time: 12:00PM - 1:00PM

Location: BLOC 628

Speaker: David Rolnick, UPenn


Title: Identifying Weights and Architectures of Unknown ReLU Networks

Abstract: The output of a neural network depends on its parameters in a highly nonlinear way, and it is widely assumed that a network's parameters cannot be identified from its outputs. Here, we show that in many cases it is possible to reconstruct the architecture, weights, and biases of a deep ReLU network given the ability to query the network. ReLU networks are piecewise linear and the boundaries between pieces correspond to inputs for which one of the ReLUs switches between inactive and active states. Thus, first-layer ReLUs can be identified (up to sign and scaling) based on the orientation of their associated hyperplanes. Later-layer ReLU boundaries bend when they cross earlier-layer boundaries and the extent of bending reveals the weights between them. Our algorithm uses this to identify the units in the network and weights connecting them (up to isomorphism). The fact that considerable parts of deep networks can be identified from their outputs has implications for security, neuroscience, and our understanding of neural networks. Joint work with Konrad Körding.
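The first-layer claim in the abstract can be illustrated with a toy sketch (this is an illustration of the underlying piecewise-linearity, not the speaker's actual algorithm; the network, seed, and probing line below are all hypothetical). Crossing the boundary where one ReLU switches state causes a jump in the network's gradient, and that jump is parallel to the unit's weight vector, so the hyperplane's orientation is recovered up to sign and scaling:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy network: one hidden ReLU layer (3 units, 2 inputs), scalar output.
W1 = rng.standard_normal((3, 2))
b1 = rng.standard_normal(3)
w2 = rng.standard_normal(3)

def net(x):
    return w2 @ np.maximum(W1 @ x + b1, 0.0)

def grad(x):
    # Gradient of the piecewise-linear network at x (valid away from boundaries):
    # only the active units contribute their weight rows.
    active = (W1 @ x + b1) > 0
    return (w2 * active) @ W1

# Probe along a random line x(t) = p + t*d and solve for where hidden unit 0's
# pre-activation crosses zero: W1[0] @ (p + t*d) + b1[0] = 0.
p, d = rng.standard_normal(2), rng.standard_normal(2)
t_star = -(W1[0] @ p + b1[0]) / (W1[0] @ d)

eps = 1e-6
g_minus = grad(p + (t_star - eps) * d)
g_plus = grad(p + (t_star + eps) * d)

# The gradient jump across the kink equals ±w2[0] * W1[0], i.e. it is parallel
# to the unit's weight vector: orientation recovered up to sign and scaling.
jump = g_plus - g_minus
u = jump / np.linalg.norm(jump)
v = W1[0] / np.linalg.norm(W1[0])
print(np.allclose(u, np.sign(u @ v) * v))
```

The same boundary-crossing idea extends to deeper layers: a later-layer boundary is only piecewise flat, and the "bend" where it crosses an earlier boundary encodes the connecting weight, as described in the abstract.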
