Stable Architectures for Deep Neural Networks

Eldad Haber, Lars Ruthotto

May 2017

Abstract:

Deep neural networks have become invaluable tools for supervised machine learning, e.g., in classification of text or images. While offering superior flexibility to find and express complicated patterns in data, deep architectures are known to be challenging to design and train so that they generalize well to new data. An important issue is numerical instability in derivative-based learning algorithms, commonly called exploding or vanishing gradients. In this paper we propose new forward propagation techniques inspired by systems of Ordinary Differential Equations (ODEs) that overcome this challenge and lead to well-posed learning problems for arbitrarily deep networks.
The backbone of our approach is interpreting deep learning as a parameter estimation problem of a nonlinear dynamical system. Given this formulation, we analyze the stability and well-posedness of deep learning and, motivated by our findings, develop new architectures. We relate the exploding and vanishing gradient phenomenon to the stability of the discrete ODE and present several strategies for stabilizing deep learning for very deep networks. While our new architectures restrict the solution space, several numerical experiments show their competitiveness with state-of-the-art networks.
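
As a rough illustration of the ODE viewpoint (a minimal sketch, not taken from the paper itself), the code below treats a ResNet-style residual update Y_{j+1} = Y_j + h * tanh(K_j Y_j + b_j) as a forward Euler step of the continuous system dY/dt = tanh(K(t) Y + b(t)). The function name forward_euler_propagation, the tanh activation, the step size h, and the antisymmetric reparameterization (K - K^T)/2 used to keep the layer Jacobian's eigenvalues away from the positive half-plane are all illustrative assumptions.

    import numpy as np

    def forward_euler_propagation(Y0, weights, biases, h=0.1):
        """Propagate features Y0 through layers viewed as forward Euler
        steps of dY/dt = tanh(K(t) Y + b(t)).

        Each layer computes Y_{j+1} = Y_j + h * tanh(A_j Y_j + b_j),
        where A_j = (K_j - K_j^T)/2 is the antisymmetric part of K_j.
        Hypothetical sketch; the parameterization is an assumption made
        for illustration, not a definitive restatement of the paper.
        """
        Y = Y0
        for K, b in zip(weights, biases):
            # The antisymmetric part has purely imaginary eigenvalues, so the
            # continuous forward propagation neither blows up nor decays.
            A = 0.5 * (K - K.T)
            Y = Y + h * np.tanh(A @ Y + b)
        return Y

    # Example: 3 features, 20 layers, 5 samples stored as columns of Y.
    rng = np.random.default_rng(0)
    weights = [rng.standard_normal((3, 3)) for _ in range(20)]
    biases = [rng.standard_normal((3, 1)) for _ in range(20)]
    Y0 = rng.standard_normal((3, 5))
    print(forward_euler_propagation(Y0, weights, biases).shape)  # (3, 5)

In this reading, exploding or vanishing gradients correspond to instability or excessive damping of the discrete ODE, which is why constraining the spectrum of the layer transformation (here, via the antisymmetric part) can keep very deep propagation well behaved.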

Resource Type: