Harmonic analysis of deep convolutional neural networks
Authors
Thomas Wiatowski
Google Brain, Zurich, Switzerland, Nov. 2017 (invited talk).
Abstract
Many practical machine learning tasks employ very deep convolutional neural networks (CNNs). Such large network sizes pose formidable computational challenges in training and operating the network. It is therefore important to understand the impact of network topology and building blocks (convolution filters, non-linearities, and pooling operators) on the network's feature extraction capabilities. In this talk, we develop a mathematical theory of CNNs for feature extraction using concepts from applied harmonic analysis. We prove that the depth of the network determines the extent to which the extracted features are translation-invariant, and we establish deformation sensitivity bounds that apply to input signal classes such as band-limited functions, cartoon functions, and Lipschitz functions. Moreover, we characterize how fast the energy contained in the propagated signals (a.k.a. feature maps) decays across layers, and establish conditions guaranteeing that the extracted features are informative in the sense that the only input signal mapping to the all-zeros feature vector is the zero signal. Our results yield handy estimates of the number of layers needed for the feature vector to contain at least (1 − ε) · 100% of the input signal energy. Finally, we show how energy-absorbing networks of fixed (possibly small) depth can be designed. The talk represents joint work with H. Bölcskei and P. Grohs.
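To make the energy-propagation statement concrete, the following is a minimal numerical sketch in Python/NumPy of a scattering-type feature extractor: a bank of band-pass filters followed by a modulus non-linearity, with a low-pass filter producing the feature-vector entries at every layer. The Gaussian filter shapes, the chirp test signal, and the omission of pooling are all illustrative assumptions, not the construction analyzed in the talk; the one property carried over is a Bessel (frame upper) bound of at most one, which makes the extractor non-expansive, so the captured energy fraction is non-decreasing in depth and bounded by one.

```python
import numpy as np

N = 1024
omega = np.fft.fftfreq(N)  # normalized frequencies in [-0.5, 0.5)

def gauss(center, width):
    """Gaussian bump in the frequency domain, centered at `center`."""
    return np.exp(-0.5 * ((omega - center) / width) ** 2)

# One low-pass output filter plus a band-pass bank covering positive and
# negative frequencies (all shapes and placements are illustrative).
H_low = gauss(0.0, 0.02)
centers = [0.08, 0.20, 0.32, 0.44]
H_band = [gauss(c, 0.05) for c in centers] + [gauss(-c, 0.05) for c in centers]

# Rescale so that |H_low|^2 + sum_k |H_k|^2 <= 1 everywhere (Bessel bound
# at most one), which makes the feature extractor non-expansive.
bessel = np.abs(H_low) ** 2 + sum(np.abs(H) ** 2 for H in H_band)
scale = np.sqrt(bessel.max())
H_low = H_low / scale
H_band = [H / scale for H in H_band]

def filt(x, H):
    """Filter x by pointwise multiplication in the frequency domain."""
    return np.fft.ifft(np.fft.fft(x) * H)

# Unit-energy chirp as the test input.
t = np.arange(N)
x = np.cos(2 * np.pi * (0.05 * t + 5e-5 * t ** 2))
x = x / np.linalg.norm(x)

# Layer by layer: the feature vector collects the low-pass output of every
# propagated signal; the modulus non-linearity generates the next layer.
signals = [x.astype(complex)]
captured = 0.0
for layer in range(4):
    captured += sum(np.linalg.norm(filt(s, H_low)) ** 2 for s in signals)
    print(f"layers 0..{layer}: captured energy fraction = {captured:.4f}")
    if layer < 3:
        signals = [np.abs(filt(s, H)) for s in signals for H in H_band]
```

Running the sketch prints a captured-energy fraction that grows monotonically with depth, a toy analogue of the (1 − ε) · 100% energy estimates above. How quickly it approaches one depends on the lower frame bound of the filter bank, which is the kind of design freedom behind the fixed-depth energy-absorbing networks mentioned in the abstract.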