Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift

Sergey Ioffe, Christian Szegedy

Training Deep Neural Networks is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates and careful parameter initialization, and makes it notoriously hard to train models with saturating nonlinearities. We refer to this phenomenon as internal covariate shift, and address the problem by normalizing layer inputs. Our method draws its strength from making normalization a part of the model architecture and performing the normalization for each training mini-batch. Batch Normalization allows us to use much higher learning rates and be less careful about initialization. It also acts as a regularizer, in some cases eliminating the need for Dropout. Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin. Using an ensemble of batch-normalized networks, we improve upon the best published result on ImageNet classification: reaching 4.9% top-5 validation error (and 4.8% test error), exceeding the accuracy of human raters.
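To make the transform described in the abstract concrete, here is a minimal NumPy sketch of the forward pass: each feature is normalized with its mini-batch mean and variance, then scaled and shifted by learned parameters (the paper's gamma, beta, and epsilon). This is an illustrative sketch, not the authors' implementation, and it omits the running statistics used at inference time.

    import numpy as np

    def batch_norm_forward(x, gamma, beta, eps=1e-5):
        """Batch Normalization over a mini-batch x of shape (N, D)."""
        mu = x.mean(axis=0)                     # per-feature mini-batch mean
        var = x.var(axis=0)                     # per-feature mini-batch variance
        x_hat = (x - mu) / np.sqrt(var + eps)   # normalize to zero mean, unit variance
        return gamma * x_hat + beta             # learned scale and shift

    # Example: a mini-batch of 32 examples with 4 features.
    x = np.random.randn(32, 4) * 5.0 + 3.0
    gamma, beta = np.ones(4), np.zeros(4)
    y = batch_norm_forward(x, gamma, beta)
    print(y.mean(axis=0), y.std(axis=0))        # approximately 0 and 1 per feature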

  • Added 2 years ago by Yoshua Bengio

  • 3 recommenders

  • 4 Comments

Discussion

  • Yoshua Bengio rated 2 years ago

  • Public

One of the most exciting recent developments in neural network training strategies, but it would be nice to link it to other related work, e.g., natural gradient strategies.

  • Marc'Aurelio Ranzato rated 2 years ago

  • Public

I agree. Related to this, I find the more recent Natural Neural Networks (Guillaume Desjardins, Karen Simonyan, Razvan Pascanu, Koray Kavukcuoglu, http://arxiv.org/abs/1507.00210), which I am going to recommend next, much easier to understand. They also show even stronger results.

  • Soumith Chintala rated 2 years ago

  • Public

One of the most exciting papers of this year indeed. While Natural Neural Networks (PRONG, as they call it) looks like a much stronger theoretical result, their empirical results, while strong at first glance, are less impressive on closer inspection. On ImageNet, they do not report results for their method (PRONG) alone; they only report results for what they call PRONG+, a combination of their method and BatchNorm, which performs about the same as (or slightly worse than) BatchNorm.

  • Santosh Srivastava wrote 2 years ago

  • Public

I agree to some degree. Speaking to general interest in deep learning, I would like to see a paper that develops a theory and framework for accurate multi-step prediction of high-dimensional time series from multiple sources, something that has not been possible with other methods.