Recommend-Papers.org

Discuss papers and suggest them to specific venues

representation learning by denoising autoencoders for clustering-based classification

Moein Owhadi-Kareshk and Mohammad-R. Akbarzadeh-T.

Representation learning is a fast-growing approach in machine learning that aims to improve the quality of the input data, instead of relying on the design of complex subsequent learning algorithms. In this paper, we propose to use Denoising AutoEncoders (DAEs), one of the most effective representation learning methods, in Clustering-based Classification (CC). CC is a multi-class classification solution for large-scale and complicated data sets. In this approach, data are divided into small and simple clusters, each of which is described by a One-Class Classifier (OCC). In the proposed Representation Learning for Clustering-based Classification (RLCC), the new representation of each cluster is generated locally to increase the performance of the OCCs in terms of accuracy. The method still preserves scalability, one of the significant advantages of CC methods. RLCC is evaluated on six different data sets from UCI. The experimental results show that RLCC has higher generalization power than the standard version of CC.
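The mechanism behind RLCC is easy to sketch: corrupt a cluster's inputs with noise, train a small autoencoder to reconstruct the clean version, and hand the encoder's output to that cluster's OCC. A minimal PyTorch sketch, with illustrative layer sizes and noise level (not the paper's settings):

```python
import torch
import torch.nn as nn

class DenoisingAutoencoder(nn.Module):
    def __init__(self, n_in=64, n_hidden=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_in, n_hidden), nn.Sigmoid())
        self.decoder = nn.Sequential(nn.Linear(n_hidden, n_in), nn.Sigmoid())

    def forward(self, x, noise_std=0.3):
        x_noisy = x + noise_std * torch.randn_like(x)  # corrupt the input
        return self.decoder(self.encoder(x_noisy))     # reconstruct from corruption

dae = DenoisingAutoencoder()
opt = torch.optim.Adam(dae.parameters(), lr=1e-3)
x = torch.rand(128, 64)                        # stand-in for one cluster's data
for _ in range(200):
    loss = nn.functional.mse_loss(dae(x), x)   # target is the *clean* input
    opt.zero_grad(); loss.backward(); opt.step()
# dae.encoder(x) is the locally learned representation fed to that cluster's OCC
```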

  • Added 2 years ago by Anonymous Cursecrafter Boa

  • 0 Comments

spectral clustering-based classification

Moein Owhadi-Kareshk and Mohammad-R. Akbarzadeh-T.

Multi-class classification is a challenging problem in pattern recognition. Clustering-based Classification (CC) is one of the most effective classification methods: it first divides the data into several clusters, each cluster then being described by a One-Class Classifier (OCC). Scalability and accuracy are two key advantages of this clustering-enhanced approach. Continuing this strategy, in this paper we propose Spectral Clustering-based Classification (SCC). In contrast to many other clustering algorithms, Spectral Clustering (SC) aims to place the most mutually interconnected data points in one cluster, hence producing output clusters with smoother borders. A simpler border is easier for an OCC to describe, leading to higher accuracy. Application to seven UCI data sets of varying nature and size confirms this improved performance in terms of higher accuracy, while preserving the scalability property.
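A rough reconstruction of the SCC pipeline from off-the-shelf scikit-learn parts (a stand-in, not the authors' implementation): spectral clustering partitions the training data, one OCC is fit per cluster, and a test point is assigned to the cluster whose OCC scores it highest.

```python
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.svm import OneClassSVM
from sklearn.datasets import make_moons

X, _ = make_moons(n_samples=300, noise=0.05, random_state=0)

# 1) Spectral clustering groups mutually interconnected points,
#    yielding clusters with smoother borders.
labels = SpectralClustering(n_clusters=2, affinity="nearest_neighbors",
                            n_neighbors=10, random_state=0).fit_predict(X)

# 2) Describe each cluster with a One-Class Classifier.
occs = {c: OneClassSVM(gamma="scale", nu=0.1).fit(X[labels == c])
        for c in np.unique(labels)}

# 3) Assign a new point to the cluster whose OCC scores it highest.
def predict(x):
    scores = {c: occ.decision_function([x])[0] for c, occ in occs.items()}
    return max(scores, key=scores.get)

print(predict(X[0]))
```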

  • Added 2 years ago by Anonymous Cursecrafter Boa

  • 0 Comments

unsupervised representation learning with deep convolutional generative adversarial networks

Alec Radford, Luke Metz, Soumith Chintala

In recent years, supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications. Comparatively, unsupervised learning with CNNs has received less attention. In this work we hope to help bridge the gap between the success of CNNs for supervised learning and unsupervised learning. We introduce a class of CNNs called deep convolutional generative adversarial networks (DCGANs), that have certain architectural constraints, and demonstrate that they are a strong candidate for unsupervised learning. Training on various image datasets, we show convincing evidence that our deep convolutional adversarial pair learns a hierarchy of representations from object parts to scenes in both the generator and discriminator. Additionally, we use the learned features for novel tasks - demonstrating their applicability as general image representations.
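The DCGAN "architectural constraints" are concrete: replace pooling with strided (transposed) convolutions, use batch normalization in both networks, remove fully connected hidden layers, and use ReLU in the generator (tanh at the output) with LeakyReLU in the discriminator. A compressed sketch of a generator in that style (channel counts here are illustrative):

```python
import torch
import torch.nn as nn

# Generator: 100-d noise -> 64x64 image, following the DCGAN constraints.
G = nn.Sequential(
    nn.ConvTranspose2d(100, 256, 4, 1, 0, bias=False),  # 1x1 -> 4x4
    nn.BatchNorm2d(256), nn.ReLU(True),
    nn.ConvTranspose2d(256, 128, 4, 2, 1, bias=False),  # 4x4 -> 8x8
    nn.BatchNorm2d(128), nn.ReLU(True),
    nn.ConvTranspose2d(128, 64, 4, 2, 1, bias=False),   # 8x8 -> 16x16
    nn.BatchNorm2d(64), nn.ReLU(True),
    nn.ConvTranspose2d(64, 32, 4, 2, 1, bias=False),    # 16x16 -> 32x32
    nn.BatchNorm2d(32), nn.ReLU(True),
    nn.ConvTranspose2d(32, 3, 4, 2, 1, bias=False),     # 32x32 -> 64x64
    nn.Tanh(),                                          # pixels scaled to [-1, 1]
)

z = torch.randn(16, 100, 1, 1)   # a batch of noise vectors
print(G(z).shape)                # torch.Size([16, 3, 64, 64])
```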

deep speech: scaling up end-to-end speech recognition

Awni Hannun, Carl Case, Jared Casper, Bryan Catanzaro, Greg Diamos, Erich Elsen, Ryan Prenger, Sanjeev Satheesh, Shubho Sengupta, Adam Coates, Andrew Ng

We present a state-of-the-art speech recognition system developed using end-to-end deep learning. Our architecture is significantly simpler than traditional speech systems, which rely on laboriously engineered processing pipelines; these traditional systems also tend to perform poorly when used in noisy environments. In contrast, our system does not need hand-designed components to model background noise, reverberation, or speaker variation, but instead directly learns a function that is robust to such effects. We do not need a phoneme dictionary, nor even the concept of a "phoneme." Key to our approach is a well-optimized RNN training system that uses multiple GPUs, as well as a set of novel data synthesis techniques that allow us to efficiently obtain a large amount of varied data for training. Our system, called Deep Speech, outperforms previously published results on the widely studied Switchboard Hub5'00, achieving 16.0% error on the full test set. Deep Speech also handles challenging noisy environments better than widely used, state-of-the-art commercial speech systems.
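The "well-optimized RNN training system" is trained with the Connectionist Temporal Classification (CTC) loss, which marginalizes over all frame-to-character alignments; that is why no phoneme dictionary or hand alignment is needed. A minimal sketch using PyTorch's built-in CTC loss; the feature size, the GRU standing in for the paper's RNN, and the lengths are illustrative assumptions:

```python
import torch
import torch.nn as nn

T, N, C = 50, 4, 29          # frames, batch, characters (index 0 = CTC blank)
rnn = nn.GRU(input_size=161, hidden_size=128)   # stand-in for the paper's RNN
head = nn.Linear(128, C)

spectrogram = torch.randn(T, N, 161)                        # audio features
log_probs = head(rnn(spectrogram)[0]).log_softmax(dim=-1)   # (T, N, C)

targets = torch.randint(1, C, (N, 12))          # character indices per utterance
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.full((N,), 12, dtype=torch.long)

# CTC sums over all frame-level alignments of each character sequence.
loss = nn.CTCLoss(blank=0)(log_probs, targets, input_lengths, target_lengths)
loss.backward()
```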

  • Added 2 years ago by Anonymous Daisybrow Lady

  • 0 Comments

why regularized auto-encoders learn sparse representation?

Devansh Arpit, Yingbo Zhou, Hung Ngo, Venu Govindaraju

Although a number of auto-encoder models enforce sparsity explicitly in their learned representation while others do not, there has been little formal analysis of what encourages sparsity in these models in general. Our objective here is therefore to formally study this general problem for regularized auto-encoders. We show that both the regularization and the activation function play an important role in encouraging sparsity. We provide sufficient conditions on both these criteria and show that multiple popular models (such as the De-noising and Contractive auto-encoders) and activations (such as the Rectified Linear unit and Sigmoid) satisfy these conditions, thus explaining the sparsity in their learned representations. Together, our theoretical and empirical analyses throw light on the properties of regularization and activation that are conducive to sparsity. As a by-product of the insights gained from our analysis, we also propose a new activation function that overcomes the individual drawbacks of multiple existing activations (in terms of sparsity) and hence performs on par with, or better than, the best-performing activation for all auto-encoder models discussed.

importance weighted autoencoders

Yuri Burda, Roger Grosse, Ruslan Salakhutdinov

The variational autoencoder (VAE; Kingma, Welling (2014)) is a recently proposed generative model pairing a top-down generative network with a bottom-up recognition network which approximates posterior inference. It makes two strong assumptions about posterior inference: that the posterior distribution is approximately factorial, and that its parameters can be approximated with nonlinear regression from the observations. As we show empirically, the VAE objective can lead to overly simplified representations which fail to use the network's entire modeling capacity. We present the importance weighted autoencoder (IWAE), a generative model with the same architecture as the VAE, but which uses a strictly tighter log-likelihood lower bound derived from importance weighting. In the IWAE, the recognition network uses multiple samples to approximate the posterior, giving it increased flexibility to model complex posteriors which do not fit the VAE modeling assumptions. We show empirically that IWAEs learn richer latent space representations than VAEs, leading to improved test log-likelihood on density estimation benchmarks.
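The tightened bound is easy to state: with k importance weights w_i = p(x, h_i) / q(h_i | x), the IWAE objective is E[log (1/k) Σ_i w_i], which equals the standard ELBO at k = 1 and approaches log p(x) as k grows. A numerically stable sketch of the estimator, assuming the log-weights are already computed:

```python
import math
import torch

def iwae_bound(log_w):
    """log_w: (k, batch) log importance weights,
    log w_i = log p(x, h_i) - log q(h_i | x) for k posterior samples."""
    k = log_w.shape[0]
    # log (1/k) sum_i w_i, computed stably via logsumexp
    return (torch.logsumexp(log_w, dim=0) - math.log(k)).mean()

log_w = torch.randn(5, 16)   # k = 5 samples for a batch of 16
print(iwae_bound(log_w))     # strictly tighter than the k = 1 ELBO on average
```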

high-dimensional continuous control using generalized advantage estimation

John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, Pieter Abbeel

This paper is concerned with developing policy gradient methods that gracefully scale up to challenging problems with high-dimensional state and action spaces. Towards this end, we develop a scheme that uses value functions to substantially reduce the variance of policy gradient estimates, while introducing a tolerable amount of bias. This scheme, which we call generalized advantage estimation (GAE), involves using a discounted sum of temporal difference residuals as an estimate of the advantage function, and can be interpreted as a type of automated cost shaping. It is simple to implement and can be used with a variety of policy gradient methods and value function approximators. Along with this variance-reduction scheme, we use trust region algorithms to optimize the policy and value function, both represented as neural networks. We present experimental results on a number of highly challenging 3D locomotion tasks, where our approach learns complex gaits for bipedal and quadrupedal simulated robots. We also learn controllers for the biped getting up off the ground. In contrast to prior work that uses hand-crafted low-dimensional policy representations, our neural network policies map directly from raw kinematics to joint torques.
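The estimator itself is a few lines: with TD residuals δ_t = r_t + γV(s_{t+1}) − V(s_t), the generalized advantage is Â_t = Σ_l (γλ)^l δ_{t+l}, computed in a single backward pass. A sketch in numpy (the array shapes are assumptions):

```python
import numpy as np

def gae(rewards, values, gamma=0.99, lam=0.95):
    """rewards: (T,); values: (T+1,), including the bootstrap value V(s_T)."""
    T = len(rewards)
    deltas = rewards + gamma * values[1:] - values[:-1]   # TD residuals
    adv = np.zeros(T)
    running = 0.0
    for t in reversed(range(T)):                # A_t = delta_t + gamma*lam*A_{t+1}
        running = deltas[t] + gamma * lam * running
        adv[t] = running
    return adv

adv = gae(np.random.rand(100), np.random.rand(101))
# lam=0 recovers the one-step TD advantage (low variance, more bias);
# lam=1 recovers the Monte Carlo advantage (high variance, no extra bias).
print(adv[:3])
```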

early stopping is nonparametric variational inference

Dougal Maclaurin, David Duvenaud, Ryan P. Adams

We show that unconverged stochastic gradient descent can be interpreted as a procedure that samples from a nonparametric variational approximate posterior distribution. This distribution is implicitly defined as the transformation of an initial distribution by a sequence of optimization updates. By tracking the change in entropy over this sequence of transformations during optimization, we form a scalable, unbiased estimate of the variational lower bound on the log marginal likelihood. We can use this bound to optimize hyperparameters instead of using cross-validation. This Bayesian interpretation of SGD suggests improved, overfitting-resistant optimization procedures, and gives a theoretical foundation for popular tricks such as early stopping and ensembling. We investigate the properties of this marginal likelihood estimator on neural network models.

dropout as data augmentation

Kishore Konda, Xavier Bouthillier, Roland Memisevic, Pascal Vincent

We show that using dropout in a network can be interpreted as a kind of data augmentation without domain knowledge. We present an approach to projecting the dropout noise within a network back into the input space, thereby generating augmented versions of the training data, and we show that training a deterministic network on the augmented samples yields similar results. We furthermore propose an explanation for the increased sparsity levels that can be observed in networks trained with dropout and show how this is related to data augmentation. Finally, we detail a random dropout noise scheme based on our observations and show that it improves dropout results without adding significant computational cost.

scalable multi-neighborhood learning for convolutional networks

Elnaz Barshan, Paul Fieguth, Alexander Wong

In this paper we explore the role of scale for improved feature learning in convolutional networks. We propose multi-neighborhood convolutional networks, designed to learn image features at different levels of detail. Utilizing nonlinear scale-space models, the proposed multi-neighborhood model can effectively capture fine-scale image characteristics (i.e., appearance) using a small-size neighborhood, while coarse-scale image structures (i.e., shape) are detected through a larger neighborhood. In addition, we introduce a scalable learning method for the proposed multi-neighborhood architecture and show how one can use an already-trained single-scale network to extract image features at multiple levels of detail. The experimental results demonstrate the superior performance of the proposed multi-scale multi-neighborhood models over their single-scale counterparts without an increase in training cost.
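One plausible reading of the multi-neighborhood idea as code (the kernel sizes and channel counts are my assumptions, not the paper's architecture): parallel convolutions with small and large neighborhoods, concatenated so fine-scale appearance and coarse-scale shape are captured side by side.

```python
import torch
import torch.nn as nn

class MultiNeighborhoodConv(nn.Module):
    def __init__(self, in_ch=3, out_ch=16):
        super().__init__()
        # small neighborhood: fine-scale appearance
        self.fine = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        # large neighborhood: coarse-scale shape
        self.coarse = nn.Conv2d(in_ch, out_ch, kernel_size=9, padding=4)

    def forward(self, x):
        return torch.relu(torch.cat([self.fine(x), self.coarse(x)], dim=1))

x = torch.randn(1, 3, 32, 32)
print(MultiNeighborhoodConv()(x).shape)   # torch.Size([1, 32, 32, 32])
```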

learning context-free grammars: capabilities and limitations of a recurrent neural network with an external stack memory

Sreerupa Das, C. Lee Giles, Guo-Zheng Sun

This work describes an approach for inferring Deterministic Context-free (DCF) Grammars in a Connectionist paradigm using a Recurrent Neural Network Pushdown Automaton (NNPDA). The NNPDA consists of a recurrent neural network connected to an external stack memory through a common error function. We show that the NNPDA is able to learn the dynamics of an underlying pushdown automaton from examples of grammatical and non-grammatical strings. Not only does the network learn the state transitions in the automaton, it also learns the actions required to control the stack. In order to use continuous optimization methods, we develop an analog stack which reverts to a discrete stack by quantization of all activations, after the network has learned the transition rules and stack actions. We further show an enhancement of the network's learning capabilities by providing hints. In addition, an initial comparative study of simulations with first-, second- and third-order recurrent networks has shown that the increased degree of freedom in higher-order networks improves generalization but not necessarily learning speed.

learning to transduce with unbounded memory

Edward Grefenstette, Karl Moritz Hermann, Mustafa Suleyman, Phil Blunsom

Recently, strong results have been demonstrated by Deep Recurrent Neural Networks on natural language transduction problems. In this paper we explore the representational power of these models using synthetic grammars designed to exhibit phenomena similar to those found in real transduction problems such as machine translation. These experiments lead us to propose new memory-based recurrent networks that implement continuously differentiable analogues of traditional data structures such as Stacks, Queues, and DeQues. We show that these architectures exhibit superior generalisation performance to Deep RNNs and are often able to learn the underlying generating algorithms in our transduction experiments.

decoupled deep neural network for semi-supervised semantic segmentation

Seunghoon Hong, Hyeonwoo Noh, Bohyung Han

We propose a novel deep neural network architecture for semi-supervised semantic segmentation using heterogeneous annotations. Contrary to existing approaches that pose semantic segmentation as a single task of region-based classification, our algorithm decouples classification and segmentation, and learns a separate network for each task. In this architecture, the labels associated with an image are identified by the classification network, and binary segmentation is subsequently performed for each identified label by the segmentation network. The decoupled architecture enables us to learn the classification and segmentation networks separately, based on training data with image-level and pixel-wise class labels, respectively. It also makes it possible to effectively reduce the search space for segmentation by exploiting class-specific activation maps obtained from bridging layers. Our algorithm shows outstanding performance compared to other semi-supervised approaches, even with far fewer strongly annotated training images, on the PASCAL VOC dataset.

learning to see by moving

Pulkit Agrawal, Joao Carreira, Jitendra Malik

The dominant paradigm for feature learning in computer vision relies on training neural networks for the task of object recognition using millions of hand-labelled images. Is it possible to learn useful features for a diverse set of visual tasks using any other form of supervision? In biology, living organisms developed the ability of visual perception for the purpose of moving and acting in the world. Drawing inspiration from this observation, in this work we investigate whether awareness of egomotion can be used as a supervisory signal for feature learning. As opposed to the knowledge of class labels, information about egomotion is freely available to mobile agents. We show that, given the same number of training images, features learnt using egomotion as supervision compare favourably to features learnt using class labels as supervision on the visual tasks of scene recognition, object recognition, visual odometry and keypoint matching.

deep recurrent q-learning for partially observable mdps

Matthew Hausknecht, Peter Stone

Deep Reinforcement Learning has yielded proficient controllers for complex tasks. However, these controllers have limited memory and rely on being able to perceive the complete game screen at each decision point. To address these shortcomings, this article investigates the effects of adding recurrency to a Deep Q-Network (DQN) by replacing the first post-convolutional fully-connected layer with a recurrent LSTM. The resulting Deep Recurrent Q-Network (DRQN), although capable of seeing only a single frame at each timestep, successfully integrates information through time and replicates DQN's performance on standard Atari games and partially observed equivalents featuring flickering game screens. Additionally, when trained with partial observations and evaluated with incrementally more complete observations, DRQN's performance scales as a function of observability. Conversely, when trained with full observations and evaluated with partial observations, DRQN's performance degrades less than DQN's. Thus, given the same length of history, recurrency is a viable alternative to stacking a history of frames in the DQN's input layer and while recurrency confers no systematic advantage when learning to play the game, the recurrent net can better adapt at evaluation time if the quality of observations changes.

neural transformation machine: a new architecture for sequence-to-sequence learning

Fandong Meng, Zhengdong Lu, Zhaopeng Tu, Hang Li, Qun Liu

We propose Neural Transformation Machine (NTram), a novel architecture for sequence-to-sequence learning, which performs the task through a series of nonlinear transformations from the representation of the input sequence (e.g., a Chinese sentence) to the final output sequence (e.g., its translation to English). Inspired by the recent Neural Turing Machines [8], we store the intermediate representations in stacked layers of memories, and use read-write operations on the memories to realize the nonlinear transformations of those representations. The transformations are designed in advance but the parameters are learned from data. Through layer-by-layer transformations, NTram can model the complicated relations necessary for applications such as machine translation between distant languages. The architecture can be trained with normal back-propagation on parallel texts, and the learning can easily be scaled up to a large corpus. NTram is broad enough to subsume the state-of-the-art neural translation model in [2] as a special case, while significantly improving upon that model with its deeper architecture. Remarkably, NTram, being purely neural network-based, can achieve performance comparable to the traditional phrase-based machine translation system (Moses) with a small vocabulary and a modest parameter size.

character-aware neural language models

Yoon Kim, Yacine Jernite, David Sontag, Alexander M. Rush

We describe a simple neural language model that relies only on character-level inputs. Predictions are still made at the word-level. Our model employs a convolutional neural network (CNN) over characters, whose output is given to a long short-term memory (LSTM) recurrent neural network language model (RNN-LM). On the English Penn Treebank the model is on par with the existing state-of-the-art despite having 60% fewer parameters. On languages with rich morphology (Czech, German, French, Spanish, Russian), the model consistently outperforms a Kneser-Ney baseline (by 30-35%) and a word-level LSTM baseline (by 15-25%), again with far fewer parameters. Our results suggest that on many languages, character inputs are sufficient for language modeling.

training recurrent networks online without backtracking

Yann Ollivier, Guillaume Charpiat

We introduce the "NoBackTrack" algorithm to train the parameters of dynamical systems such as recurrent neural networks. This algorithm works in an online, memoryless setting, thus requiring no backpropagation through time, and is scalable, avoiding the large computational and memory cost of maintaining the full gradient of the current state with respect to the parameters. The algorithm essentially maintains, at each time, a single search direction in parameter space. The evolution of this search direction is partly stochastic and is constructed in such a way to provide, at every time, an unbiased random estimate of the gradient of the loss function with respect to the parameters. Because the gradient estimate is unbiased, on average over time the parameter is updated as it should. The resulting gradient estimate can then be fed to a lightweight Kalman-like filter to yield an improved algorithm. For recurrent neural networks, the resulting algorithms scale linearly with the number of parameters. Preliminary tests on a simple task show that the stochastic approximation of the gradient introduced in the algorithm does not seem to introduce too much noise in the trajectory, compared to maintaining the full gradient, and confirm the good performance and scalability of the Kalman-like version of NoBackTrack.

beating the perils of non-convexity: guaranteed training of neural networks using tensor methods

Majid Janzamin, Hanie Sedghi, Anima Anandkumar

Training neural networks is a challenging non-convex optimization problem, and backpropagation or gradient descent can get stuck in spurious local optima. We propose a novel algorithm based on tensor decomposition for training a two-layer neural network. We prove efficient risk bounds for our proposed method, with a polynomial sample complexity in the relevant parameters, such as input dimension and number of neurons. While learning arbitrary target functions is NP-hard, we provide transparent conditions on the function and the input for generalizability. Our training method is based on tensor decomposition, which provably converges to the global optimum, under a set of mild non-degeneracy conditions. It consists of simple embarrassingly parallel linear and multi-linear operations, and is competitive with standard stochastic gradient descent (SGD), in terms of computational complexity. Thus, we have a computationally efficient method with guaranteed risk bounds for training neural networks with general non-linear activations.

pointer networks

Oriol Vinyals, Meire Fortunato, Navdeep Jaitly

We introduce a new neural architecture to learn the conditional probability of an output sequence with elements that are discrete tokens corresponding to positions in an input sequence. Such problems cannot be trivially addressed by existing approaches such as sequence-to-sequence and Neural Turing Machines, because the number of target classes in each step of the output depends on the length of the input, which is variable. Problems such as sorting variable sized sequences, and various combinatorial optimization problems belong to this class. Our model solves the problem of variable size output dictionaries using a recently proposed mechanism of neural attention. It differs from previous attention attempts in that, instead of using attention to blend hidden units of an encoder to a context vector at each decoder step, it uses attention as a pointer to select a member of the input sequence as the output. We call this architecture a Pointer Net (Ptr-Net). We show Ptr-Nets can be used to learn approximate solutions to three challenging geometric problems -- finding planar convex hulls, computing Delaunay triangulations, and the planar Travelling Salesman Problem -- using training examples alone. Ptr-Nets not only improve over sequence-to-sequence with input attention, but also allow us to generalize to variable size output dictionaries. We show that the learnt models generalize beyond the maximum lengths they were trained on. We hope our results on these tasks will encourage a broader exploration of neural learning for discrete problems.
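The core change from standard attention is one line: the attention distribution itself becomes the output. A sketch of one Ptr-Net decoding step using the paper's additive scoring form (the variable names and dimensions are mine):

```python
import torch
import torch.nn as nn

d = 32                       # hidden size
W1 = nn.Linear(d, d, bias=False)
W2 = nn.Linear(d, d, bias=False)
v = nn.Linear(d, 1, bias=False)

enc = torch.randn(10, d)     # encoder states e_1..e_10 for a 10-element input
dec = torch.randn(d)         # current decoder state

# Additive attention scores u_j = v^T tanh(W1 e_j + W2 d)
scores = v(torch.tanh(W1(enc) + W2(dec))).squeeze(-1)   # (10,)

# Standard attention would blend enc with softmax(scores) into a context
# vector; a Ptr-Net instead uses softmax(scores) directly as the output
# distribution over *input positions*.
pointer = torch.softmax(scores, dim=0)
print(pointer.argmax())      # index of the selected input element
```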

neural turing machines

Alex Graves, Greg Wayne, Ivo Danihelka

We extend the capabilities of neural networks by coupling them to external memory resources, which they can interact with by attentional processes. The combined system is analogous to a Turing Machine or Von Neumann architecture but is differentiable end-to-end, allowing it to be efficiently trained with gradient descent. Preliminary results demonstrate that Neural Turing Machines can infer simple algorithms such as copying, sorting, and associative recall from input and output examples.
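As one concrete piece of those attentional processes, the NTM's content-based addressing compares a key emitted by the controller against each memory row by cosine similarity and sharpens with a strength β: w_i = softmax(β · cos(k, M_i)). A numpy sketch:

```python
import numpy as np

def content_address(M, k, beta):
    """M: (rows, width) memory, k: (width,) key, beta: sharpening strength."""
    cos = (M @ k) / (np.linalg.norm(M, axis=1) * np.linalg.norm(k) + 1e-8)
    e = np.exp(beta * cos)
    return e / e.sum()            # attention weights over memory rows

M = np.random.randn(128, 20)                  # 128 memory slots of width 20
k = M[7] + 0.1 * np.random.randn(20)          # noisy key resembling row 7
w = content_address(M, k, beta=10.0)
print(w.argmax())   # -> 7: reads and writes concentrate on the matching row
```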

a neural algorithm of artistic style

Leon A. Gatys, Alexander S. Ecker, Matthias Bethge

In fine art, especially painting, humans have mastered the skill to create unique visual experiences through composing a complex interplay between the content and style of an image. Thus far the algorithmic basis of this process is unknown and there exists no artificial system with similar capabilities. However, in other key areas of visual perception such as object and face recognition near-human performance was recently demonstrated by a class of biologically inspired vision models called Deep Neural Networks. Here we introduce an artificial system based on a Deep Neural Network that creates artistic images of high perceptual quality. The system uses neural representations to separate and recombine content and style of arbitrary images, providing a neural algorithm for the creation of artistic images. Moreover, in light of the striking similarities between performance-optimised artificial neural networks and biological vision, our work offers a path forward to an algorithmic understanding of how humans create and perceive artistic imagery.
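The separation of content and style rests on two simple per-layer quantities: content is matched on the raw CNN feature maps, style on their Gram matrix G_ij = Σ_k F_ik F_jk, which discards spatial layout and keeps feature co-occurrence statistics. A numpy sketch of the two layer losses:

```python
import numpy as np

def gram(F):
    """F: (channels, height*width) feature maps of one CNN layer."""
    return F @ F.T               # channel co-occurrence statistics

def content_loss(F, F_target):
    return 0.5 * np.sum((F - F_target) ** 2)

def style_loss(F, F_style):
    C, N = F.shape
    return np.sum((gram(F) - gram(F_style)) ** 2) / (4 * C**2 * N**2)

F = np.random.randn(64, 1024)    # generated image's features at some layer
print(content_loss(F, np.random.randn(64, 1024)),
      style_loss(F, np.random.randn(64, 1024)))
# The artistic image is synthesized by gradient descent on a weighted sum
# of these losses with respect to the image pixels themselves.
```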

  • Added 2 years ago by Roland Memisevic

  • 1 recommender

  • 1 Comment

ask me anything: dynamic memory networks for natural language processing

Ankit Kumar, Ozan Irsoy, Jonathan Su, James Bradbury, Robert English, Brian Pierce, Peter Ondruska, Mohit Iyyer, Ishaan Gulrajani, Richard Socher

Most tasks in natural language processing can be cast into question answering (QA) problems over language input. We introduce the dynamic memory network (DMN), a unified neural network framework which processes input sequences and questions, forms semantic and episodic memories, and generates relevant answers. Questions trigger an iterative attention process which allows the model to condition its attention on the result of previous iterations. These results are then reasoned over in a hierarchical recurrent sequence model to generate answers. The DMN can be trained end-to-end and obtains state of the art results on several types of tasks and datasets: question answering (Facebook's bAbI dataset), sequence modeling for part of speech tagging (WSJ-PTB), and text classification for sentiment analysis (Stanford Sentiment Treebank). The model relies exclusively on trained word vector representations and requires no string matching or manually engineered features.

on pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation

Sebastian Bach, Alexander Binder, Grégoire Montavon, Frederick Klauschen, Klaus-Robert Müller, Wojciech Samek

Understanding and interpreting the classification decisions of automated image classification systems is of high value in many applications, as it allows one to verify the reasoning of the system and provides additional information to the human expert. Although machine learning methods solve a plethora of tasks very successfully, in most cases they have the disadvantage of acting as a black box, providing no information about what made them arrive at a particular decision. This work proposes a general solution to the problem of understanding classification decisions by pixel-wise decomposition of nonlinear classifiers. We introduce a methodology that allows us to visualize the contributions of single pixels to predictions for kernel-based classifiers over Bag of Words features and for multilayered neural networks. These pixel contributions can be visualized as heatmaps and are provided to a human expert, who can not only intuitively verify the validity of the classification decision, but also focus further analysis on regions of potential interest. We evaluate our method on classifiers trained on PASCAL VOC 2009 images, synthetic image data containing geometric shapes, the MNIST handwritten digits data set, and the pre-trained ImageNet model available as part of the Caffe open source package.
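For a linear layer, the basic LRP rule redistributes a neuron's relevance to its inputs in proportion to their contributions z_ij = a_i w_ij, with a small ε stabilizer in the denominator. A numpy sketch of one layer of the ε-rule (the paper also discusses other propagation variants):

```python
import numpy as np

def lrp_linear(a, W, b, R_out, eps=1e-6):
    """Propagate relevance R_out from a layer's outputs back to its inputs.
    a: (n_in,) activations, W: (n_in, n_out), b: (n_out,), R_out: (n_out,)."""
    z = a[:, None] * W                      # contributions z_ij = a_i * w_ij
    denom = z.sum(axis=0) + b               # output pre-activations
    denom = denom + eps * np.sign(denom)    # epsilon stabilizer
    return (z * (R_out / denom)[None, :]).sum(axis=1)

a = np.random.rand(4)
W, b = np.random.randn(4, 3), np.random.randn(3)
R_in = lrp_linear(a, W, b, R_out=np.array([0.2, 0.5, 0.3]))
print(R_in)   # applied layer by layer down to the input, this yields a heatmap
```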

stochasticnet: forming deep neural networks via stochastic connectivity

Mohammad Javad Shafiee, Parthipan Siva, Alexander Wong

Deep neural networks are a branch of machine learning that has seen a meteoric rise in popularity, owing to their powerful ability to represent and model high-level abstractions in highly complex data. One area in deep neural networks that is ripe for exploration is neural connectivity formation. A pivotal study on the brain tissue of rats found that synaptic formation for specific functional connectivity in neocortical neural microcircuits can be surprisingly well modeled and predicted as a random formation. Motivated by this intriguing finding, we introduce the concept of StochasticNet, where deep neural networks are formed via stochastic connectivity between neurons. Such stochastic synaptic formation in a deep neural network architecture can potentially allow for efficient utilization of neurons for performing specific tasks. To evaluate the feasibility of such a deep neural network architecture, we train a StochasticNet using three image datasets. Experimental results show that a StochasticNet can be formed that provides comparable accuracy and reduced overfitting compared to conventional deep neural networks with more than two times the number of neural connections.

  • Added 2 years ago by Mohammad Javad Shafiee

  • 0 Comments

nonlinear hebbian learning as a universal principle in unsupervised feature learning

Carlos Brito, Wulfram Gerstner

Successful representation learning models appear to develop strikingly similar features to each other, raising the prospect of a fundamental underlying principle. We show that nonlinear Hebbian learning gives a parsimonious account for feature learning, underlying models such as sparse coding, neural networks and independent component analysis. For all datasets considered, the most hyper-Gaussian features are learned irrespective of the effective nonlinearity of the model. Particularly, it explains why Gabor filters are ubiquitously developed for image inputs. Our results reveal that feature learning is robust to normative assumptions, exposing a large class of models with comparable learning properties.
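The learning rule in question is plain: Δw ∝ f(wᵀx)·x with the weights kept normalized, i.e. classic Hebbian learning passed through a nonlinearity f. A numpy sketch; the cubic f and the toy data are illustrative choices, not the paper's experiments:

```python
import numpy as np

def nonlinear_hebbian(X, f=lambda y: y**3, lr=0.01, epochs=50):
    """X: (n_samples, n_dim); f: the effective nonlinearity of the model."""
    w = np.random.randn(X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(epochs):
        for x in X:
            y = w @ x
            w += lr * f(y) * x          # Hebbian update through nonlinearity f
            w /= np.linalg.norm(w)      # weight normalization
    return w

# With a heavy-tailed (hyper-Gaussian) direction hidden among Gaussian ones,
# the rule tends to converge toward the most hyper-Gaussian feature.
X = np.random.randn(2000, 5)
X[:, 0] = np.random.laplace(size=2000)          # heavy-tailed component
X = X @ np.linalg.qr(np.random.randn(5, 5))[0]  # hide it with a rotation
print(nonlinear_hebbian(X))
```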

finding function in form: compositional character models for open vocabulary word representation

Wang Ling, Tiago Luís, Luís Marujo, Ramón Fernandez Astudillo, Silvio Amir, Chris Dyer, Alan W. Black, Isabel Trancoso

We introduce a model for constructing vector representations of words by composing characters using bidirectional LSTMs. Relative to traditional word representation models that have independent vectors for each word type, our model requires only a single vector per character type and a fixed set of parameters for the compositional model. Despite the compactness of this model and, more importantly, the arbitrary nature of the form-function relationship in language, our "composed" word representations yield state-of-the-art results in language modeling and part-of-speech tagging. Benefits over traditional baselines are particularly pronounced in morphologically rich languages (e.g., Turkish).

vqa: visual question answering

Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C. Lawrence Zitnick, Devi Parikh

We propose the task of free-form and open-ended Visual Question Answering (VQA). Given an image and a natural language question about the image, the task is to provide an accurate natural language answer. Mirroring many real-world scenarios, such as helping the visually impaired, both the questions and answers are open-ended. Visual questions selectively target different areas of an image, including background details and underlying context. As a result, a system that succeeds at VQA typically needs a more detailed understanding of the image and complex reasoning than a system producing generic image captions. Moreover, VQA is amenable to automatic evaluation, since many open-ended answers contain only a few words or a closed set of answers that can be provided in a multiple-choice format. We provide a dataset containing 100,000's of images and questions and discuss the information it provides. Numerous baselines for VQA are provided and compared with human performance.

reinforcement learning neural turing machines

Wojciech Zaremba, Ilya Sutskever

The expressive power of a machine learning model is closely related to the number of sequential computational steps it can learn. For example, Deep Neural Networks have been more successful than shallow networks because they can perform a greater number of sequential computational steps (each highly parallel). The Neural Turing Machine (NTM) is a model that can compactly express an even greater number of sequential computational steps, so it is even more powerful than a DNN. Its memory addressing operations are designed to be differentiable; thus the NTM can be trained with backpropagation. While differentiable memory is relatively easy to implement and train, it necessitates accessing the entire memory content at each computational step. This makes it difficult to implement a fast NTM. In this work, we use the Reinforce algorithm to learn where to access the memory, while using backpropagation to learn what to write to the memory. We call this model the RL-NTM. Reinforce allows our model to access a constant number of memory cells at each computational step, so its implementation can be faster. The RL-NTM is the first model that can, in principle, learn programs of unbounded running time. We successfully trained the RL-NTM to solve a number of algorithmic tasks that are simpler than the ones solvable by the fully differentiable NTM. As the RL-NTM is a fairly intricate model, we needed a method for verifying the correctness of our implementation. To do so, we developed a simple technique for numerically checking arbitrary implementations of models that use Reinforce, which may be of independent interest.

end-to-end memory networks

Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, Rob Fergus

We introduce a neural network with a recurrent attention model over a possibly large external memory. The architecture is a form of Memory Network but unlike the model in that work, it is trained end-to-end, and hence requires significantly less supervision during training, making it more generally applicable in realistic settings. It can also be seen as an extension of RNNsearch to the case where multiple computational steps (hops) are performed per output symbol. The flexibility of the model allows us to apply it to tasks as diverse as (synthetic) question answering and to language modeling. For the former our approach is competitive with Memory Networks, but with less supervision. For the latter, on the Penn TreeBank and Text8 datasets our approach demonstrates slightly better performance than RNNs and LSTMs. In both cases we show that the key concept of multiple computational hops yields improved results.
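One memory "hop" is just soft attention over embedded memories followed by a weighted sum, and hops can be stacked. A numpy sketch of a single hop, assuming the embeddings are already computed:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def memory_hop(u, M_in, M_out):
    """u: (d,) question embedding; M_in/M_out: (n_mem, d) memories embedded
    with the input and output embedding matrices."""
    p = softmax(M_in @ u)   # attention over memories: p_i = softmax(u . m_i)
    o = p @ M_out           # response vector: o = sum_i p_i c_i
    return u + o            # becomes the next hop's query

d, n_mem = 20, 50
u = np.random.randn(d)
u2 = memory_hop(u, np.random.randn(n_mem, d), np.random.randn(n_mem, d))
# After the final hop, an answer is predicted from the resulting state,
# e.g. softmax(W @ u_final) over the vocabulary.
print(u2.shape)
```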

towards neural network-based reasoning

Baolin Peng, Zhengdong Lu, Hang Li, Kam-Fai Wong

We propose Neural Reasoner, a framework for neural network-based reasoning over natural language sentences. Given a question, Neural Reasoner can infer over multiple supporting facts and find an answer to the question in specific forms. Neural Reasoner has 1) a specific interaction-pooling mechanism, allowing it to examine multiple facts, and 2) a deep architecture, allowing it to model the complicated logical relations in reasoning tasks. Assuming no particular structure exists in the question and facts, Neural Reasoner is able to accommodate different types of reasoning and different forms of language expressions. Despite the model complexity, Neural Reasoner can still be trained effectively in an end-to-end manner. Our empirical studies show that Neural Reasoner can outperform existing neural reasoning systems with remarkable margins on two difficult artificial tasks (Positional Reasoning and Path Finding) proposed in [8]. For example, it improves the accuracy on Path Finding(10K) from 33.4% [6] to over 98%.

bayesian dark knowledge

Anoop Korattikara, Vivek Rathod, Kevin Murphy, Max Welling

We consider the problem of Bayesian parameter estimation for deep neural networks, which is important in problem settings where we may have little data, and/or where we need accurate posterior predictive densities, e.g., for applications involving bandits or active learning. One simple approach to this is to use online Monte Carlo methods, such as SGLD (stochastic gradient Langevin dynamics). Unfortunately, such a method needs to store many copies of the parameters (which wastes memory), and needs to make predictions using many versions of the model (which wastes time). We describe a method for "distilling" a Monte Carlo approximation to the posterior predictive density into a more compact form, namely a single deep neural network. We compare to two very recent approaches to Bayesian neural networks, namely an approach based on expectation propagation [Hernandez-Lobato and Adams, 2015] and an approach based on variational Bayes [Blundell et al., 2015]. Our method performs better than both of these, is much simpler to implement, and uses less computation at test time.

  • Added 2 years ago by Hugo Larochelle

  • 1 recommender

  • 1 Comment

dropout as a bayesian approximation: representing model uncertainty in deep learning

Yarin Gal, Zoubin Ghahramani

Deep learning tools have recently gained much attention in applied machine learning. However such tools for regression and classification do not allow us to capture model uncertainty. Bayesian models offer us the ability to reason about model uncertainty, but usually come with a prohibitive computational cost. We show that dropout in multilayer perceptron models (MLPs) can be interpreted as a Bayesian approximation. Results are obtained for modelling uncertainty for dropout MLP models - extracting information that has been thrown away so far, from existing models. This mitigates the problem of representing uncertainty in deep learning without sacrificing computational performance or test accuracy. We perform an exploratory study of the dropout uncertainty properties. Various network architectures and non-linearities are assessed on tasks of extrapolation, interpolation, and classification. We show that model uncertainty is important for classification tasks using MNIST as an example, and use the model's uncertainty in a Bayesian pipeline, with deep reinforcement learning as a concrete example.
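In practice the recipe is short: keep dropout active at test time, run T stochastic forward passes, and read model uncertainty off the spread of the predictions. A PyTorch sketch (the network and T are illustrative):

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 50), nn.ReLU(), nn.Dropout(p=0.5),
                    nn.Linear(50, 1))

def mc_dropout_predict(net, x, T=100):
    net.train()                       # keep dropout *on* at test time
    with torch.no_grad():
        samples = torch.stack([net(x) for _ in range(T)])
    # Monte Carlo estimates of the predictive mean and model uncertainty
    return samples.mean(0), samples.std(0)

x = torch.linspace(-3, 3, 20).unsqueeze(1)
mean, std = mc_dropout_predict(net, x)
print(std.squeeze())   # per-input uncertainty from the dropout ensemble
```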

towards ai-complete question answering: a set of prerequisite toy tasks

Jason Weston, Antoine Bordes, Sumit Chopra, Tomas Mikolov, Alexander M. Rush

One long-term goal of machine learning research is to produce methods that are applicable to reasoning and natural language, in particular building an intelligent dialogue agent. To measure progress towards that goal, we argue for the usefulness of a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many more. The tasks are designed to be prerequisites for any system that aims to be capable of conversing with a human. We believe that many existing learning systems currently cannot solve them, and hence our aim is to classify these tasks into skill sets, so that researchers can identify (and then rectify) the failings of their systems. We also extend and improve the recently introduced Memory Networks model, and show it is able to solve some, but not all, of the tasks.

global optimality in tensor factorization, deep learning, and beyond

Benjamin D. Haeffele, Rene Vidal

Techniques involving factorization are found in a wide range of applications and have enjoyed significant empirical success in many fields. However, common to a vast majority of these problems is the significant disadvantage that the associated optimization problems are typically non-convex due to a multilinear form or other convexity destroying transformation. Here we build on ideas from convex relaxations of matrix factorizations and present a very general framework which allows for the analysis of a wide range of non-convex factorization problems - including matrix factorization, tensor factorization, and deep neural network training formulations. We derive sufficient conditions to guarantee that a local minimum of the non-convex optimization problem is a global minimum and show that if the size of the factorized variables is large enough then from any initialization it is possible to find a global minimizer using a purely local descent algorithm. Our framework also provides a partial theoretical justification for the increasingly common use of Rectified Linear Units (ReLUs) in deep neural networks and offers guidance on deep network architectures and regularization strategies to facilitate efficient optimization.

distributional smoothing with virtual adversarial training

Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, Ken Nakae, Shin Ishii

Smoothness regularization is a popular method to decrease generalization error. We propose a novel regularization technique that rewards local distributional smoothness (LDS), a KL-distance based measure of the model's robustness against perturbation. The LDS is defined in terms of the direction to which the model distribution is most sensitive in the input space. We call the training with LDS regularization the virtual adversarial training (VAT). Our technique resembles the adversarial training, but distinguishes itself in that it determines the adversarial direction from the model distribution alone, and does not use the label information. The technique is therefore applicable even to semi-supervised learning. When we applied our technique to the classification task of the permutation invariant MNIST dataset, it not only eclipsed all the models that are not dependent on generative models and pre-training, but also performed well even in comparison to the state of the art method that uses a highly advanced generative model.

incentivizing exploration in reinforcement learning with deep predictive models

Bradly C. Stadie, Sergey Levine, Pieter Abbeel

Achieving efficient and scalable exploration in complex domains poses a major challenge in reinforcement learning. While Bayesian and PAC-MDP approaches to the exploration problem offer strong formal guarantees, they are often impractical in higher dimensions due to their reliance on enumerating the state-action space. Hence, exploration in complex domains is often performed with simple epsilon-greedy methods. To achieve more efficient exploration, we develop a method for assigning exploration bonuses based on a concurrently learned model of the system dynamics. By parameterizing our learned model with a neural network, we are able to develop a scalable and efficient approach to exploration bonuses that can be applied to tasks with complex, high-dimensional state spaces. We demonstrate our approach on the task of learning to play Atari games from raw pixel inputs. In this domain, our method offers substantial improvements in exploration efficiency when compared with the standard epsilon greedy approach. As a result of our improved exploration strategy, we are able to achieve state-of-the-art results on several games that pose a major challenge for prior methods.
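The bonus scheme can be summarized compactly: train a dynamics model alongside the agent and add to the reward a bonus proportional to that model's prediction error on each observed transition, so the agent is drawn toward states its model does not yet explain. A heavily simplified sketch; the linear DynamicsModel here is a hypothetical stand-in for the paper's learned neural model:

```python
import numpy as np

class DynamicsModel:
    """Stand-in for a learned dynamics model sigma(s, a) -> s'."""
    def __init__(self, dim):
        self.W = np.zeros((dim, dim))
    def predict(self, s, a):
        return self.W @ s       # (a real model would also condition on a)
    def update(self, s, a, s_next, lr=0.01):
        err = s_next - self.predict(s, a)
        self.W += lr * np.outer(err, s)

def shaped_reward(r, s, a, s_next, model, beta=1.0):
    bonus = np.sum((s_next - model.predict(s, a)) ** 2)  # model "surprise"
    model.update(s, a, s_next)
    return r + beta * bonus     # novel transitions earn extra reward

model = DynamicsModel(dim=4)
s, s_next = np.random.randn(4), np.random.randn(4)
print(shaped_reward(0.0, s, a=0, s_next=s_next, model=model))
```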

traversing knowledge graphs in vector space

Kelvin Guu, John Miller, Percy Liang

Path queries on a knowledge graph can be used to answer compositional questions such as "What languages are spoken by people living in Lisbon?". However, knowledge graphs often have missing facts (edges) which disrupts path queries. Recent models for knowledge base completion impute missing facts by embedding knowledge graphs in vector spaces. We show that these models can be recursively applied to answer path queries, but that they suffer from cascading errors. This motivates a new "compositional" training objective, which dramatically improves all models' ability to answer path queries, in some cases more than doubling accuracy. On a standard knowledge base completion task, we also demonstrate that compositional training acts as a novel form of structural regularization, reliably improving performance across all base models (reducing errors by up to 43%) and achieving new state-of-the-art results.
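With TransE-style embeddings (one of the base models considered), answering a path query is vector addition: start at the source entity, add each relation vector along the path, and rank candidate entities by distance to the result. A numpy sketch with toy, randomly initialized embeddings (entity and relation names are hypothetical):

```python
import numpy as np

np.random.seed(0)
entities = {e: np.random.randn(10) for e in ["lisbon", "portuguese", "maria"]}
relations = {r: np.random.randn(10) for r in ["lives_in", "language_spoken"]}

def answer_path_query(source, path):
    """TransE composition: x_answer ~ x_source + sum of relation vectors."""
    x = entities[source].copy()
    for r in path:
        x = x + relations[r]         # traverse one edge in vector space
    # rank all entities by (negative) distance to the composed vector
    scores = {e: -np.linalg.norm(v - x) for e, v in entities.items()}
    return max(scores, key=scores.get)

# "What language is spoken where maria lives?" as a two-edge path query:
print(answer_path_query("maria", ["lives_in", "language_spoken"]))
# Compositional training optimizes the embeddings on such multi-edge paths
# directly, which is what curbs the cascading errors.
```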

grid long short-term memory

Nal Kalchbrenner, Ivo Danihelka, Alex Graves

This paper introduces Grid Long Short-Term Memory, a network of LSTM cells arranged in a multidimensional grid that can be applied to vectors, sequences or higher dimensional data such as images. The network differs from existing deep LSTM architectures in that the cells are connected between network layers as well as along the spatiotemporal dimensions of the data. It therefore provides a unified way of using LSTM for both deep and sequential computation. We apply the model to algorithmic tasks such as integer addition and determining the parity of random binary vectors. It is able to solve these problems for 15-digit integers and 250-bit vectors respectively. We then give results for three empirical tasks. We find that 2D Grid LSTM achieves 1.47 bits per character on the Wikipedia character prediction benchmark, which is state-of-the-art among neural approaches. We also observe that a two-dimensional translation model based on Grid LSTM outperforms a phrase-based reference system on a Chinese-to-English translation task, and that 3D Grid LSTM yields a near state-of-the-art error rate of 0.32% on MNIST.

scalable bayesian optimization using deep neural networks

Jasper Snoek, Oren Rippel, Kevin Swersky, Ryan Kiros, Nadathur Satish, Narayanan Sundaram, Md. Mostofa Ali Patwary, Prabhat, Ryan P. Adams

Bayesian optimization is an effective methodology for the global optimization of functions with expensive evaluations. It relies on querying a distribution over functions defined by a relatively cheap surrogate model. An accurate model for this distribution over functions is critical to the effectiveness of the approach, and is typically fit using Gaussian processes (GPs). However, since GPs scale cubically with the number of observations, it has been challenging to handle objectives whose optimization requires many evaluations, and as such, massively parallelizing the optimization. In this work, we explore the use of neural networks as an alternative to GPs to model distributions over functions. We show that performing adaptive basis function regression with a neural network as the parametric form performs competitively with state-of-the-art GP-based approaches, but scales linearly with the number of data rather than cubically. This allows us to achieve a previously intractable degree of parallelism, which we apply to large scale hyperparameter optimization, rapidly finding competitive models on benchmark object recognition tasks using convolutional networks, and image caption generation using neural language models.

skip-thought vectors

Ryan Kiros, Yukun Zhu, Ruslan Salakhutdinov, Richard S. Zemel, Antonio Torralba, Raquel Urtasun, Sanja Fidler

We describe an approach for unsupervised learning of a generic, distributed sentence encoder. Using the continuity of text from books, we train an encoder-decoder model that tries to reconstruct the surrounding sentences of an encoded passage. Sentences that share semantic and syntactic properties are thus mapped to similar vector representations. We next introduce a simple vocabulary expansion method to encode words that were not seen as part of training, allowing us to expand our vocabulary to a million words. After training our model, we extract and evaluate our vectors with linear models on 8 tasks: semantic relatedness, paraphrase detection, image-sentence ranking, question-type classification and 4 benchmark sentiment and subjectivity datasets. The end result is an off-the-shelf encoder that can produce highly generic sentence representations that are robust and perform well in practice. We will make our encoder publicly available.

end-to-end training of deep visuomotor policies

Sergey Levine, Chelsea Finn, Trevor Darrell, Pieter Abbeel

Policy search methods based on reinforcement learning and optimal control can allow robots to automatically learn a wide range of tasks. However, practical applications of policy search tend to require the policy to be supported by hand-engineered components for perception, state estimation, and low-level control. We propose a method for learning policies that map raw, low-level observations, consisting of joint angles and camera images, directly to the torques at the robot’s joints. The policies are represented as deep convolutional neural networks (CNNs) with 92,000 parameters. The high dimensionality of such policies poses a tremendous challenge for policy search. To address this challenge, we develop a sensorimotor guided policy search method that can handle high-dimensional policies and partially observed tasks. We use BADMM to decompose policy search into an optimal control phase and supervised learning phase, allowing CNN policies to be trained with standard supervised learning techniques. This method can learn a number of manipulation tasks that require close coordination between vision and control, including inserting a block into a shape sorting cube, screwing on a bottle cap, fitting the claw of a toy hammer under a nail with various grasps, and placing a coat hanger on a clothes rack.

optimizing neural networks with kronecker-factored approximate curvature

James Martens, Roger Grosse

We propose an efficient method for approximating natural gradient descent in neural networks which we call Kronecker-factored Approximate Curvature (K-FAC). K-FAC is based on an efficiently invertible approximation of a neural network's Fisher information matrix which is neither diagonal nor low-rank, and in some cases is completely non-sparse. It is derived by approximating various large blocks of the Fisher (corresponding to entire layers) as being the Kronecker product of two much smaller matrices. While only several times more expensive to compute than the plain stochastic gradient, the updates produced by K-FAC make much more progress optimizing the objective, which results in an algorithm that can be much faster than stochastic gradient descent with momentum in practice. And unlike some previously proposed approximate natural-gradient/Newton methods which use high-quality non-diagonal curvature matrices (such as Hessian-free optimization), K-FAC works very well in highly stochastic optimization regimes. This is because the cost of storing and inverting K-FAC's approximation to the curvature matrix does not depend on the amount of data used to estimate it, which is a feature typically associated only with diagonal or low-rank approximations to the curvature matrix.
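The payoff of the Kronecker factorization is the identity (A ⊗ G)⁻¹ = A⁻¹ ⊗ G⁻¹: inverting one layer's large Fisher block reduces to inverting two small matrices. A numpy sketch of why that is cheap (the sizes and random statistics are illustrative):

```python
import numpy as np

n_in, n_out = 20, 10   # a small layer whose Fisher block is 200x200
A = np.cov(np.random.randn(n_in, 1000))    # input-activation second moments
G = np.cov(np.random.randn(n_out, 1000))   # backpropagated-gradient moments

# Naive: build and invert the full (n_in*n_out)^2 block, O((n_in*n_out)^3).
F_block = np.kron(A, G)

# K-FAC: invert the two small factors instead, O(n_in^3 + n_out^3).
F_inv = np.kron(np.linalg.inv(A), np.linalg.inv(G))

print(np.allclose(F_inv, np.linalg.inv(F_block), atol=1e-6))   # True
```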

a probabilistic theory of deep learning

Ankit B. Patel, Tan Nguyen, Richard G. Baraniuk

A grand challenge in machine learning is the development of computational algorithms that match or outperform humans in perceptual inference tasks that are complicated by nuisance variation. For instance, visual object recognition involves the unknown position, orientation, and scale of an object, while speech recognition involves unknown voice pronunciation, pitch, and speed. Recently, a new breed of deep learning algorithms has emerged for high-nuisance inference tasks that routinely yield pattern recognition systems with near- or super-human capabilities. But a fundamental question remains: why do they work? Intuitions abound, but a coherent framework for understanding, analyzing, and synthesizing deep learning architectures has remained elusive. We answer this question by developing a new probabilistic framework for deep learning based on the Deep Rendering Model: a generative probabilistic model that explicitly captures latent nuisance variation. By relaxing the generative model to a discriminative one, we can recover two of the current leading deep learning systems, deep convolutional neural networks and random decision forests, providing insights into their successes and shortcomings, as well as a principled route to their improvement.

deep learning with limited numerical precision

Suyog Gupta, Ankur Agrawal, Kailash Gopalakrishnan, Pritish Narayanan

Training of large-scale deep neural networks is often constrained by the available computational resources. We study the effect of limited precision data representation and computation on neural network training. Within the context of low-precision fixed-point computations, we observe the rounding scheme to play a crucial role in determining the network's behavior during training. Our results show that deep networks can be trained using only 16-bit wide fixed-point number representation when using stochastic rounding, and incur little to no degradation in the classification accuracy. We also demonstrate an energy-efficient hardware accelerator that implements low-precision fixed-point arithmetic with stochastic rounding.
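Stochastic rounding is the key mechanism: a value is rounded up with probability equal to its fractional part, so rounding is unbiased in expectation (E[round(x)] = x), which is what lets tiny gradient updates survive 16-bit fixed point. A numpy sketch for a fixed-point grid with fractional length fl:

```python
import numpy as np

def stochastic_round(x, fl=8):
    """Round x to a fixed-point grid with spacing 2**-fl, stochastically."""
    eps = 2.0 ** -fl                  # resolution of the fixed-point format
    scaled = x / eps
    floor = np.floor(scaled)
    # round up with probability equal to the fractional part
    up = np.random.rand(*np.shape(scaled)) < (scaled - floor)
    return (floor + up) * eps

x = np.full(100000, 0.001)            # far below the grid spacing 2**-8
print(stochastic_round(x).mean())     # ~0.001: unbiased in expectation
print((np.round(x / 2**-8) * 2**-8).mean())   # 0.0: nearest rounding kills it
```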

deep unsupervised learning using nonequilibrium thermodynamics

Jascha Sohl-Dickstein, Eric A. Weiss, Niru Maheswaranathan, Surya Ganguli

A central problem in machine learning involves modeling complex data-sets using highly flexible families of probability distributions in which learning, sampling, inference, and evaluation are still analytically or computationally tractable. Here, we develop an approach that simultaneously achieves both flexibility and tractability. The essential idea, inspired by non-equilibrium statistical physics, is to systematically and slowly destroy structure in a data distribution through an iterative forward diffusion process. We then learn a reverse diffusion process that restores structure in data, yielding a highly flexible and tractable generative model of the data. This approach allows us to rapidly learn, sample from, and evaluate probabilities in deep generative models with thousands of layers or time steps, as well as to compute conditional and posterior probabilities under the learned model. We additionally release an open source reference implementation of the algorithm.
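The forward process is the simple half: each step adds a small amount of Gaussian noise, q(x_t | x_{t-1}) = N(√(1−β_t)·x_{t-1}, β_t·I), so after many steps the data dissolves into an analytically known Gaussian; the model is then trained to invert each small step. A numpy sketch of the forward diffusion (the toy data and schedule are illustrative):

```python
import numpy as np

def forward_diffusion(x0, betas):
    """Gradually destroy structure: x_t = sqrt(1-b_t) x_{t-1} + sqrt(b_t) noise."""
    xs = [x0]
    for b in betas:
        x = np.sqrt(1.0 - b) * xs[-1] + np.sqrt(b) * np.random.randn(*x0.shape)
        xs.append(x)
    return xs

x0 = np.random.choice([-1.0, 1.0], size=1000)   # a highly structured "dataset"
betas = np.linspace(1e-4, 0.05, 1000)           # a slow noise schedule
xs = forward_diffusion(x0, betas)
print(np.corrcoef(x0, xs[-1])[0, 1])  # ~0: endpoint is near-isotropic Gaussian
# The generative model learns to reverse each of these small Gaussian steps.
```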

  • Added 2 years ago by Elman Mansimov

  • 1 recommender

  • 1 Comment

learning word meta-embeddings by using ensembles of embedding sets

Click to view discussion

Wenpeng Yin, Hinrich Schuetze

Word embeddings -- distributed representations for words -- in deep learning are beneficial for many tasks in Natural Language Processing (NLP). However, different embedding sets vary greatly in quality and characteristics of the captured semantics. Instead of relying on a more advanced algorithm for embedding learning, this paper proposes an ensemble approach of combining different public embedding sets with the aim of learning meta-embeddings. Experiments on word similarity and analogy tasks and on part-of-speech (POS) tagging show better performance of meta-embeddings compared to individual embedding sets. One advantage of meta-embeddings is that they have increased coverage of the vocabulary. We will release our meta-embeddings publicly.
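
A minimal sketch of the simplest ensemble strategy the paper evaluates, plain concatenation of embedding sets (the dictionaries and toy vectors below are made up for illustration):

```python
import numpy as np

# Two hypothetical public embedding sets with different dimensionalities,
# as is typical in practice.
glove = {"cat": np.array([0.1, 0.3]), "dog": np.array([0.2, 0.1])}
w2v = {"cat": np.array([0.5, -0.2, 0.4]), "dog": np.array([0.3, 0.0, 0.1])}

def concat_meta(word, embedding_sets):
    # Concatenation meta-embedding: stack the word's vectors from every set.
    # (Words missing from a set would need a learned or zero fill-in;
    # this sketch assumes full coverage.)
    return np.concatenate([s[word] for s in embedding_sets])

print(concat_meta("cat", [glove, w2v]))  # a 5-dimensional meta-embedding
```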

deep generative image models using a laplacian pyramid of adversarial networks

Click to view discussion

Emily Denton, Soumith Chintala, Arthur Szlam, Rob Fergus

In this paper we introduce a generative parametric model capable of producing high quality samples of natural images. Our approach uses a cascade of convolutional networks within a Laplacian pyramid framework to generate images in a coarse-to-fine fashion. At each level of the pyramid, a separate generative convnet model is trained using the Generative Adversarial Nets (GAN) approach (Goodfellow et al.). Samples drawn from our model are of significantly higher quality than those from alternative approaches. In a quantitative assessment by human evaluators, our CIFAR10 samples were mistaken for real images around 40% of the time, compared to 10% for samples drawn from a GAN baseline model. We also show samples from models trained on the higher resolution images of the LSUN scene dataset.
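
Since the Laplacian pyramid is the scaffolding of the whole approach, a minimal numpy sketch of it may help (ours, with crude average-pool/nearest-neighbour resampling standing in for proper blur kernels): each level stores a high-frequency residual at one scale, and in the paper a separate generator fills in each such residual, coarse to fine.

```python
import numpy as np

def down(img):
    # 2x2 average pooling (a crude stand-in for blur-and-subsample)
    return img.reshape(img.shape[0] // 2, 2, img.shape[1] // 2, 2).mean(axis=(1, 3))

def up(img):
    # nearest-neighbour upsampling
    return img.repeat(2, axis=0).repeat(2, axis=1)

def laplacian_pyramid(img, levels):
    pyramid = []
    for _ in range(levels):
        low = down(img)
        pyramid.append(img - up(low))   # high-frequency residual at this scale
        img = low
    pyramid.append(img)                 # coarsest level
    return pyramid

def reconstruct(pyramid):
    img = pyramid[-1]
    for residual in reversed(pyramid[:-1]):
        img = up(img) + residual
    return img

img = np.random.default_rng(0).random((32, 32))
pyr = laplacian_pyramid(img, 3)
print(np.allclose(reconstruct(pyr), img))  # True: the pyramid is invertible
```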

large-scale simple question answering with memory networks

Click to view discussion

Antoine Bordes, Nicolas Usunier, Sumit Chopra, Jason Weston

Training large-scale question answering systems is complicated because training sources usually cover a small portion of the range of possible questions. This paper studies the impact of multitask and transfer learning for simple question answering, a setting in which the reasoning required to answer is quite easy, as long as one can retrieve the correct evidence given a question, which can be difficult in large-scale conditions. To this end, we introduce a new dataset of 100k questions that we use in conjunction with existing benchmarks. We conduct our study within the framework of Memory Networks (Weston et al., 2015) because this perspective allows us to eventually scale up to more complex reasoning, and show that Memory Networks can be successfully trained to achieve excellent performance.

teaching machines to read and comprehend

Click to view discussion

Karl Moritz Hermann, Tomáš Kočiský, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, Phil Blunsom

Teaching machines to read natural language documents remains an elusive challenge. Machine reading systems can be tested on their ability to answer questions posed about the contents of documents they have seen, but until now large-scale training and test datasets have been missing for this type of evaluation. In this work we define a new methodology that resolves this bottleneck and provides large-scale supervised reading comprehension data. This allows us to develop a class of attention-based deep neural networks that learn to read real documents and answer complex questions with minimal prior knowledge of language structure.

highway networks

Click to view discussion

Rupesh Kumar Srivastava, Klaus Greff, Jürgen Schmidhuber

Theoretical and empirical evidence indicates that the depth of neural networks is crucial for their success. However, training becomes more difficult as depth increases, and training of very deep networks remains an open problem. Here we introduce a new architecture designed to overcome this. Our so-called highway networks allow unimpeded information flow across many layers on information highways. They are inspired by Long Short-Term Memory recurrent networks and use adaptive gating units to regulate the information flow. Even with hundreds of layers, highway networks can be trained directly through simple gradient descent. This enables the study of extremely deep and efficient architectures.
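
A minimal numpy sketch of one highway layer (ours; the weight scales and the negative gate bias are illustrative choices): the transform gate T decides, per unit, how much of the input to transform versus carry through unchanged.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def highway_layer(x, W_h, b_h, W_t, b_t):
    # y = H(x) * T(x) + x * (1 - T(x)): the transform gate T decides,
    # per unit, how much to transform versus carry x straight through.
    h = np.tanh(W_h @ x + b_h)      # plain nonlinear transform H(x)
    t = sigmoid(W_t @ x + b_t)      # transform gate T(x)
    return h * t + x * (1.0 - t)

rng = np.random.default_rng(0)
d = 8
x = rng.standard_normal(d)
W_h, W_t = 0.1 * rng.standard_normal((2, d, d))
# A negative gate bias (as the paper suggests for initialization) starts
# the layer near the identity, so even very deep stacks pass gradients.
y = highway_layer(x, W_h, np.zeros(d), W_t, -3.0 * np.ones(d))
print(np.max(np.abs(y - x)))  # small: the layer initially mostly carries x
```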

semi-supervised learning with ladder network

Click to view discussion

Antti Rasmus, Harri Valpola, Mikko Honkala, Mathias Berglund, Tapani Raiko

We combine supervised learning with unsupervised learning in deep neural networks. The proposed model is trained to simultaneously minimize the sum of supervised and unsupervised cost functions by backpropagation, avoiding the need for layer-wise pretraining. Our work builds on the Ladder network proposed by Valpola (2015), which we extend by combining the model with supervision. We show that the resulting model reaches state-of-the-art performance in various tasks: MNIST and CIFAR-10 classification in a semi-supervised setting, and permutation-invariant MNIST in both semi-supervised and fully labeled settings.

listen, attend and spell

Click to view discussion

William Chan, Navdeep Jaitly, Quoc V. Le, Oriol Vinyals

We present Listen, Attend and Spell (LAS), a neural network that learns to transcribe speech utterances to characters. Unlike traditional DNN-HMM models, this model learns all the components of a speech recognizer jointly. Our system has two components: a listener and a speller. The listener is a pyramidal recurrent network encoder that accepts filter bank spectra as inputs. The speller is an attention-based recurrent network decoder that emits characters as outputs. The network produces character sequences without making any independence assumptions between the characters. This is the key improvement of LAS over previous end-to-end CTC models. On the Google Voice Search task, LAS achieves a word error rate (WER) of 14.2% without a dictionary or a language model, and 11.2% with language model rescoring over the top 32 beams. In comparison, the state-of-the-art CLDNN-HMM model achieves a WER of 10.9%.
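
The "pyramidal" part of the listener is simple to state: each stacked layer halves the time resolution by concatenating adjacent frames before the next recurrent layer, shrinking the sequence the attention mechanism must scan. A minimal numpy sketch of that reduction step (ours, omitting the recurrent layers themselves):

```python
import numpy as np

def pyramid_reduce(h):
    # Concatenate each pair of consecutive timesteps, halving the sequence
    # length and doubling the feature size, as in a pyramidal listener.
    T, d = h.shape
    return h[: T - T % 2].reshape(T // 2, 2 * d)

h = np.random.default_rng(0).standard_normal((100, 40))  # 100 input frames
for _ in range(3):          # three pyramid layers: 100 -> 50 -> 25 -> 12
    h = pyramid_reduce(h)
print(h.shape)              # (12, 320): ~8x fewer steps for attention
```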

  • Added 2 years ago by Samy Bengio

  • 1 recommender

  • 2 Comments

visualizing and understanding recurrent networks

Click to view discussion

Andrej Karpathy, Justin Johnson, Fei-Fei Li

Recurrent Neural Networks (RNNs), and specifically a variant with Long Short-Term Memory (LSTM), are enjoying renewed interest as a result of successful applications in a wide range of machine learning problems that involve sequential data. However, while LSTMs provide exceptional results in practice, the source of their performance and their limitations remain rather poorly understood. Using character-level language models as an interpretable testbed, we aim to bridge this gap by providing a comprehensive analysis of their representations, predictions and error types. In particular, our experiments reveal the existence of interpretable cells that keep track of long-range dependencies such as line lengths, quotes and brackets. Moreover, an extensive analysis with finite-horizon n-gram models suggests that these dependencies are actively discovered and utilized by the networks. Finally, we provide detailed error analysis that suggests areas for further study.

  • Added 2 years ago by Edward Grefenstette

  • 0 Comments

lstm: a search space odyssey

Click to view discussion

Klaus Greff, Rupesh Kumar Srivastava, Jan Koutník, Bas R. Steunebrink, Jürgen Schmidhuber

Several variants of the Long Short-Term Memory (LSTM) architecture for recurrent neural networks have been proposed since its inception in 1995. In recent years, these networks have become the state-of-the-art models for a variety of machine learning problems. This has led to a renewed interest in understanding the role and utility of various computational components of typical LSTM variants. In this paper, we present the first large-scale analysis of eight LSTM variants on three representative tasks: speech recognition, handwriting recognition, and polyphonic music modeling. The hyperparameters of all LSTM variants for each task were optimized separately using random search and their importance was assessed using the powerful fANOVA framework. In total, we summarize the results of 5400 experimental runs (about 15 years of CPU time), which makes our study the largest of its kind on LSTM networks. Our results show that none of the variants can improve upon the standard LSTM architecture significantly, and demonstrate the forget gate and the output activation function to be its most critical components. We further observe that the studied hyperparameters are virtually independent and derive guidelines for their efficient adjustment.

  • Added 2 years ago by Edward Grefenstette

  • 0 Comments

transition-based dependency parsing with stack long short-term memory

Click to view discussion

Chris Dyer, Miguel Ballesteros, Wang Ling, Austin Matthews, Noah A. Smith

We propose a technique for learning representations of parser states in transition-based dependency parsers. Our primary innovation is a new control structure for sequence-to-sequence neural networks---the stack LSTM. Like the conventional stack data structures used in transition-based parsing, elements can be pushed to or popped from the top of the stack in constant time, but, in addition, an LSTM maintains a continuous space embedding of the stack contents. This lets us formulate an efficient parsing model that captures three facets of a parser's state: (i) unbounded look-ahead into the buffer of incoming words, (ii) the complete history of actions taken by the parser, and (iii) the complete contents of the stack of partially built tree fragments, including their internal structures. Standard backpropagation techniques are used for training and yield state-of-the-art parsing performance.
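
A minimal numpy sketch of the data structure (ours: random untrained weights, a single stacked gate matrix, and no parser logic) showing the essential operations: push runs one LSTM step from the current top, pop discards the top state in constant time, and the top hidden state summarizes the whole stack contents.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class StackLSTM:
    """Toy stack LSTM: a list of (h, c) pairs mirrors the parser's stack."""

    def __init__(self, d_in, d_h, rng=np.random.default_rng(0)):
        self.W = 0.1 * rng.standard_normal((4 * d_h, d_in + d_h))
        self.b = np.zeros(4 * d_h)
        self.states = [(np.zeros(d_h), np.zeros(d_h))]  # empty-stack state

    def push(self, x):
        h_prev, c_prev = self.states[-1]
        z = self.W @ np.concatenate([x, h_prev]) + self.b
        i, f, o, g = np.split(z, 4)                     # the four LSTM gates
        c = sigmoid(f) * c_prev + sigmoid(i) * np.tanh(g)
        self.states.append((sigmoid(o) * np.tanh(c), c))

    def pop(self):
        # Constant-time pop: the previous state is still on the list.
        return self.states.pop()

    def summary(self):
        return self.states[-1][0]   # embedding of the current stack contents

s = StackLSTM(d_in=5, d_h=8)
s.push(np.ones(5))
s.push(np.zeros(5))
s.pop()
print(s.summary().shape)            # (8,)
```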

  • Added 2 years ago by Edward Grefenstette

  • 0 Comments

stochastic backpropagation and approximate inference in deep generative models

Click to view discussion

Danilo Jimenez Rezende, Shakir Mohamed, Daan Wierstra

We marry ideas from deep neural networks and approximate Bayesian inference to derive a generalised class of deep, directed generative models, endowed with a new algorithm for scalable inference and learning. Our algorithm introduces a recognition model that represents approximate posterior distributions and acts as a stochastic encoder of the data. We develop stochastic back-propagation -- rules for back-propagation through stochastic variables -- and use this to develop an algorithm that allows for joint optimisation of the parameters of both the generative and recognition models. We demonstrate on several real-world data sets that the model generates realistic samples, provides accurate imputations of missing data and is a useful tool for high-dimensional data visualisation.
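
The heart of back-propagation through stochastic variables, in the Gaussian case, is to rewrite the sample as a deterministic function of the parameters plus exogenous noise and differentiate straight through it. A small numpy check of the resulting pathwise gradient estimator against closed-form values (our toy objective, not the paper's models):

```python
import numpy as np

rng = np.random.default_rng(0)

# Reparameterize a Gaussian latent: z = mu + sigma * eps, eps ~ N(0, 1),
# so gradients of E[f(z)] w.r.t. (mu, sigma) flow through z itself.
mu, sigma = 0.5, 1.2
eps = rng.standard_normal(100_000)
z = mu + sigma * eps

# Toy objective f(z) = z^2, so f'(z) = 2z.
grad_mu = np.mean(2 * z * 1.0)      # chain rule: dz/dmu = 1
grad_sigma = np.mean(2 * z * eps)   # chain rule: dz/dsigma = eps

# Analytic check: E[z^2] = mu^2 + sigma^2, so the true gradients are
# 2*mu and 2*sigma respectively.
print(grad_mu, 2 * mu)              # ~1.0 vs 1.0
print(grad_sigma, 2 * sigma)        # ~2.4 vs 2.4
```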

natural neural networks

Click to view discussion

Guillaume Desjardins, Karen Simonyan, Razvan Pascanu, Koray Kavukcuoglu

We introduce Natural Neural Networks, a novel family of algorithms that speed up convergence by adapting their internal representation during training to improve conditioning of the Fisher matrix. In particular, we show a specific example that employs a simple and efficient reparametrization of the neural network weights by implicitly whitening the representation obtained at each layer, while preserving the feed-forward computation of the network. Such networks can be trained efficiently via the proposed Projected Natural Gradient Descent algorithm (PRONG), which amortizes the cost of these reparametrizations over many parameter updates and is closely related to the Mirror Descent online learning algorithm. We highlight the benefits of our method on both unsupervised and supervised learning tasks, and showcase its scalability by training on the large-scale ImageNet Challenge dataset.
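
A numpy sketch of the whitening reparametrization at one layer (our simplification: a one-off ZCA whitening, ignoring the amortization over parameter updates in the actual algorithm): fold the inverse of the whitening transform into the weights so the function computed is unchanged, while the layer now sees decorrelated, unit-variance inputs, which conditions the Fisher matrix better.

```python
import numpy as np

rng = np.random.default_rng(0)

X = rng.standard_normal((1000, 4)) @ rng.standard_normal((4, 4))  # correlated inputs
W = rng.standard_normal((3, 4))                                   # a layer's weights
b = np.zeros(3)

# Build a ZCA whitening matrix U so that cov((X - mu) U^T) = I.
mu = X.mean(axis=0)
cov = np.cov(X - mu, rowvar=False)
d, E = np.linalg.eigh(cov)
U = E @ np.diag(d ** -0.5) @ E.T

# Reparametrize: the whitened layer (W', b') computes the same function.
X_white = (X - mu) @ U.T
W_new = W @ np.linalg.inv(U)
b_new = b + W @ mu

print(np.allclose(X @ W.T + b, X_white @ W_new.T + b_new))  # True
```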

  • Added 2 years ago by Marc'Aurelio Ranzato

  • 3 Comments

spatial transformer networks

Click to view discussion

Max Jaderberg, Karen Simonyan, Andrew Zisserman, Koray Kavukcuoglu

Convolutional Neural Networks define an exceptionally powerful class of models, but are still limited by their inability to be spatially invariant to the input data in a computationally and parameter-efficient manner. In this work we introduce a new learnable module, the Spatial Transformer, which explicitly allows the spatial manipulation of data within the network. This differentiable module can be inserted into existing convolutional architectures, giving neural networks the ability to actively spatially transform feature maps, conditioned on the feature map itself, without any extra training supervision or modification to the optimisation process. We show that the use of spatial transformers results in models which learn invariance to translation, scale, rotation and more generic warping, resulting in state-of-the-art performance on several benchmarks, and for a number of classes of transformations.
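
For intuition, here is a minimal numpy sketch of the sampling machinery (grid generator plus bilinear sampler) for a single-channel image under a 2x3 affine theta; the point of the module is that this whole computation is differentiable in theta and in the image, so it can sit inside a network trained end to end. This is our illustration, not the authors' code.

```python
import numpy as np

def affine_grid_sample(img, theta):
    # Grid generator + bilinear sampler for one channel. theta (2x3) maps
    # output coordinates in [-1, 1]^2 to input coordinates; borders clamp.
    H, W = img.shape
    ys, xs = np.meshgrid(np.linspace(-1, 1, H), np.linspace(-1, 1, W),
                         indexing="ij")
    grid = np.stack([xs.ravel(), ys.ravel(), np.ones(H * W)])  # homogeneous
    sx, sy = theta @ grid                                      # source coords
    px = (sx + 1) * (W - 1) / 2                                # back to pixels
    py = (sy + 1) * (H - 1) / 2
    x0 = np.clip(np.floor(px).astype(int), 0, W - 2)
    y0 = np.clip(np.floor(py).astype(int), 0, H - 2)
    wx, wy = px - x0, py - y0
    out = (img[y0, x0] * (1 - wx) * (1 - wy)
           + img[y0, x0 + 1] * wx * (1 - wy)
           + img[y0 + 1, x0] * (1 - wx) * wy
           + img[y0 + 1, x0 + 1] * wx * wy)
    return out.reshape(H, W)

img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0
identity = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
print(np.allclose(affine_grid_sample(img, identity), img))  # identity warp
```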

  • Added 2 years ago by Marc'Aurelio Ranzato

  • 1 Comment

batch normalization: accelerating deep network training by reducing internal covariate shift

Click to view discussion

Sergey Ioffe, Christian Szegedy

Training Deep Neural Networks is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates and careful parameter initialization, and makes it notoriously hard to train models with saturating nonlinearities. We refer to this phenomenon as internal covariate shift, and address the problem by normalizing layer inputs. Our method draws its strength from making normalization a part of the model architecture and performing the normalization for each training mini-batch. Batch Normalization allows us to use much higher learning rates and be less careful about initialization. It also acts as a regularizer, in some cases eliminating the need for Dropout. Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin. Using an ensemble of batch-normalized networks, we improve upon the best published result on ImageNet classification: reaching 4.9% top-5 validation error (and 4.8% test error), exceeding the accuracy of human raters.
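
The mechanism is compact enough to sketch directly. Below is a minimal numpy version of the training-time forward pass for one mini-batch (ours; the inference-time running statistics and the backward pass are omitted): each feature is normalized to zero mean and unit variance over the batch, then rescaled and shifted by learned parameters so the layer can still represent the identity.

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    # x: (batch, features). Normalize each feature over the mini-batch,
    # then apply the learned scale (gamma) and shift (beta).
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

rng = np.random.default_rng(0)
x = rng.normal(5.0, 3.0, size=(64, 10))       # badly scaled activations
y = batch_norm(x, gamma=np.ones(10), beta=np.zeros(10))
print(y.mean(axis=0).round(6))                # ~0 per feature
print(y.std(axis=0).round(2))                 # ~1 per feature
```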

  • Added 2 years ago by Yoshua Bengio

  • 3 recommenders

  • 4 Comments

Organizers for Deep Learning Symposium @ NIPS'2015

  • Yoshua Bengio

  • Marc'Aurelio Ranzato

  • Honglak Lee

  • Max Welling

  • Andrew Ng

Committee for Deep Learning Symposium @ NIPS'2015

  • Aaron Courville

  • Alan Yuille

  • Andrew Zisserman

  • Antoine Bordes

  • Brian Kingsbury

  • Chris Williams

  • Francis Bach

  • Hugo Larochelle

  • Ian Goodfellow

  • Ilya Sutskever

  • Jason Weston

  • Kyunghyun Cho

  • Nando de Freitas

  • Phil Blunsom

  • Rich Sutton

  • Richard Zemel

  • Roland Memisevic

  • Ronan Collobert

  • Ruslan Salakhutdinov

  • Ryan Adams

  • Samy Bengio

  • Tomas Mikolov

  • Trevor Darrell