2015

Deep learning

  • [ADAM] Adam: A Method for Stochastic Optimization. [[pdf]](docs/2015/ADAM- A METHOD FOR STOCHASTIC OPTIMIZATION(2015).pdf) [url] ⭐
  • A Diversity-Promoting Objective Function for Neural Conversation Models. [[pdf](docs/2015/A Diversity-Promoting Objective Function for Neural Conversation Models.pdf)] [url]
  • A Neural Conversational Model. [[pdf]](docs/2015/A Neural Conversational Model.pdf) [url]
  • A Neural Network Approach to Context-Sensitive Generation of Conversational Responses. [[pdf]](docs/2015/A Neural Network Approach to Context-Sensitive Generation of Conversational Responses.pdf) [url]
  • A Roadmap towards Machine Intelligence. [url]
  • A Survey: Time Travel in Deep Learning Space: An Introduction to Deep Learning Models and How Deep Learning Models Evolved from the Initial Ideas. [[pdf]](docs/2015/A Survey- Time Travel in Deep Learning Space- An Introduction to Deep Learning Models and How Deep Learning Models Evolved from the Initial Ideas.pdf) [url]
  • An Empirical Exploration of Recurrent Network Architectures. [[pdf]](docs/2015/An Empirical Exploration of Recurrent Network Architectures(2015).pdf) [url] ⭐
  • [Batch Normalization] Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. [arxiv] ⭐
  • Building End-To-End Dialogue Systems Using Generative Hierarchical Neural Network Models. [url]
  • Correlational Neural Networks. [url]
  • Deconstructing the Ladder Network Architecture. [url]
  • Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding. [arxiv]
  • Deep Knowledge Tracing. [url]
  • Deep learning. [nature] ⭐
  • Distilling the Knowledge in a Neural Network. [[pdf]](docs/2015/Distilling the Knowledge in a Neural Network.pdf) [url] ⭐
  • Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning. [url]
  • Effective LSTMs for Target-Dependent Sentiment Classification. [arxiv]
  • Human-level concept learning through probabilistic program induction. [science] ⭐
  • Learning Simple Algorithms from Examples. [[pdf]](docs/2015/Learning Simple Algorithms from Examples.pdf) [url]
  • Learning to Transduce with Unbounded Memory. [url]
  • LSTM: A Search Space Odyssey. [[pdf]](docs/2015/LSTM A Search Space Odyssey(2015).pdf) [url] ⭐
  • LSTM-based Deep Learning Models for non-factoid answer selection. [url]
  • Neural GPUs Learn Algorithms. [[pdf]](docs/2015/Neural GPUs Learn Algorithms.pdf) [arxiv] [tensorflow] ⭐
  • Neural Programmer: Inducing Latent Programs with Gradient Descent. [url]
  • Pointer Networks. [arxiv] [tensorflow] ⭐
  • Poker-CNN: A Pattern Learning Strategy for Making Draws and Bets in Poker Games. [[pdf]](docs/2015/Poker-CNN- A Pattern Learning Strategy for Making Draws and Bets in Poker Games.pdf) [url]
  • Policy Distillation. [arxiv] ⭐
  • Regularizing RNNs by Stabilizing Activations. [url]
  • ReNet: A Recurrent Neural Network Based Alternative to Convolutional Networks. [[pdf]](docs/2015/ReNet- A Recurrent Neural Network Based Alternative to Convolutional Networks.pdf) [url]
  • Semi-Supervised Learning with Ladder Networks. [url] ⭐
  • Session-based Recommendations with Recurrent Neural Networks. [url]
  • Skip-Thought Vectors. [[pdf]](docs/2015/Skip-Thought Vectors.pdf) [url] ⭐
  • Training Very Deep Networks. [[pdf]](docs/2015/Training Very Deep Networks.pdf) [url] ⭐
  • Tree-structured composition in neural networks without tree-structured architectures. [[pdf]](docs/2015/Tree-structured composition in neural networks without tree-structured architectures.pdf) [url]
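
Several of the starred entries above come down to a single update rule. The Adam paper, for example, keeps exponential moving averages of the gradient and of its square, with bias correction for the early steps. A minimal NumPy sketch of one update (the function name and flattened-parameter interface are illustrative, not from the paper):

```python
import numpy as np

def adam_step(param, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update (Kingma & Ba, 2015): bias-corrected moving
    averages of the gradient (m) and squared gradient (v)."""
    m = beta1 * m + (1 - beta1) * grad        # first moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2   # second (uncentered) moment estimate
    m_hat = m / (1 - beta1 ** t)              # bias correction, t starts at 1
    v_hat = v / (1 - beta2 ** t)
    param = param - lr * m_hat / (np.sqrt(v_hat) + eps)
    return param, m, v
```

Because the effective step size is roughly bounded by `lr` regardless of gradient scale, Adam behaves well on the badly conditioned objectives common in deep networks.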

Computer vision

  • A Neural Algorithm of Artistic Style. [arxiv] [code] ⭐
  • [ResNet] Deep Residual Learning for Image Recognition. [[pdf]](docs/2015/Deep Residual Learning for Image Recognition.pdf) [arxiv] [tensorflow] ⭐
  • [PReLU] Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification. [[pdf]](docs/2015/Delving Deep into Rectifiers- Surpassing Human-Level Performance on ImageNet Classification(2015).pdf) [url] ⭐
  • [FaceNet] FaceNet: A Unified Embedding for Face Recognition and Clustering. [arxiv] [tensorflow] ⭐
  • [Faster R-CNN] Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. [arxiv] [code] ⭐
  • [Fast R-CNN] Fast R-CNN. [arxiv] [code] ⭐
  • Hierarchical Recurrent Neural Network for Skeleton Based Action Recognition. [url] ⭐
  • Inceptionism: Going Deeper into Neural Networks. [googleblog] ⭐
  • Inside-Outside Net: Detecting Objects in Context with Skip Pooling and Recurrent Neural Networks. [[pdf]](docs/2015/Inside-Outside Net- Detecting Objects in Context with Skip Pooling and Recurrent Neural Networks.pdf) [url]
  • ReSeg: A Recurrent Neural Network for Object Segmentation. [[pdf]](docs/2015/ReSeg- A Recurrent Neural Network for Object Segmentation.pdf) [url] ⭐
  • Rethinking the Inception Architecture for Computer Vision. [[pdf]](docs/2015/Rethinking the Inception Architecture for Computer Vision.pdf) [arxiv] [tensorflow] ⭐
  • [FCNT] Visual Tracking with fully Convolutional Networks. [cuhk] [code] ⭐
  • You Only Look Once: Unified, Real-Time Object Detection. [arxiv]
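
The ResNet entry above rests on one structural idea: layers fit a residual F(x) that is added back to the input through an identity shortcut, which eases optimization of very deep networks. A toy fully-connected sketch (real ResNets use convolutions and batch normalization; the names here are illustrative):

```python
import numpy as np

def residual_block(x, w1, w2):
    """Identity-shortcut residual block: output = ReLU(x + F(x)),
    where F is a small two-layer residual branch."""
    relu = lambda z: np.maximum(z, 0.0)
    f = relu(x @ w1) @ w2   # residual branch F(x)
    return relu(x + f)      # shortcut connection adds the input back
```

If the residual branch outputs zero, the block reduces to the identity (on non-negative inputs), which is why stacking many such blocks does not degrade an already-good shallower network.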

Generative learning

Attention and memory

  • [ABCNN] ABCNN: Attention-Based Convolutional Neural Network for Modeling Sentence Pairs. [arxiv] ⭐
  • Action Recognition using Visual Attention. [arxiv] [code]
  • Attention-Based Models for Speech Recognition. [arxiv]
  • Attention with Intention for a Neural Network Conversation Model. [url]
  • [VAE with attention] DRAW: A Recurrent Neural Network For Image Generation. [arxiv] [code] ⭐
  • Agreement-based Joint Training for Bidirectional Attention-based Neural Machine Translation. [arxiv]
  • Ask Me Anything: Dynamic Memory Networks for Natural Language Processing. [arxiv] ⭐
  • A Neural Attention Model for Sentence Summarization. [url] ⭐
  • [Global And Local Attention] Effective Approaches to Attention-based Neural Machine Translation. [arxiv] [code] [tensorflow] ⭐
  • End-to-End Attention-based Large Vocabulary Speech Recognition. [url]
  • End-To-End Memory Networks. [arxiv] ⭐
  • Grammar as a Foreign Language. [arxiv] ⭐
  • Large-scale Simple Question Answering with Memory Networks. [arxiv]
  • Learning Deep Neural Network Policies with Continuous Memory States. [arxiv]
  • [LAS] Listen, Attend and Spell. [[pdf]](docs/2015/Listen, Attend and Spell.pdf) [arxiv] ⭐
  • Listen, Attend, and Walk: Neural Mapping of Navigational Instructions to Action Sequences. [arxiv] ⭐
  • Memory-based control with recurrent neural networks. [arxiv]
  • Not All Contexts Are Created Equal: Better Word Representations with Variable Attention. [pdf]
  • Reinforcement Learning Neural Turing Machines. [arxiv] ⭐
  • [Soft And Hard Attention] Show, Attend and Tell: Neural Image Caption Generation with Visual Attention. [arxiv] [code] [tensorflow] ⭐
  • Teaching Machines to Read and Comprehend. [arxiv] ⭐
  • Video Description Generation Incorporating Spatio-Temporal Features and a Soft-Attention Mechanism. [arxiv]
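
The soft-attention mechanism shared by many entries in this section reduces to: score each encoder state against a decoder query, softmax the scores into weights, and take a weighted sum. A dot-product sketch in the style of "Effective Approaches to Attention-based Neural Machine Translation" (function and variable names are illustrative):

```python
import numpy as np

def soft_attention(query, keys, values):
    """Dot-product soft attention: softmax-weighted sum of values.
    query: (d,), keys: (T, d), values: (T, d_v)."""
    scores = keys @ query                  # (T,) alignment scores
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()               # softmax over the T time steps
    context = weights @ values             # (d_v,) context vector
    return context, weights
```

Subtracting `scores.max()` before exponentiating is the standard numerically stable softmax; it leaves the weights unchanged.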

Transfer learning

  • Heterogeneous defect prediction. [pdf]
  • Learning Transferred Weights From Co-Occurrence Data for Heterogeneous Transfer Learning. [pdf]
  • Net2Net: Accelerating Learning via Knowledge Transfer. [arxiv] ⭐
  • Spatial Transformer Networks. [arxiv] [tensorflow] ⭐
  • Transfer Learning from Deep Features for Remote Sensing and Poverty Mapping. [arxiv]
  • Transfer learning using computational intelligence: A survey. [url]
  • Transfer learning used to analyze the dynamic evolution of the dust aerosol. [url]
  • Transferring Rich Feature Hierarchies for Robust Visual Tracking. [arxiv] ⭐
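
The Net2Net entry above is built on function-preserving transformations: a trained network is widened (or deepened) so the larger network computes exactly the same function before fine-tuning continues. A sketch of the Net2WiderNet operator for one fully-connected layer (the interface is illustrative; the paper also handles convolutions):

```python
import numpy as np

def net2wider(w_in, w_out, new_width, rng=None):
    """Net2WiderNet: widen a hidden layer by replicating random units
    (copying incoming weights) and splitting their outgoing weights,
    so the widened network computes the same function."""
    if rng is None:
        rng = np.random.default_rng(0)
    old_width = w_in.shape[1]
    assert new_width >= old_width
    # Keep every original unit, then replicate randomly chosen ones.
    mapping = np.concatenate([np.arange(old_width),
                              rng.integers(0, old_width, new_width - old_width)])
    counts = np.bincount(mapping, minlength=old_width)  # replication factor
    new_w_in = w_in[:, mapping]                         # copy incoming weights
    new_w_out = w_out[mapping, :] / counts[mapping][:, None]  # split outgoing
    return new_w_in, new_w_out
```

Dividing each replicated unit's outgoing weights by its replication count is what makes the transformation exact rather than approximate.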

One/zero-shot learning

  • Siamese Neural Networks for One-shot Image Recognition. [pdf]

Deep reinforcement learning

  • ADAAPT: A Deep Architecture for Adaptive Policy Transfer from Multiple Sources. [arxiv]
  • Action-Conditional Video Prediction using Deep Networks in Atari Games. [arxiv] ⭐
  • Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning. [arxiv] ⭐
  • [DDPG] Continuous control with deep reinforcement learning. [arxiv] ⭐
  • [NAF] Continuous Deep Q-Learning with Model-based Acceleration. [arxiv] ⭐
  • Dueling Network Architectures for Deep Reinforcement Learning. [arxiv] ⭐
  • Deep Reinforcement Learning with an Action Space Defined by Natural Language. [arxiv]
  • Deep Reinforcement Learning with Double Q-learning. [arxiv] ⭐
  • Deep Recurrent Q-Learning for Partially Observable MDPs. [arxiv] ⭐
  • DeepMPC: Learning Deep Latent Features for Model Predictive Control. [url]
  • Deterministic Policy Gradient Algorithms. [url] ⭐
  • End-to-End Training of Deep Visuomotor Policies. [arxiv] ⭐
  • Giraffe: Using Deep Reinforcement Learning to Play Chess. [arxiv]
  • Generating Text with Deep Reinforcement Learning. [arxiv]
  • How to Discount Deep Reinforcement Learning: Towards New Dynamic Strategies. [arxiv]
  • Human-level control through deep reinforcement learning. [nature] ⭐
  • Incentivizing Exploration In Reinforcement Learning With Deep Predictive Models. [arxiv] ⭐
  • Learning Simple Algorithms from Examples. [arxiv]
  • Language Understanding for Text-based Games Using Deep Reinforcement Learning. [url] ⭐
  • Learning Continuous Control Policies by Stochastic Value Gradients. [url] ⭐
  • Multiagent Cooperation and Competition with Deep Reinforcement Learning. [arxiv]
  • Maximum Entropy Deep Inverse Reinforcement Learning. [arxiv]
  • Massively Parallel Methods for Deep Reinforcement Learning. [url] ⭐
  • On Learning to Think: Algorithmic Information Theory for Novel Combinations of Reinforcement Learning Controllers and Recurrent Neural World Models. [arxiv]
  • Playing Atari with Deep Reinforcement Learning. [arxiv]
  • Recurrent Reinforcement Learning: A Hybrid Approach. [arxiv]
  • Strategic Dialogue Management via Deep Reinforcement Learning. [arxiv]
  • Towards Vision-Based Deep Reinforcement Learning for Robotic Motion Control. [arxiv]
  • Trust Region Policy Optimization. [url] ⭐
  • Universal Value Function Approximators. [url]
  • Variational Information Maximisation for Intrinsically Motivated Reinforcement Learning. [arxiv]
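
The Double Q-learning entry above changes only the bootstrap target of DQN: the online network selects the next action, while the target network evaluates it, which reduces the overestimation bias of a single max. A minimal sketch of the target computation (function name and scalar interface are illustrative):

```python
import numpy as np

def double_q_target(reward, next_q_online, next_q_target, gamma=0.99, done=False):
    """Double DQN target: decouple action selection (online net)
    from action evaluation (target net)."""
    if done:
        return reward
    best_action = int(np.argmax(next_q_online))          # selection: online net
    return reward + gamma * next_q_target[best_action]   # evaluation: target net
```

With the standard DQN target, `reward + gamma * max(next_q_target)`, the same noisy estimates both pick and score the action, systematically inflating values; the decoupling above is the paper's one-line fix.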

Natural language processing

  • A Primer on Neural Network Models for Natural Language Processing. [url]
  • A Unified Tagging Solution: Bidirectional LSTM Recurrent Neural Network with Word Embedding. [url]
  • Alternative structures for character-level RNNs. [[pdf]](docs/2015/Alternative structures for character-level RNNs.pdf) [url]
  • Ask Me Anything: Dynamic Memory Networks for Natural Language Processing. [url] ⭐
  • BlackOut: Speeding up Recurrent Neural Network Language Models With Very Large Vocabularies. [url]
  • Character-Aware Neural Language Models. [url] ⭐
  • Character-level Convolutional Networks for Text Classification. [[pdf]](docs/2015/Character-level Convolutional Networks for Text Classification.pdf) [url]
  • Deep Speech 2: End-to-End Speech Recognition in English and Mandarin. [[pdf]](docs/2015/Deep Speech 2- End-to-End Speech Recognition in English and Mandarin.pdf) [url] ⭐
  • Distributed Representations of Sentences and Documents. [[pdf]](docs/2015/Distributed Representations of Sentences and Documents.pdf) [url] ⭐
  • Dynamic Capacity Networks. [[pdf]](docs/2015/Dynamic Capacity Networks.pdf) [url]
  • Improved Transition-Based Parsing by Modeling Characters instead of Words with LSTMs. [[pdf]](docs/2015/Improved Transition-Based Parsing by Modeling Characters instead of Words with LSTMs.pdf) [url]
  • Larger-Context Language Modeling. [[pdf]](docs/2015/Larger-Context Language Modeling.pdf) [url]
  • Multi-task Sequence to Sequence Learning. [[pdf]](docs/2015/Multi-task Sequence to Sequence Learning.pdf) [url]
  • Natural Language Understanding with Distributed Representation. [[pdf]](docs/2015/Natural Language Understanding with Distributed Representation.pdf) [url]
  • Neural Machine Translation of Rare Words with Subword Units. [arxiv]
  • Neural Responding Machine for Short-Text Conversation. [url]
  • Part-of-Speech Tagging with Bidirectional Long Short-Term Memory Recurrent Neural Network. [[pdf]](docs/2015/Part-of-Speech Tagging with Bidirectional Long Short-Term Memory Recurrent Neural Network.pdf) [url]
  • Reading Scene Text in Deep Convolutional Sequences. [url]
  • Recurrent Convolutional Neural Networks for Text Classification. [pdf]
  • Semi-supervised Sequence Learning. [[pdf]](docs/2015/Semi-supervised Sequence Learning.pdf) [url]
  • Semantically Conditioned LSTM-based Natural Language Generation for Spoken Dialogue Systems. [arxiv]
  • sense2vec - A Fast and Accurate Method for Word Sense Disambiguation In Neural Word Embeddings. [url]
  • Sequence Level Training with Recurrent Neural Networks. [[pdf]](docs/2015/Sequence Level Training with Recurrent Neural Networks.pdf) [url]
  • Strategies for Training Large Vocabulary Neural Language Models. [[pdf]](docs/2015/Strategies for Training Large Vocabulary Neural Language Models.pdf) [url]
  • Towards Universal Paraphrastic Sentence Embeddings. [url]
  • Visualizing and Understanding Neural Models in NLP. [url]
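
The subword-units entry above popularized byte-pair encoding for NMT vocabularies: starting from characters, repeatedly merge the most frequent adjacent symbol pair into a new symbol. A compact sketch of the learning loop (names and the word-list interface are illustrative):

```python
from collections import Counter

def bpe_merges(words, num_merges):
    """Learn byte-pair-encoding merges: greedily fuse the most
    frequent adjacent symbol pair, num_merges times."""
    vocab = {tuple(w): c for w, c in Counter(words).items()}
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for symbols, count in vocab.items():
            for a, b in zip(symbols, symbols[1:]):
                pairs[(a, b)] += count
        if not pairs:
            break
        best = max(pairs, key=pairs.get)   # most frequent adjacent pair
        merges.append(best)
        new_vocab = {}
        for symbols, count in vocab.items():
            out, i = [], 0
            while i < len(symbols):
                if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == best:
                    out.append(symbols[i] + symbols[i + 1])  # apply the merge
                    i += 2
                else:
                    out.append(symbols[i])
                    i += 1
            new_vocab[tuple(out)] = count
        vocab = new_vocab
    return merges, vocab
```

Frequent words end up as single symbols while rare words decompose into subwords, letting an NMT system handle an open vocabulary with a fixed symbol inventory.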