Gradient-based Optimization of Hyperparameters through Reversible Learning

Source code for http://arxiv.org/abs/1502.03492

Abstract:

Tuning hyperparameters of learning algorithms is hard because gradients are usually unavailable. We compute exact gradients of cross-validation performance with respect to all hyperparameters by chaining derivatives backwards through the entire training procedure. These gradients allow us to optimize thousands of hyperparameters, including step-size and momentum schedules, weight initialization distributions, richly parameterized regularization schemes, and neural network architectures. We compute hyperparameter gradients by exactly reversing the dynamics of stochastic gradient descent with momentum.
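To illustrate the idea, here is a minimal sketch of reversing SGD-with-momentum dynamics to obtain hypergradients. This is not the repository's implementation: it assumes a ridge-regression training loss (so Hessian-vector products are analytic), a fixed learning rate and momentum, and plain float64 reversal rather than the paper's exact bit-level storage of the information lost in the momentum update, so the reversal drifts on long runs. The names (`train_grad`, `hess_vec`, `hypergrad`, `log_reg`) are made up for this example.

```python
# Sketch of hypergradients via reversible SGD with momentum (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
X_tr, y_tr = rng.normal(size=(80, 10)), rng.normal(size=80)
X_va, y_va = rng.normal(size=(40, 10)), rng.normal(size=40)

def train_grad(w, log_reg):
    """Gradient of the L2-regularized training loss w.r.t. the weights."""
    return X_tr.T @ (X_tr @ w - y_tr) / len(y_tr) + np.exp(log_reg) * w

def hess_vec(v, log_reg):
    """Hessian-vector product of the training loss (analytic for ridge regression)."""
    return X_tr.T @ (X_tr @ v) / len(y_tr) + np.exp(log_reg) * v

def valid_grad(w):
    """Gradient of the (unregularized) validation loss."""
    return X_va.T @ (X_va @ w - y_va) / len(y_va)

def hypergrad(log_reg, alpha=0.1, gamma=0.9, T=100):
    # Forward pass: SGD with momentum; only the final (w, v) are kept.
    w = np.zeros(X_tr.shape[1])
    v = np.zeros_like(w)
    for _ in range(T):
        v = gamma * v - (1 - gamma) * train_grad(w, log_reg)
        w = w + alpha * v

    # Reverse pass: undo each training step while propagating the
    # validation-loss gradient back through the dynamics.
    dw = valid_grad(w)          # adjoint of the weights
    dv = np.zeros_like(w)       # adjoint of the velocity
    d_alpha, d_log_reg = 0.0, 0.0
    for _ in range(T):
        d_alpha += dw @ v
        w = w - alpha * v                  # reverse the weight update
        g = train_grad(w, log_reg)
        v = (v + (1 - gamma) * g) / gamma  # reverse the momentum update
        dv = dv + alpha * dw
        dw = dw - (1 - gamma) * hess_vec(dv, log_reg)
        d_log_reg += -(1 - gamma) * np.exp(log_reg) * (w @ dv)
        dv = gamma * dv
    return d_alpha, d_log_reg

print(hypergrad(log_reg=-2.0))
```

The returned values are the gradients of the validation loss with respect to the learning rate and the log regularization strength, which could then be fed to an outer optimizer over hyperparameters.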

Authors: Dougal Maclaurin, David Duvenaud, and Ryan P. Adams

Feel free to email us with any questions at [email protected] or [email protected].

For some directions that didn't pan out, take a look at our early research log.
