
Is there any memory-efficient DenseNet implementation using TensorFlow? #36

ybsave opened this issue Oct 17, 2017 · 5 comments

@ybsave

ybsave commented Oct 17, 2017

So far, I cannot find any memory-efficient DenseNet implementation in TensorFlow. In the Torch code, there are explicit assignments of shared memory. Would you please provide some hints on how to implement this in TensorFlow? Thank you.

@liuzhuang13
Owner

Sorry, I'm not quite sure how to do this in TensorFlow.
@taineleau Can you help with this? Thanks!

@taineleau-zz
Contributor

Hi, please first check our technical report, which should give you enough knowledge to implement a memory-efficient DenseNet.
Basically, if you want to implement the memory-efficient version in an NN framework, the framework has to let you assign the output of a specific operation manually (i.e., you can allocate the memory for the output yourself). PyTorch and Caffe support this, and MXNet supports it partially. However, I am not familiar with TensorFlow, so I am not sure how much work it would be if it does not support manual allocation.
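
For concreteness, here is a minimal sketch (an illustration, not code from the report) of that "manually assigned output" idea in PyTorch, which lets you pass a preallocated tensor as an op's output via `out=`. The buffer name and shapes are illustrative; in the actual memory-efficient implementations this write happens inside a custom autograd function whose backward pass recomputes the concatenation, because `out=` ops do not track gradients.

```python
import torch

# One buffer, allocated once and reused by every concatenation in a dense
# block, instead of a fresh allocation per layer.
shared_buffer = torch.empty(0)

def concat_into_shared(features, buffer=shared_buffer):
    # Resize the shared storage to fit this concatenation, then write the
    # result directly into it via the out= argument.
    n, _, h, w = features[0].shape
    total_channels = sum(f.size(1) for f in features)
    buffer.resize_(n, total_channels, h, w)
    return torch.cat(features, dim=1, out=buffer)
```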

@ahundt

ahundt commented Jan 10, 2018

Note that I have an outstanding feature request for the necessary operations in TensorFlow itself at tensorflow/tensorflow#12948.

@taineleau-zz
Contributor

@ahundt Good job! Thanks for filing a feature request on TF.

@joeyearsley

https://github.com/joeyearsley/efficient_densenet_tensorflow

Built using gradient checkpointing, like the gpleiss repo does.
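
For reference, here is a minimal sketch of the gradient-checkpointing idea in TensorFlow 2.x using `tf.recompute_grad` (the repo above may use a different mechanism; the growth rate and layer structure here are illustrative assumptions):

```python
import tensorflow as tf

def make_dense_layer(growth_rate):
    # One BN -> ReLU -> 3x3 conv layer of a dense block. Wrapping the call in
    # tf.recompute_grad drops its intermediate activations after the forward
    # pass and recomputes them during backprop, trading compute for memory.
    bn = tf.keras.layers.BatchNormalization()
    conv = tf.keras.layers.Conv2D(growth_rate, 3, padding="same")

    @tf.recompute_grad
    def layer(x):
        return conv(tf.nn.relu(bn(x)))

    return layer

def make_dense_block(num_layers=6, growth_rate=32):
    layers = [make_dense_layer(growth_rate) for _ in range(num_layers)]

    def block(x):
        # Each layer sees the concatenation of all previous feature maps.
        for layer in layers:
            x = tf.concat([x, layer(x)], axis=-1)
        return x

    return block
```

In a full model you would keep references to these layers (e.g., inside a `tf.keras.Model`) so their variables persist across training steps; the sketch just shows where the checkpoint boundary goes.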
