
Add method for reversing GAN to get latent representation for images #4

Open

wants to merge 1 commit into master
Conversation

avlaskin

Add a method for reversing the GAN to get a latent representation for images. This can help with future utilisation of the generator network. This PR also removes some trailing whitespace.
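For readers unfamiliar with this style of GAN inversion, the core idea is gradient descent on the latent vector to minimise the reconstruction error against a target image. Below is a minimal NumPy sketch using a toy linear "generator" as a stand-in; the PR's `reverse_gan_for_etalons` does the same thing with TensorFlow autodiff through the real network, so all names and shapes here are illustrative assumptions, not the PR's implementation.

```python
import numpy as np

# Toy "generator": a fixed linear map from a 4-d latent to a 16-d "image".
# A real GAN generator is nonlinear, but the inversion loop is identical:
# repeatedly step the latent down the gradient of the reconstruction loss.
rng = np.random.default_rng(0)
W = rng.standard_normal((16, 4))

def generate(z):
    return W @ z

def invert(target, steps=1000, lr=0.02):
    """Recover a latent z such that generate(z) approximates target."""
    z = np.zeros(4)
    for _ in range(steps):
        residual = generate(z) - target  # d(loss)/d(output) for 0.5*||out - target||^2
        grad = W.T @ residual            # chain rule back through the linear generator
        z -= lr * grad                   # gradient descent step on the latent
    return z

z_true = rng.standard_normal(4)
target = generate(z_true)
z_rec = invert(target)
```

For the linear toy the recovered latent matches exactly; with a real generator the loop typically only finds an approximate latent, which is why the PR tracks the best loss seen so far.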


veqtor commented Aug 13, 2018

Nice!

@leweohlsen

This is really useful. However, the latents being returned are all nan-values. I am working with half-precision floats. Is anyone else encountering the same problem?


Wuvist commented Feb 15, 2019

@avlaskin I tried to use the reverse_gan_for_etalons method with:

latents = np.random.RandomState(1).randn(1000, *Gs.input_shapes[0][1:]) # 1000 random latents
latents = latents[[0]] # hand-picked top-1
labels = np.zeros([latents.shape[0]] + Gs.input_shapes[1][1:])
img = load_image("test.png")
Gs.reverse_gan_for_etalons(latents, labels, img)

However, I keep getting the error:

InvalidArgumentError (see above for traceback): Incompatible shapes: [2] vs. [0]

Apparently, it happens at the line

gradient = tf.gradients(loss, input_latents)

The tensor input_latents seems wrong. Is it because I shouldn't construct the latents from a random state?

Thank you.
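One thing worth double-checking before blaming the random state: in NumPy, `latents[[0]]` and `latents[0]` return different shapes, and accidentally dropping the batch axis is a common source of shape-mismatch errors like this. A hypothetical illustration (512 is a stand-in latent size, not necessarily what `Gs.input_shapes` reports):

```python
import numpy as np

latents = np.random.RandomState(1).randn(1000, 512)

kept = latents[[0]]   # advanced (list) indexing keeps the batch axis
dropped = latents[0]  # plain integer indexing drops it

print(kept.shape)     # (1, 512)
print(dropped.shape)  # (512,)
```

If the latents here are correct, the mismatch may instead come from the labels or the loaded image, so checking each array's shape against `Gs.input_shapes` before the call is a cheap first step.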


yjs0704 commented Mar 7, 2019

> This is really useful. However, the latents being returned are all nan-values. I am working with half-precision floats. Is anyone else encountering the same problem?

I got the same problem. It turned out that all my g values were greater than the initial c_min (1e9). I changed it to 1e12 and obtained non-nan outputs, but the images actually generated from the recovered latent representations do not quite match my original inputs.
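The nan behaviour with half precision is consistent with float16 overflow: float16 tops out around 65504, so even one pixel-scale value squared becomes inf, and inf arithmetic then produces nan in the gradients. A minimal illustration:

```python
import numpy as np

# float16 cannot represent values above ~65504; squaring a pixel-scale
# number overflows to inf, and subsequent inf arithmetic yields nan.
x = np.float16(300.0)
sq = x * x    # 90000 exceeds the float16 maximum -> inf
diff = sq - sq  # inf - inf is nan
```

This is why raising c_min alone only hides the symptom: the loss values themselves are already saturating in float16.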


dmenig commented Jul 20, 2019

Thanks for this work. I was also getting nan values while trying to reconstruct an image with an fp16-trained model on a custom dataset. I just replaced the loss you wrote with loss = tf.reduce_sum(tf.div(tf.pow(out_expr[0] - psy, 2), 1000.)), changed c_min to 1e12, and it works.
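A NumPy sketch of why this scaling helps: dividing the squared error by 1000 before summing keeps every intermediate value inside float16 range, whereas the raw sum overflows. Here `out` and `psy` are random stand-ins for the generator output and the target image, at pixel scale:

```python
import numpy as np

rng = np.random.default_rng(0)
out = rng.uniform(0, 255, size=(16, 16, 3)).astype(np.float16)
psy = rng.uniform(0, 255, size=(16, 16, 3)).astype(np.float16)

sq_err = np.square(out - psy)  # each term is below the float16 max, still finite

# Accumulating in float16 (as an fp16 graph effectively does):
unscaled = np.sum(sq_err, dtype=np.float16)                     # running sum overflows -> inf
scaled = np.sum(sq_err / np.float16(1000.0), dtype=np.float16)  # stays in range
```

Casting the loss computation up to float32 would be an alternative fix, but the divide-by-1000 trick has the advantage of leaving the rest of the fp16 graph untouched.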
