
Comments (28)

igul222 commented on July 2, 2024

Here are some differences I found between your implementation and ours which might be responsible:

  • Gan.py#141: You train the critic for 100 iters every 500 steps. We don't do this, and it's probably responsible for the spikes in the loss curve. Try removing it.
  • Gan.py#61: It looks like self.images and self.fake_images both have shape [self.batch_size, 64, 64, self.channel]. In this case, alpha should have shape [self.batch_size, 1, 1, 1], and also reduction_indices on line 67 should be [1,2,3].
  • Because you're not using any normalization, you might want to check your weight initializations. We use the initialization in https://arxiv.org/abs/1502.01852 everywhere. Alternately, you might want to just use normalization: in our default LSUN implementation we use batch normalization in the generator and layer normalization in the critic.
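A NumPy sketch of the shapes from the second bullet may help (all names and sizes here are illustrative, not from the repo): `alpha` broadcasts one scalar per sample across an NHWC image, and the gradient-penalty norm is reduced over axes `(1, 2, 3)` so one norm remains per image.

```python
import numpy as np

# Hypothetical shapes matching the discussion: NHWC images.
batch_size, h, w, channels = 4, 64, 64, 3
real = np.random.rand(batch_size, h, w, channels)
fake = np.random.rand(batch_size, h, w, channels)

# alpha must be [batch_size, 1, 1, 1] so a single scalar is broadcast
# across each image, giving one interpolation point per sample.
alpha = np.random.rand(batch_size, 1, 1, 1)
interpolated = alpha * real + (1 - alpha) * fake
assert interpolated.shape == (batch_size, h, w, channels)

# The gradient-penalty norm is per sample: sum over the spatial and
# channel axes (1, 2, 3), leaving one norm per image.
grads = np.random.randn(batch_size, h, w, channels)  # stand-in for d critic / d input
per_sample_norm = np.sqrt(np.sum(grads ** 2, axis=(1, 2, 3)))
assert per_sample_norm.shape == (batch_size,)
gradient_penalty = np.mean((per_sample_norm - 1.0) ** 2)
```

If `alpha` were instead shaped `[batch_size, 64, 64, channels]`, each pixel would get its own interpolation coefficient, which is not what the paper's sampling along straight lines between pairs intends.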

Hope this helps!

from improved_wgan_training.

zhangqianhui commented on July 2, 2024

The critic loss should be negative: it is an estimate of the negative of the Wasserstein distance between the real and fake sample distributions. See the WGAN-GP paper for more details. Also, g_loss = -c_loss = -D(fake_image) + D(real_image), but the gradient of D(real_image) does not flow into the generator network, so g_loss = -D(fake_image).
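The sign conventions above can be sketched with made-up critic scores (not from a trained model; the gradient-penalty term is omitted for brevity):

```python
import numpy as np

# Illustrative critic scores: the critic is trained to score real
# samples higher than fake ones.
d_real = np.array([1.5, 2.0, 1.8])    # D(real_image) per sample
d_fake = np.array([-0.5, 0.1, -0.2])  # D(fake_image) per sample

# Critic loss (to minimize): D(fake) - D(real). When the critic is
# doing well this is negative; its negation estimates the Wasserstein
# distance between the real and fake distributions.
c_loss = np.mean(d_fake) - np.mean(d_real)

# Generator loss: only -D(fake) matters, since D(real) has zero
# gradient with respect to the generator's parameters.
g_loss = -np.mean(d_fake)

assert c_loss < 0  # the critic scores real above fake here
```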

igul222 commented on July 2, 2024

That code looks correct. It's hard to say without seeing the rest of the code, but if you point me to the repo I can try and debug.

zhangqianhui commented on July 2, 2024

This repo: https://github.com/zhangqianhui/wgan-gp-debug

Thank you !

zhangqianhui commented on July 2, 2024

Samples from the generator:
In epoch 4, iter = 12601:

https://github.com/zhangqianhui/wgan-gp-debug/blob/master/sample/train_04_12601.png

In epoch 6, iter = 20401:

https://github.com/zhangqianhui/wgan-gp-debug/blob/master/sample/train_06_20401.png

martinarjovsky commented on July 2, 2024

Could you share the learning curve (i.e., the negative of the critic's loss)?

zhangqianhui commented on July 2, 2024

the curve:
https://github.com/zhangqianhui/wgan-gp-debug/blob/master/sample/curve.png

martinarjovsky commented on July 2, 2024

That doesn't look good. @igul222 did you ever see something like that?

Could you share the full code?

Best :)
Martin

zhangqianhui commented on July 2, 2024

@martinarjovsky Whose code?

martinarjovsky commented on July 2, 2024

Yours!

wchen342 commented on July 2, 2024

I don't know whether it is related, but in my WGAN-GP experiments the generator loss becomes negative, unlike in the original WGAN, where the generator loss is generally positive. Is that normal?

zhangqianhui commented on July 2, 2024

https://github.com/zhangqianhui/wgan-gp-debug @martinarjovsky

zhangqianhui commented on July 2, 2024

@igul222 @martinarjovsky Hello, have you found the reason for the low quality of the generated faces?

martinarjovsky commented on July 2, 2024

Hi! I haven't looked at the code yet. Can you run Ishaan's code (the one in this repo) and see if it gives the same results?

zhangqianhui commented on July 2, 2024

@martinarjovsky OK

zhangqianhui commented on July 2, 2024

@martinarjovsky But his code has not been trained on the CelebA dataset, so which architecture should I use? Is it OK to use gan_64x64.py with DCGAN's architecture?

martinarjovsky commented on July 2, 2024

That should be fine.

zhangqianhui commented on July 2, 2024

I tested this project, and it can generate very realistic face images after training on the CelebA dataset.
But I can't find the reason my code doesn't work the same way.

zhangqianhui commented on July 2, 2024

@igul222 Thanks, I have solved this problem!

martinarjovsky commented on July 2, 2024

Cool! What was the issue?

zhangqianhui commented on July 2, 2024

@martinarjovsky It was igul222's point about Gan.py#61: since self.images and self.fake_images both have shape [self.batch_size, 64, 64, self.channel], alpha should have shape [self.batch_size, 1, 1, 1], and reduction_indices on line 67 should be [1,2,3].

zhangqianhui commented on July 2, 2024

My nextbatch() function also had some problems.

zhangqianhui commented on July 2, 2024

And I think layer normalization is very important.
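A minimal NumPy sketch of what layer normalization does (learned gain and bias omitted): each sample is normalized over its own features, unlike batch normalization, which couples samples across the batch. This per-sample property is why it is the normalization suggested for the WGAN-GP critic, where batch norm would interfere with the per-sample gradient penalty.

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    """Normalize each sample over all of its own features
    (axes 1..3 of an NHWC tensor). Gain/bias parameters omitted."""
    mean = x.mean(axis=(1, 2, 3), keepdims=True)
    var = x.var(axis=(1, 2, 3), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

# Each output sample has ~zero mean and ~unit variance, independent
# of the other samples in the batch.
x = np.random.randn(4, 64, 64, 3) * 5.0 + 2.0
y = layer_norm(x)
```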

Thanks!
@igul222 @martinarjovsky

timho102003 commented on July 2, 2024

Hi @zhangqianhui, I'm new to WGAN-GP.
I'm wondering: if we define the generator loss as g_loss = -D(fake_image), should we expect the loss curve to converge and be minimized, and to be maximized instead if we define g_loss = D(fake_image)?
For the critic loss, if we define c_loss = D(fake_image) - D(real_image), does that mean we expect it to converge and be minimized, and is -c_loss = D(real_image) - D(fake_image) what you called the "negative of the critic loss"?

zhangqianhui commented on July 2, 2024

@timho102003

zhangqianhui commented on July 2, 2024

Are you doing classification after training the WGAN?

timho102003 commented on July 2, 2024

Actually, I've already done multi-task learning on the discriminator: it not only decides real/fake but also classifies identities and other information from the face, and it performs well on protocols such as MPIE. Now I'm trying to merge WGAN into that implementation.
In my network there is an average pooling layer (output Bx320x1x1) before a fully connected layer (in my previous version, the fully connected layer did the multi-task work: real/fake, identity, ...). I removed the real/fake task from the fully connected layer and use the "view" function to reshape the average-pooling output (Bx320x1x1 -> Bx320), which serves as the discriminator output when calculating the Wasserstein loss.
From my previous experience, the model seems to be training in the right direction. However, the negative critic loss did not start from a large negative number and then approach 0 during training.
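A shape-level sketch of the pooling/reshape step described above, in NumPy for illustration (the final linear head mapping 320 features to a single critic score is my assumption; the comment may instead use the 320-dimensional vector directly):

```python
import numpy as np

batch = 8
# Output of the average pool, B x 320 x 1 x 1 (channels-first, as with
# PyTorch's .view usage mentioned in the comment).
pooled = np.random.randn(batch, 320, 1, 1)

# The "view" step: B x 320 x 1 x 1 -> B x 320.
flat = pooled.reshape(batch, 320)

# Hypothetical linear head: the Wasserstein loss expects one scalar
# critic score per sample.
w = np.random.randn(320, 1)
critic_score = flat @ w  # shape (batch, 1)
```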

Generated Image (20epoch): Number of critic=5 while training, LR=0.0001
https://imgur.com/a/QVizP
Negative Critics Curve : - D(fake_image) + D(real_image):
https://imgur.com/a/jLn3q
D Wasserstein Loss: D(fake_image) - D(real_image) +Gradient Penalty
https://imgur.com/a/G5Wvl
G Wasserstein Loss: - D(fake_image)
https://imgur.com/a/F535n

y601757692l commented on July 2, 2024

Hi, I am now also trying to train WGAN-GP on CelebA (cropped and resized to 64x64). I just modified DATA_DIR in gan_64x64.py, but I got an error like this:
IOError: [Errno 2] No such file or directory: '/data-4T-B/yelu/data/dcgan-completion.tensorflow/aligned/img_align_celeba_png/train_64x64/train_64x64/0927649.png'
Could you show me your code? Thanks so much!
