Comments (4)
That is a good suggestion. We could really use faster training. We are more focused on providing the empirical validation for the MSG-GAN technique in this repo. Contributions are most welcome 😄.
Best regards,
@akanimax
from bmsg-gan.
Happy to contribute on this front, as I have a real interest in seeing this happen. Can you comment a bit more about what is and isn't parallelizable in the current architecture and the reasons for those decisions?
I think your notes outlined some particulars that are not currently parallelized.
from bmsg-gan.
Awesome. I'll need to take a deeper look into the code for that. It's night here actually. Will do that tomorrow. Thanks! 👍
Cheers 🍻!
@akanimax
from bmsg-gan.
Okay, so the notes about parallelism refer to the use of the DataParallel technique over multiple GPUs. That comment says that all computations up to the final block of the Discriminator take place across multiple GPUs, but the results are then gathered onto the first GPU. This is done to handle the MinibatchStd operation, which requires the whole batch of data in order to evaluate its statistic. This step could also be parallelized across multiple GPUs using lower-level constructs, but for our use case the current solution seemed fine.
Basically, if you are running on a single GPU, there are no such issues; PyTorch GPU acceleration is supported automatically.
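The arrangement described above can be sketched roughly as follows. This is a minimal illustration, not the actual BMSG-GAN code: the layer shapes and the `trunk`/`final_block` split are hypothetical, but the pattern is the one described, i.e. wrap only the per-sample part of the Discriminator in `nn.DataParallel`, and keep the batch-dependent MinibatchStd block on a single device.

```python
import torch
import torch.nn as nn

class MinibatchStdDev(nn.Module):
    """Appends a std-dev statistic computed over the WHOLE batch as an
    extra feature map. Splitting the batch across GPUs would change this
    statistic, so it must see the full batch on one device."""
    def forward(self, x):
        # std over the batch dimension, averaged into a single scalar
        y = x.std(dim=0, unbiased=False).mean()
        # replicate the scalar as one additional channel
        y = y.view(1, 1, 1, 1).expand(x.size(0), 1, x.size(2), x.size(3))
        return torch.cat([x, y], dim=1)

# Hypothetical split of a discriminator: a trunk that works per-sample
# (safe to parallelize) and a final block that needs the whole batch.
trunk = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(16, 16, 3, padding=1), nn.LeakyReLU(0.2),
)
final_block = nn.Sequential(
    MinibatchStdDev(),       # needs the full batch -> runs on one GPU
    nn.Conv2d(17, 1, 4),     # 16 feature channels + 1 std channel
)

# DataParallel scatters the batch over the available GPUs for the trunk,
# then gathers the results back onto the first GPU, where final_block runs.
trunk = nn.DataParallel(trunk)

x = torch.randn(8, 3, 4, 4)
score = final_block(trunk(x))
print(score.shape)  # torch.Size([8, 1, 1, 1])
```

With no CUDA devices visible, `nn.DataParallel` simply falls through to the wrapped module, which is why single-GPU (or CPU) runs hit none of these issues.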
Hope this info helps.
Please let me know if you need any more info.
Best regards,
@akanimax
from bmsg-gan.