Comments (17)
Hi, I'm running gan_language.py. After changing all the NCHW occurrences to the CPU-compatible NHWC, I still get the following error:
ValueError: Dimensions must be equal, but are 32 and 512 for 'Generator.1.1/conv1d/Conv2D' (op: 'Conv2D') with input shapes: [64,1,512,32], [1,5,512,512].
Can anyone tell me how to fix the problem? Thank you very much!
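For what it's worth, a shapes-only numpy sketch (not the repo's code) of why these dimensions clash: with data_format='NHWC', Conv2D reads the input's last axis as channels, but the tensor above is still laid out the NCHW way. Whether axis 2 (size 512) is really the channel axis is an assumption here.

```python
import numpy as np

# Shapes copied from the error message.
x = np.zeros((64, 1, 512, 32), dtype=np.float32)  # input as built by the NCHW code path
filter_in_channels = 512                          # filter shape [1, 5, 512, 512]: [fh, fw, in, out]

# With data_format='NHWC', Conv2D reads the input's LAST axis as channels:
channels_seen = x.shape[-1]          # 32, what the op sees
assert channels_seen != filter_in_channels   # -> "Dimensions must be equal, but are 32 and 512"

# Assuming axis 2 (size 512) is the real channel axis, moving it last
# gives a tensor whose channel count matches the filter:
x_fixed = np.transpose(x, [0, 1, 3, 2])
assert x_fixed.shape == (64, 1, 32, 512)
```

A later comment in this thread applies essentially this fix inside conv1d.py with tf.transpose(result, [0, 1, 3, 2]).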
from improved_wgan_training.
I need to run the MNIST experiments on this repo and can't do it on my GPU because it's busy running some other experiments.
It might also be that people without GPUs want to run the toy example or MNIST...
Thanks for creating this repo. I agree with rafaelvalle - it seems unreasonable that the code should break entirely if not using GPU...
Totally agree. This is extremely helpful for someone like me who can't afford a GPU and wants to run this code as a toy example...
In order to run the MNIST experiments on CPU in wgan-gp mode, besides the changes suggested by @rafaelvalle, it is necessary to also make the following changes:
- conv2d.py: change strides = [1, 1, stride, stride] to strides = [1, stride, stride, 1]
- gan_mnist.py: change the output tensor in Discriminator from output = tf.reshape(inputs, [-1, 1, 28, 28]) to output = tf.reshape(inputs, [-1, 28, 28, 1])
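For reference, a minimal numpy sketch of the two layout conventions behind these edits (shapes only, no TensorFlow; the flat MNIST batch is hypothetical):

```python
import numpy as np

batch = np.zeros((64, 784), dtype=np.float32)   # hypothetical flat MNIST batch

# NCHW, the repo's GPU default: [batch, channels, height, width]
nchw = batch.reshape(-1, 1, 28, 28)
# NHWC, required by the CPU kernels: [batch, height, width, channels]
nhwc = batch.reshape(-1, 28, 28, 1)

# The strides list must follow the same axis ordering:
stride = 2
strides_nchw = [1, 1, stride, stride]   # [N, C, H, W]
strides_nhwc = [1, stride, stride, 1]   # [N, H, W, C]

print(nchw.shape)  # (64, 1, 28, 28)
print(nhwc.shape)  # (64, 28, 28, 1)
```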
However, to make this code run in CPU mode, I had to make extra changes beyond the ones mentioned by rafaelvalle.
For example, in conv1d.py, I had to change result = tf.expand_dims(result, 3)
to result = tf.expand_dims(result, **1**),
since the data format changed from 'NCHW' to 'NHWC'.
As I am pretty new to DL and TF, I am struggling to understand the data layout, and the changes I made to get this code running are pretty ugly. Anyway, I am asking my boss to buy a GPU so that I can get rid of these headaches...
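A shapes-only numpy illustration of why the expand_dims axis moves (the sizes are illustrative, and the exact axes in the repo's conv1d.py may differ; the point is that the dummy height axis lands in a different position depending on the layout):

```python
import numpy as np

# conv1d is implemented via conv2d by inserting a dummy height axis.
# NCHW ordering: a batch of 1-D signals is [batch, channels, width].
x_nchw = np.zeros((64, 512, 32), dtype=np.float32)
h_nchw = np.expand_dims(x_nchw, 2)   # -> (64, 512, 1, 32): H sits between C and W

# NHWC ordering: the same batch is [batch, width, channels].
x_nhwc = np.zeros((64, 32, 512), dtype=np.float32)
h_nhwc = np.expand_dims(x_nhwc, 1)   # -> (64, 1, 32, 512): H comes right after N
```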
This can be fixed by changing each NCHW below to the CPU-compatible NHWC:
./tflib/ops/batchnorm.py: return tf.nn.fused_batch_norm(inputs, scale, offset, epsilon=1e-5, data_format='NCHW')
./tflib/ops/batchnorm.py: # data_format='NCHW'
./tflib/ops/conv1d.py: data_format='NCHW'
./tflib/ops/conv1d.py: result = tf.nn.bias_add(result, _biases, data_format='NCHW')
./tflib/ops/conv2d.py: data_format='NCHW'
./tflib/ops/conv2d.py: result = tf.nn.bias_add(result, _biases, data_format='NCHW')
./tflib/ops/deconv2d.py: inputs = tf.transpose(inputs, [0,2,3,1], name='NCHW_to_NHWC')
./tflib/ops/deconv2d.py: result = tf.transpose(result, [0,3,1,2], name='NHWC_to_NCHW')
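The two transposes in deconv2d.py above are exact inverses of each other; a quick numpy check of the permutations:

```python
import numpy as np

x = np.arange(2 * 3 * 4 * 5).reshape(2, 3, 4, 5)   # NCHW: [N, C, H, W]
nhwc = np.transpose(x, [0, 2, 3, 1])               # NCHW_to_NHWC
back = np.transpose(nhwc, [0, 3, 1, 2])            # NHWC_to_NCHW

assert nhwc.shape == (2, 4, 5, 3)
assert (back == x).all()                           # round trip restores the layout
```

If the rest of the pipeline is switched to NHWC, these internal transposes become unnecessary rather than needing to be edited.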
It seems like you're trying to run the model without a GPU. Is there any reason you want to do this? I think it would be too slow to be practical on a CPU.
@georgiazhang I am also facing a similar problem.
@igul222 , how can we mitigate this while running on CPU?
@rafaelvalle , how long does it take for each iteration in case of MNIST training on CPU? For me it is taking about 12s. Is it too high for one mini-batch?
@georgiazhang I am facing the same problem. Have you fixed this issue?
@kaiyu-tang
@MLEnthusiast
@georgiazhang
I am facing the same problem. Have you fixed this issue?
@kaiyu-tang
@MLEnthusiast
@georgiazhang
@Samt7
I am facing the same problem. Have you fixed this issue?
@kaiyu-tang
@MLEnthusiast
@georgiazhang
@Samt7
@nickyoungforu
I am facing the same problem. Have you fixed this issue?
@1213999170
I modified the tensor shapes after changing data_format to NHWC:
./tflib/ops/conv1d.py: line 104, add result = tf.transpose(result, [0, 1, 3, 2])
./gan_language.py: line 71, # output = tf.transpose(output, [0, 2, 1])
./gan_language.py: line 76, # output = tf.transpose(inputs, [0,2,1])
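A shapes-only numpy illustration of what the added transpose does, using the input shape from the error at the top of the thread (the shape is an assumption):

```python
import numpy as np

# [0, 1, 3, 2] swaps the last two axes, moving the channel axis
# into the last (NHWC) position.
x = np.zeros((64, 1, 512, 32), dtype=np.float32)
y = np.transpose(x, [0, 1, 3, 2])
assert y.shape == (64, 1, 32, 512)
```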
Conv2DCustomBackpropInputOp only supports NHWC.
I want to know how to solve this problem. Can you tell me?