jessemelpolio / non-stationary_texture_syn
Code used for texture synthesis using GAN
Home Page: http://vcc.szu.edu.cn/research/2018/TexSyn
License: MIT License
Hi,
In your paper you mention diversity augmentation by feeding shuffled tiles of the original image to the generator; have you implemented this anywhere on GitHub?
Thank you.
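The repository does not appear to include that step, but a minimal sketch of the idea (splitting the source image into square tiles and shuffling them before handing the result to the generator) could look like the following. The function name, tile size, and numpy-based approach are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def shuffle_tiles(image, tile_size, rng=None):
    """Split an H x W x C image into non-overlapping tiles and shuffle them.

    Assumes H and W are divisible by tile_size. Illustrative sketch of the
    diversity augmentation described in the paper, not the authors' code.
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w, c = image.shape
    th, tw = h // tile_size, w // tile_size
    # Rearrange into a grid of tiles: (th, tw, tile_size, tile_size, C)
    tiles = image.reshape(th, tile_size, tw, tile_size, c).transpose(0, 2, 1, 3, 4)
    flat = tiles.reshape(th * tw, tile_size, tile_size, c)
    rng.shuffle(flat)  # shuffle the tile order along the first axis
    # Reassemble the shuffled tiles back into a full image
    tiles = flat.reshape(th, tw, tile_size, tile_size, c).transpose(0, 2, 1, 3, 4)
    return tiles.reshape(h, w, c)
```

The shuffled image keeps exactly the same pixel statistics as the source, which is presumably why it can serve as an augmented training input.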
Hi, I have a question that is not directly related to your algorithm. It's about the experiments in your paper.
I noticed you compared your results with Self-tuning and MGANs in figure 7. As far as I know, the MGANs code cannot be used for texture synthesis, even though those results appear in their paper, and the code for the self-tuning method has some bugs. How did you run their code?
sorry
Lines of code have been changed to work on PyTorch 3.9.0 and CUDA 11.3 with Python 3.7:
https://github.com/Alhasan-Abdellatif/non-stationary_texture_syn
I think the dataset used for training is a bit small. Do you agree?
@jessemelpolio Hello, I found that scripts/train_half_style.sh only takes one image (i.e. datasets/half/202/train/202.jpg) as training data. In the testing phase, the trained model can only take the same image (i.e. dataset/half/202/test/202.jpg) as input and output a bigger one. What I want to know is whether this trained model is only valid for that one picture. If I want to test another image, does the model need to be retrained? I think this is a bit like ZSSR.
Hi,
I'm running the code on Linux with an NVIDIA P100 card (16 GB memory) and PyTorch 0.4. Training on small images around 256x256 to generate 512x512 works fine. However, when I train on a 500x500 image, it throws the out-of-memory error below during the validation stage. I think 16 GB should be sufficient for this image size; do you have any idea why this happens?
Traceback (most recent call last):
File "train.py", line 66, in <module>
test_func(opt, webpage, epoch=str(epoch))
File "/home/yuzhang/non-stationary_texture_syn/test_function.py", line 48, in test_func
model.test()
File "/home/yuzhang/non-stationary_texture_syn/models/test_model.py", line 38, in test
self.fake_B = self.netG.forward(self.real_A)
File "/home/yuzhang/non-stationary_texture_syn/models/networks.py", line 221, in forward
return self.model(input)
File "/home/yuzhang/anaconda3/envs/pytorch_3.5/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
result = self.forward(*input, **kwargs)
File "/home/yuzhang/anaconda3/envs/pytorch_3.5/lib/python3.6/site-packages/torch/nn/modules/container.py", line 91, in forward
input = module(input)
File "/home/yuzhang/anaconda3/envs/pytorch_3.5/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
result = self.forward(*input, **kwargs)
File "/home/yuzhang/anaconda3/envs/pytorch_3.5/lib/python3.6/site-packages/torch/nn/modules/conv.py", line 301, in forward
self.padding, self.dilation, self.groups)
RuntimeError: CUDA error: out of memory
I'm training the model on the following options:
python train.py --dataroot ./datasets/half/1 --name 1_half_style_14x14 --use_style --no_lsgan --padding_type replicate --model half_style --which_model_netG resnet_2x_6blocks --which_model_netD n_layers --n_layers_D 4 --which_direction AtoB --lambda_A 100 --dataset_mode half_crop --norm batch --pool_size 0 --resize_or_crop no --niter_decay 50000 --niter 50000 --save_epoch_freq 2000 --display_id 0 --gpu_ids 0
Once the model finishes training, it seems that the image is merely upscaled: the resolution doubles, but no new patterns are synthesized around the original content as shown in your paper. I've tried images without patterns and images with repetitive macro patterns.
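For scale, a back-of-the-envelope estimate of a single full-resolution feature map suggests why 500x500 inputs are so much heavier than 256x256 ones. The channel count (64) and float32 assumption below are illustrative, not taken from the repository's actual network configuration:

```python
def feature_map_mb(height, width, channels=64, bytes_per_value=4):
    """Approximate size in MB of one float32 feature map of the given shape."""
    return height * width * channels * bytes_per_value / 1024**2

# The generator doubles resolution, so a 500x500 input produces 1000x1000 maps.
small = feature_map_mb(512, 512)     # 64.0 MB per map
large = feature_map_mb(1000, 1000)   # ~244.1 MB per map
```

Dozens of such maps held alive at once, plus autograd buffers if the validation pass runs without torch.no_grad(), can plausibly exhaust 16 GB. Wrapping the test-time forward pass in torch.no_grad() is the usual PyTorch remedy; whether this repository's test path already does so is worth checking.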
Hi, this is more of a question, than an issue.
In the definition of backward_G, we take the gram_matrix of real_B but not of the targets, so to compute the loss we compare the Gram matrix of the source with the raw (non-Gram) targets. Is there a reason for that?
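For reference, a Gram-matrix style loss conventionally compares Gram matrices on both sides. A minimal numpy sketch of that convention (the normalization by C*H*W is one common choice, not necessarily what this repository uses):

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a C x H x W feature map, normalized by C * H * W."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (c * h * w)

def style_loss(feat_a, feat_b):
    """Mean squared difference between the two Gram matrices."""
    return float(np.mean((gram_matrix(feat_a) - gram_matrix(feat_b)) ** 2))
```

If the targets in the repository are already precomputed Gram matrices, comparing gram_matrix(real_B) directly against them would be consistent; that may explain the asymmetry the question points at.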
Hi,
I was trying to download your pretrained models via the download_pretrained_models.sh script, but the resource (as well as the entire domain) seems unavailable. Is it possible to download the models from another source?
Thank you in advance for the answer and for the contribution!
Is there any difference between calling (style_loss + perceptual_loss + gan_G).backward() and calling style_loss.backward(retain_graph=True) followed by (perceptual_loss + gan_G).backward()?
thanks
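Mathematically the two should be identical: backward() accumulates into .grad, and the gradient of a sum is the sum of the gradients, so summing the losses before one backward pass and accumulating separate backward passes give the same result (up to floating-point ordering); the split version just costs extra graph traversals and requires retain_graph=True. A plain-Python illustration of that additivity, using a toy loss s(x) = x^2 plus g(x) = 3x:

```python
def grad_sum_of_losses(x):
    """Gradient of the summed loss (x**2 + 3*x) at x: d/dx = 2x + 3."""
    return 2 * x + 3

def grad_accumulated(x):
    """Same gradient, accumulated term by term as repeated backward() calls would."""
    grad = 0.0
    grad += 2 * x  # d(x**2)/dx
    grad += 3      # d(3*x)/dx
    return grad
```

Both return 7.0 at x = 2, mirroring how PyTorch's gradient accumulation makes the two formulations equivalent.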
Hi,
I'm facing this problem when trying to run train.py. It asks me to set the dataroot parameter, but whatever directory I pass, it always fails because it cannot find the "A" and "B" folders.
What am I missing?
Hi, I really appreciate your code, but for some unknown reason the program complains about a connection problem all the time.
Traceback (most recent call last):
File "D:\Anaconda\envs\Py35\lib\site-packages\urllib3\connection.py", line 171, in _new_conn
(self._dns_host, self.port), self.timeout, **extra_kw)
File "D:\Anaconda\envs\Py35\lib\site-packages\urllib3\util\connection.py", line 79, in create_connection
raise err
File "D:\Anaconda\envs\Py35\lib\site-packages\urllib3\util\connection.py", line 69, in create_connection
sock.connect(sa)
ConnectionRefusedError: [WinError 10061] No connection could be made because the target machine actively refused it
This error shows up at line 13 of train.py:
dataset = data_loader.load_data()
It's really strange, because I don't think this program has anything to do with network connections, does it?
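A connection refused at startup in pix2pix-style codebases like this one typically comes from the Visdom visualization client, not the data loader itself; starting a server with python -m visdom.server, or passing --display_id 0 (as the training command in an earlier issue does), usually avoids it. A hedged sketch of a pre-flight check, assuming Visdom's default port 8097 (the helper name is made up):

```python
import socket

def pick_display_id(host="localhost", port=8097, preferred=1, fallback=0):
    """Return `preferred` only if something is listening on the Visdom port;
    otherwise fall back to 0, which disables the web display."""
    try:
        with socket.create_connection((host, port), timeout=1.0):
            return preferred
    except OSError:
        return fallback
```

Wiring such a check into the options parsing would let training proceed without a running Visdom server instead of crashing.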
Good job! It amazes me that a GAN can generate such clear and vivid textures. Could you please release the trained model?
bethgelab is refusing the connection, so I am not able to download the pre-trained model. Could you please provide the file or fix the link? Or suggest any other workaround.
Thanks a lot
Hi, I really appreciate your model, it gives amazing results!
However, I would like to know how we can visualize the intermediate results of the residual blocks. Is each image in figure 6 just one random channel of the 256 channels?
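As a general recipe (not necessarily the authors' exact procedure), intermediate activations can be captured in PyTorch with register_forward_hook, and a single channel of a C x H x W activation can then be min-max normalized to a grayscale image. A numpy sketch of the normalization step:

```python
import numpy as np

def channel_to_image(activation, channel):
    """Min-max normalize one channel of a C x H x W activation to uint8."""
    chan = activation[channel].astype(np.float64)
    lo, hi = chan.min(), chan.max()
    if hi > lo:
        chan = (chan - lo) / (hi - lo)
    else:
        chan = np.zeros_like(chan)  # constant channel maps to black
    return (chan * 255).round().astype(np.uint8)
```

Whether figure 6 shows one channel per image or some aggregate over channels is something only the authors can confirm.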
Do you think it would be possible to train this GAN and then use the discriminator to detect the same surface with defects?
Sorry to use the issues for this ;)
It seems that download_pretrained_models.sh only fetches pre-trained models. Can you share your trained models as well?