
gesco's People

Contributors

elegan23


gesco's Issues

Size mismatch for COCO checkpoint

Dear authors,
Thank you for this nice work.
When I try to load the checkpoint you provided for image sampling on the COCO dataset, I get the error below. It seems that the number of classes is 150 rather than 183.
RuntimeError: Error(s) in loading state_dict for UNetModel:
size mismatch for middle_block.0.in_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 151, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 184, 3, 3]).
size mismatch for middle_block.0.out_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 151, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 184, 3, 3]).
size mismatch for middle_block.2.in_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 151, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 184, 3, 3]).
size mismatch for middle_block.2.out_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 151, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 184, 3, 3]).
size mismatch for output_blocks.0.0.in_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 151, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 184, 3, 3]).
size mismatch for output_blocks.0.0.out_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 151, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 184, 3, 3]).
...
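
The shapes in the error suggest the released checkpoint was trained with 151 label channels (150 classes plus one) while the script is building a model with 184. A quick, hypothetical check of what the downloaded checkpoint actually expects (the path below is a placeholder):

import torch

# Inspect how many label channels the SPADE layers in the checkpoint expect.
# "coco_checkpoint.pt" is a placeholder for the downloaded file.
state_dict = torch.load("coco_checkpoint.pt", map_location="cpu")
weight = state_dict["middle_block.0.in_norm.mlp_shared.0.weight"]
print(weight.shape[1])  # 151 according to the error above

If this prints 151, the class-count setting used when launching image_sample.py presumably has to match that value rather than the 183-class configuration the script is currently building.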

It seems there's an error with the requirements.txt

How can I solve the following conflict?
"The conflict is caused by:
The user requested numpy==1.24.4
contourpy 1.1.0 depends on numpy>=1.16
matplotlib 3.7.2 depends on numpy>=1.20
mkl-fft 1.3.6 depends on numpy<1.25.0 and >=1.24.3
mkl-random 1.2.2 depends on numpy<1.23.0 and >=1.22.3"
Maybe this is due to the Python version I'm using (3.8)? Could you tell me which Python version the requirements.txt was written for?
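
For reference, the conflict above is between the pinned numpy==1.24.4 and mkl-random 1.2.2, which requires numpy<1.23.0, so no numpy release can satisfy both pins regardless of the Python version. A small sketch (standard library only, works on Python 3.8) to print which installed packages constrain numpy; the package names are taken from the pip message and may be spelled slightly differently on a given system:

from importlib.metadata import PackageNotFoundError, requires, version

# List the numpy constraints declared by the packages named in the pip error.
for pkg in ("contourpy", "matplotlib", "mkl-fft", "mkl-random"):
    try:
        pins = [req for req in (requires(pkg) or []) if req.lower().startswith("numpy")]
        print(pkg, version(pkg), pins)
    except PackageNotFoundError:
        print(pkg, "is not installed")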

ImportError: cannot import name 'logger' from 'guided_diffusion'

Hi, I’m trying to run the guided_diffusion script, but I get this error:
ImportError: cannot import name 'logger' from 'guided_diffusion'
I have installed the guided_diffusion package with pip, and I’m using Python 3.8. How can I solve this problem? Thank you for your help.
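
One hypothetical cause: a guided_diffusion package installed via pip is shadowing the guided_diffusion directory inside this repository, and the copy being imported simply has no logger submodule. A quick diagnostic, run from the repository root:

import pkgutil
import guided_diffusion

# Show which guided_diffusion package Python actually picked up
# and which submodules that copy provides.
print(guided_diffusion.__file__)
print(sorted(m.name for m in pkgutil.iter_modules(guided_diffusion.__path__)))

If __file__ points into site-packages rather than into the repository, the pip-installed copy is winning; if logger is missing from the printed list either way, the module itself is absent from the package being imported.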

Error when calling wandb.init(project="sdm")

Calling wandb.init(project="sdm") fails with the following error:

Traceback (most recent call last):
File "D:\Anaconda\envs\pytorch\lib\site-packages\wandb\sdk\wandb_init.py", line 1173, in init
raise e
File "D:\Anaconda\envs\pytorch\lib\site-packages\wandb\sdk\wandb_init.py", line 1150, in init
wi.setup(kwargs)
File "D:\Anaconda\envs\pytorch\lib\site-packages\wandb\sdk\wandb_init.py", line 289, in setup
wandb_login._login(
File "D:\Anaconda\envs\pytorch\lib\site-packages\wandb\sdk\wandb_login.py", line 298, in _login
wlogin.prompt_api_key()
File "D:\Anaconda\envs\pytorch\lib\site-packages\wandb\sdk\wandb_login.py", line 228, in prompt_api_key
raise UsageError("api_key not configured (no-tty). call " + directive)
wandb.errors.UsageError: api_key not configured (no-tty). call wandb.login(key=[your_api_key])
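
The message itself points at the issue: wandb has no API key and cannot prompt for one in a non-interactive session. Two standard wandb options, sketched below (the key value is a placeholder):

import os

# Option 1: supply the API key before wandb.init() runs
# ("your_api_key" is a placeholder for a real key).
os.environ["WANDB_API_KEY"] = "your_api_key"

# Option 2: skip the wandb account entirely and log offline instead.
# os.environ["WANDB_MODE"] = "offline"

import wandb
wandb.init(project="sdm")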

ModuleNotFoundError for unet_edited in guided_diffusion/script_util.py

Environment:
Operating System: macOS
Steps to Reproduce:
Clone the GESCO-main repository.
Navigate to the repository directory.
Run the command python image_train.py --data_dir ./data --dataset_mode cityscapes... (other flags)
Expected Behavior:
The script should run without any issues.

Actual Behavior:
Received a ModuleNotFoundError for guided_diffusion.unet_edited.

Traceback (most recent call last):
File "/Users/wweng/Desktop/GESCO-main/image_train.py", line 11, in <module>
from guided_diffusion.script_util import (
File "/Users/wweng/Desktop/GESCO-main/guided_diffusion/script_util.py", line 7, in <module>
from .unet_edited import UNetModelSPADE, UNetModelEncOnly
ModuleNotFoundError: No module named 'guided_diffusion.unet_edited'

Additional Information:
Upon inspecting the guided_diffusion directory, the following modules were found:

__init__.py
fp16_util.py
gaussian_diffusion.py
image_datasets.py
losses.py
nn.py
resample.py
respace.py
script_util.py
train_util.py
unet.py
The module unet_edited is not present, causing the ModuleNotFoundError.
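
Since unet_edited.py is indeed absent from that listing, one quick check is whether the classes that script_util.py expects (UNetModelSPADE, UNetModelEncOnly) are defined in the unet.py that is present; if they are, the import could be pointed there, otherwise the file is simply missing from the release. A small sketch, run from the repository root:

import importlib

# Check which UNet variants the shipped unet module actually defines.
unet = importlib.import_module("guided_diffusion.unet")
print([name for name in dir(unet) if "UNet" in name])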

Encountered "TypeError: 'ABCMeta' object is not subscriptable" Error

Hello,
I encountered an issue while trying to use your library. When I run the code, I get the following error:
TypeError: 'ABCMeta' object is not subscriptable
I've searched for related issues but haven't found a solution yet. Could you provide some guidance or a fix?
Thank you for your assistance!
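
Without the line that triggers it this is only a guess, but this error typically comes from subscripting a class whose metaclass is ABCMeta, for example using collections.abc generics on Python 3.8, where subscripting those classes is not yet supported (it became legal in Python 3.9 with PEP 585). A sketch of the usual cause and the 3.8-compatible workaround:

from collections.abc import Sequence

# On Python 3.8 the commented line would raise:
#   TypeError: 'ABCMeta' object is not subscriptable
# Sequence[int]

# Workaround for 3.8: use the typing equivalents for annotations instead.
from typing import Sequence as SequenceHint

def total(values: SequenceHint[int]) -> int:
    return sum(values)

print(total([1, 2, 3]))  # 6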

Size mismatch when loading my own trained model with image_sample.py

Dear authors,
I hope this message finds you well. Following the dataset processing method you provided previously, I processed my data and trained a model with 'image_train.py'. However, when I attempted to test the trained model with 'image_sample.py', I encountered the error below. I followed the training settings outlined in your paper and GitHub repository, and testing 'image_sample.py' with the pre-trained model you provide works fine, yet the error persists. I would greatly appreciate it if you could shed some light on the possible reasons for this discrepancy.

Thank you in advance for your attention and support. I look forward to hearing from you soon.

error:
Traceback (most recent call last):
File "/newdata/xws/GESCO-main/image_sample.py", line 414, in <module>
main()
File "/newdata/xws/GESCO-main/image_sample.py", line 53, in main
model.load_state_dict(th.load(args.model_path, map_location='cuda:0'))
File "/root/anaconda3/envs/pytorch_3.9/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1671, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for UNetModel:
size mismatch for middle_block.0.in_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
size mismatch for middle_block.0.out_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
size mismatch for middle_block.2.in_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
size mismatch for middle_block.2.out_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
size mismatch for output_blocks.0.0.in_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
size mismatch for output_blocks.0.0.out_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
size mismatch for output_blocks.1.0.in_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
size mismatch for output_blocks.1.0.out_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
size mismatch for output_blocks.2.0.in_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
size mismatch for output_blocks.2.0.out_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
size mismatch for output_blocks.2.2.in_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
size mismatch for output_blocks.2.2.out_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
size mismatch for output_blocks.3.0.in_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
size mismatch for output_blocks.3.0.out_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
size mismatch for output_blocks.4.0.in_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
size mismatch for output_blocks.4.0.out_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
size mismatch for output_blocks.5.0.in_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
size mismatch for output_blocks.5.0.out_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
size mismatch for output_blocks.5.2.in_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
size mismatch for output_blocks.5.2.out_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
size mismatch for output_blocks.6.0.in_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
size mismatch for output_blocks.6.0.out_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
size mismatch for output_blocks.7.0.in_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
size mismatch for output_blocks.7.0.out_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
size mismatch for output_blocks.8.0.in_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
size mismatch for output_blocks.8.0.out_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
size mismatch for output_blocks.8.2.in_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
size mismatch for output_blocks.8.2.out_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
size mismatch for output_blocks.9.0.in_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
size mismatch for output_blocks.9.0.out_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
size mismatch for output_blocks.10.0.in_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
size mismatch for output_blocks.10.0.out_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
size mismatch for output_blocks.11.0.in_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
size mismatch for output_blocks.11.0.out_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
size mismatch for output_blocks.11.1.in_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
size mismatch for output_blocks.11.1.out_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
size mismatch for output_blocks.12.0.in_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
size mismatch for output_blocks.12.0.out_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
size mismatch for output_blocks.13.0.in_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
size mismatch for output_blocks.13.0.out_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
size mismatch for output_blocks.14.0.in_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
size mismatch for output_blocks.14.0.out_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
size mismatch for output_blocks.14.1.in_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
size mismatch for output_blocks.14.1.out_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
size mismatch for output_blocks.15.0.in_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
size mismatch for output_blocks.15.0.out_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
size mismatch for output_blocks.16.0.in_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
size mismatch for output_blocks.16.0.out_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
size mismatch for output_blocks.17.0.in_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
size mismatch for output_blocks.17.0.out_norm.mlp_shared.0.weight: copying a param with shape torch.Size([128, 2, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 36, 3, 3]).
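
The pattern of the mismatch suggests the checkpoint written by image_train.py contains SPADE layers with only 2 label channels, while image_sample.py is building a model that expects 36, i.e. the label/class configuration used at training time differs from the one used at sampling time. A small hypothetical check of what the trained checkpoint actually contains (the path is a placeholder):

import torch

# How many label channels did training bake into the SPADE layers?
ckpt = torch.load("path/to/my_trained_model.pt", map_location="cpu")
print(ckpt["middle_block.0.in_norm.mlp_shared.0.weight"].shape[1])  # 2 according to the error

If it prints 2, the label maps seen during training carried only 2 channels, which points at the dataset preprocessing or the class-count flags passed to image_train.py rather than at image_sample.py itself.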

TypeError: unsupported operand type(s) for %: 'NoneType' and 'int'

Traceback (most recent call last):
File "F:\pycharm\p1\GESCO_main\image_train.py", line 113, in <module>
main()
File "F:\pycharm\p1\GESCO_main\image_train.py", line 30, in main
model, diffusion = create_model_and_diffusion(
File "F:\pycharm\p1\GESCO_main\guided_diffusion\script_util.py", line 108, in create_model_and_diffusion
model = create_model(
File "F:\pycharm\p1\GESCO_main\guided_diffusion\script_util.py", line 198, in create_model
return UNetModel(
File "F:\pycharm\p1\GESCO_main\guided_diffusion\unet.py", line 709, in __init__
SDMResBlock(
File "F:\pycharm\p1\GESCO_main\guided_diffusion\unet.py", line 348, in __init__
self.in_norm = SPADEGroupNorm(channels, c_channels)
File "F:\pycharm\p1\GESCO_main\guided_diffusion\unet.py", line 165, in __init__
nn.Conv2d(label_nc, nhidden, kernel_size=(3,), padding=1),
File "F:\anoconda\envs\pytorch\lib\site-packages\torch\nn\modules\conv.py", line 450, in __init__
super().__init__(
File "F:\anoconda\envs\pytorch\lib\site-packages\torch\nn\modules\conv.py", line 89, in __init__
if in_channels % groups != 0:
TypeError: unsupported operand type(s) for %: 'NoneType' and 'int'
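
The traceback stops inside nn.Conv2d: in_channels there is the label_nc handed to SPADEGroupNorm, and the % check fails because that value is None, i.e. the number of semantic label channels was never set before the model was built. A minimal reproduction outside the repository:

import torch.nn as nn

# Reproduces the failure: Conv2d cannot take None as in_channels.
try:
    nn.Conv2d(None, 128, kernel_size=3, padding=1)
except TypeError as exc:
    print(exc)  # unsupported operand type(s) for %: 'NoneType' and 'int'

So the thing to check is that the class-count option for the chosen --dataset_mode is actually being passed to image_train.py, since that is presumably what ends up in label_nc.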

`dataset_processing.py` file

Dear authors,
I hope this message finds you well. I have been using your source code 'image_train.py' to train a model on my own, but I am encountering an issue where the model file I trained does not align with the number of classes, making it impossible to run testing with the provided script 'image_sample.py'.

In light of this, I wanted to ask whether you could provide a dataset_processing.py file specifically designed for image and label processing for the Cityscapes and COCO datasets. With this file, I could ensure consistency in the dataset during both the training and testing phases and accurately evaluate the model's performance.

Thank you very much for your time and assistance. I greatly appreciate any advice or guidance you can provide.
