
gcanet's People

Contributors

cddlyf


gcanet's Issues

issues about training

Hi! I used the training code you provided with the RESIDE ITS training dataset and the SOTS indoor testing dataset. I have trained for 140 epochs; the average loss is close to 12 and the PSNR is close to 28. Could you share more experimental details?

issue about starting training

When I try to run train.py, an error pops up that I do not know how to fix. Please help me:
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "C:\Users\User\anaconda3\envs\dehaze\lib\multiprocessing\spawn.py", line 106, in spawn_main
    exitcode = _main(fd)
  File "C:\Users\User\anaconda3\envs\dehaze\lib\multiprocessing\spawn.py", line 115, in _main
    prepare(preparation_data)
  File "C:\Users\User\anaconda3\envs\dehaze\lib\multiprocessing\spawn.py", line 226, in prepare
    _fixup_main_from_path(data['init_main_from_path'])
  File "C:\Users\User\anaconda3\envs\dehaze\lib\multiprocessing\spawn.py", line 278, in _fixup_main_from_path
    run_name="__mp_main__")
  File "C:\Users\User\anaconda3\envs\dehaze\lib\runpy.py", line 263, in run_path
    pkg_name=pkg_name, script_name=fname)
  File "C:\Users\User\anaconda3\envs\dehaze\lib\runpy.py", line 96, in _run_module_code
    mod_name, mod_spec, pkg_name, script_name)
  File "C:\Users\User\anaconda3\envs\dehaze\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "D:\project(GCANe)t-master(OK)\GCANet_train\train.py", line 142, in <module>
    for iter, data in enumerate(train_dataloader):
  File "C:\Users\User\anaconda3\envs\dehaze\lib\site-packages\torch\utils\data\dataloader.py", line 279, in __iter__
    return _MultiProcessingDataLoaderIter(self)
  File "C:\Users\User\anaconda3\envs\dehaze\lib\site-packages\torch\utils\data\dataloader.py", line 719, in __init__
    w.start()
  File "C:\Users\User\anaconda3\envs\dehaze\lib\multiprocessing\process.py", line 105, in start
    self._popen = self._Popen(self)
  File "C:\Users\User\anaconda3\envs\dehaze\lib\multiprocessing\context.py", line 212, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "C:\Users\User\anaconda3\envs\dehaze\lib\multiprocessing\context.py", line 313, in _Popen
    return Popen(process_obj)
  File "C:\Users\User\anaconda3\envs\dehaze\lib\multiprocessing\popen_spawn_win32.py", line 34, in __init__
    prep_data = spawn.get_preparation_data(process_obj._name)
  File "C:\Users\User\anaconda3\envs\dehaze\lib\multiprocessing\spawn.py", line 144, in get_preparation_data
    _check_not_importing_main()
  File "C:\Users\User\anaconda3\envs\dehaze\lib\multiprocessing\spawn.py", line 137, in _check_not_importing_main
    is not going to be frozen to produce an executable.''')
RuntimeError:
        An attempt has been made to start a new process before the
        current process has finished its bootstrapping phase.

        This probably means that you are not using fork to start your
        child processes and you have forgotten to use the proper idiom
        in the main module:

            if __name__ == '__main__':
                freeze_support()
                ...

        The "freeze_support()" line can be omitted if the program
        is not going to be frozen to produce an executable.

Traceback (most recent call last):
  File "train.py", line 142, in <module>
    for iter, data in enumerate(train_dataloader):
  File "C:\Users\User\anaconda3\envs\dehaze\lib\site-packages\torch\utils\data\dataloader.py", line 279, in __iter__
    return _MultiProcessingDataLoaderIter(self)
  File "C:\Users\User\anaconda3\envs\dehaze\lib\site-packages\torch\utils\data\dataloader.py", line 719, in __init__
    w.start()
  File "C:\Users\User\anaconda3\envs\dehaze\lib\multiprocessing\process.py", line 105, in start
    self._popen = self._Popen(self)
  File "C:\Users\User\anaconda3\envs\dehaze\lib\multiprocessing\context.py", line 212, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "C:\Users\User\anaconda3\envs\dehaze\lib\multiprocessing\context.py", line 313, in _Popen
    return Popen(process_obj)
  File "C:\Users\User\anaconda3\envs\dehaze\lib\multiprocessing\popen_spawn_win32.py", line 66, in __init__
    reduction.dump(process_obj, to_child)
  File "C:\Users\User\anaconda3\envs\dehaze\lib\multiprocessing\reduction.py", line 59, in dump
    ForkingPickler(file, protocol).dump(obj)
BrokenPipeError: [Errno 32] Broken pipe
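This is the standard Windows spawn-mode pitfall: with `num_workers > 0`, the DataLoader launches worker processes, each of which re-imports train.py, and the unguarded top-level training loop runs again in every child. Wrapping everything that starts workers in an `if __name__ == '__main__':` guard fixes it. A minimal self-contained sketch of the idiom (plain multiprocessing rather than the actual train.py):

```python
import multiprocessing as mp

def work(x):
    # Stand-in for per-worker computation.
    return x * x

def main():
    # Anything that launches child processes (a Pool here, a DataLoader
    # with num_workers > 0 in train.py) must live behind the guard below,
    # because Windows "spawn" re-imports this module in every child.
    with mp.Pool(2) as pool:
        print(pool.map(work, range(4)))  # [0, 1, 4, 9]

if __name__ == '__main__':
    main()
```

In train.py this means moving the code at line 142 (the `for iter, data in enumerate(train_dataloader):` loop and everything around it) inside such a guard or into a `main()` function; alternatively, setting `num_workers=0` avoids spawning workers at the cost of slower data loading.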

@yxxxxxxxx @zrhyst23 The original code was written for a cloud machine. I have rewritten it for a local machine, but I don't have time to test it right now. If either of you has time to help, please email me and I will send you the training code so you can try it first.

Great, thank you very much! I can give it a try; you can send it to [email protected].
Could you please send me the code as well? My email is [email protected]. Many thanks!

Originally posted by @zhoumo1121 in #7 (comment)

Clarification on the Dataset

Hi,
I was going through the paper and the test outputs.
I am very much interested in the light model and its short inference time.
I would like to do more experiments with this solution.

While going through the paper, I found that the dataset used is RESIDE.
I followed the RESIDE link to download the dataset.

Could you please shed some light on which dataset I should use?

  1. Which of the ITS/OTS/SOTS datasets was used to train the pretrained model?
  2. Was the pretrained model trained on indoor and outdoor images in equal proportion?

Regards,
Albin

Invoking a trained model

Hello, may I ask how to load my own trained model for testing after I finish training with the training code?
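In the meantime, the generic PyTorch recipe applies: save the trained `state_dict`, load it into a freshly constructed network, and switch to eval mode, the same way demo.py runs the released weights. A minimal sketch with a stand-in module; the only repo-specific parts to swap in are the `GCANet` constructor from this repo and your own checkpoint path:

```python
import os
import tempfile
import torch
import torch.nn as nn

# Stand-in for the real network; with this repo you would instead do
# `from GCANet import GCANet` and construct it as demo.py does.
net = nn.Conv2d(4, 3, kernel_size=3, padding=1)

# 1. After training, save only the parameters (the state_dict).
ckpt = os.path.join(tempfile.gettempdir(), 'my_model.pth')
torch.save(net.state_dict(), ckpt)

# 2. For testing, rebuild the same architecture and load the weights.
net2 = nn.Conv2d(4, 3, kernel_size=3, padding=1)
net2.load_state_dict(torch.load(ckpt, map_location='cpu'))
net2.eval()  # inference mode: freezes dropout/batch-norm behaviour

# 3. Run it the same way demo.py runs the released weights.
with torch.no_grad():
    x = torch.randn(1, 4, 64, 64)
    out = net2(x)
print(tuple(out.shape))  # (1, 3, 64, 64)
```

Point demo.py's weight-loading path at your own .pth file and the rest of its pre/post-processing can stay unchanged.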

issues about retraining

Hi, I tried to retrain GCANet using the same parameters and the same dataset (for the dehazing task) described in your paper, but I can't reproduce the results. There must be something wrong with my training script or data pre-processing.

So could you provide your training script? Thanks!

issues about the SOTS dataset

Hello, the size of the GT does not match the size of the hazy image. Should I resize the hazy image before testing, resize the dehazed image after testing, or resize both the GT and the hazy image?
I resized the dehazed image after testing on the SOTS indoor dataset (620×460 → 640×480), but I only get a PSNR of about 14. How can I solve this problem?
This is the PSNR MATLAB code I use:
https://paste.ubuntu.com/p/4s7mpv5MCG/
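For reference, a common way to handle this mismatch is to crop the 640×480 GT to its central 620×460 region rather than resizing, since interpolation changes the pixel values being compared and drags PSNR down. A minimal numpy sketch (not the author's evaluation code; whether a center crop matches the paper's protocol is an assumption):

```python
import numpy as np

def psnr(img, ref, peak=255.0):
    # PSNR in dB between two images of identical shape.
    mse = np.mean((img.astype(np.float64) - ref.astype(np.float64)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def center_crop(img, h, w):
    # Central h x w window, e.g. a 640x480 GT (shape (480, 640))
    # cropped to (460, 620) to match the hazy input.
    H, W = img.shape[:2]
    top, left = (H - h) // 2, (W - w) // 2
    return img[top:top + h, left:left + w]

# gt = center_crop(gt, 460, 620); score = psnr(dehazed, gt)
```

The same idea transfers directly to a MATLAB script: index the GT's central rows and columns before calling your PSNR function instead of calling imresize.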

wacv_gcanet_dehaze.pth

Hello author, the weights you provide in GCANet/models/wacv_gcanet_dehaze.pth achieve a good dehazing effect when running demo.py. But when I use these weights to test on the RESIDE standard (SOTS) indoor dataset, why do I get low PSNR and SSIM values?

About the dataset arguments

(screenshot of the dataset arguments)
I am a bit confused about what to pass for these arguments. The outdoor_beta archive I downloaded contains three folders: haze, depth, and clear, so I am not sure which one to pass in. Could you let me know? Many thanks for taking the time to reply.

Something about the test set

Hi author, does your test dataset use the RESIDE standard dataset (SOTS indoor dataset)? I hope to get your answer as soon as possible, thank you.

About Training Process

Hello! I want to train on my own dataset, so I wonder if you could provide your training code. Thank you very much!

issues about the PSNR of the example images

Hello, I used your code to test picture 0099 in the examples folder, but I do not get a PSNR of 30; I only get 24 (when testing I resize the dehazed image to the GT image's size). I used this MATLAB code: https://paste.ubuntu.com/p/QDKdKGXxy8/ . Could you share your PSNR code? Thanks.

something about the training set

I tested RESIDE with the pre-trained model you provided and found that results on outdoor images are not as good as on indoor images, so I want to know whether the training set you used contains outdoor images. I hope to get your answer as soon as possible, thank you.

Dataset

Hello, could you share the dataset needed for training? Many thanks!

test dataset

Hi, thank you very much for your code; I am learning from it. However, this error was reported when running train.py.

(screenshot of the size-mismatch error)

It says the input size of the test data does not match the size of the label. I followed the RESIDE-Standard SOTS indoor test dataset mentioned in your paper.

(screenshot of the dataset images)

I found that the hazy images in this SOTS indoor dataset are 620×460, but the GT images are 640×480. Have you encountered this problem? How did you solve it?
