
GridDehazeNet

This repo contains the official training and testing codes for our paper:

GridDehazeNet: Attention-Based Multi-Scale Network for Image Dehazing

Xiaohong Liu*, Yongrui Ma*, Zhihao Shi, Jun Chen

* Equal contribution

Published at the 2019 IEEE/CVF International Conference on Computer Vision (ICCV)

[Paper]


Prerequisites

  • Python >= 3.6
  • PyTorch >= 1.0
  • torchvision >= 0.2.2
  • Pillow >= 5.1.0
  • NumPy >= 1.14.3
  • SciPy >= 1.1.0
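
One way to install the Python dependencies (assuming pip is available; the exact torch build depends on your CUDA setup):

$ pip3 install "torch>=1.0" "torchvision>=0.2.2" "Pillow>=5.1.0" "numpy>=1.14.3" "scipy>=1.1.0"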

Introduction

  • train.py and test.py are the entry points for training and testing GridDehazeNet.
  • train_data.py and val_data.py are used to load the training and validation/testing datasets.
  • model.py defines the model of GridDehazeNet, and residual_dense_block.py builds the RDB block.
  • perceptual.py defines the network for perceptual loss.
  • utils.py contains all corresponding utilities.
  • indoor_haze_best_3_6 and outdoor_haze_best_3_6 are the trained weights for the indoor and outdoor categories of SOTS from RESIDE, where 3 and 6 stand for the number of network rows and columns (please see our paper for details).
  • ./trainning_log/indoor_log.txt and ./trainning_log/outdoor_log.txt record the training logs.
  • The dehazed results of the test images are saved in ./indoor_results/ or ./outdoor_results/ according to the image category.
  • The ./data/ folder stores the data for training and testing.

Quick Start

1. Testing

Clone this repo in an environment that satisfies the prerequisites:

$ git clone https://github.com/proteus1991/GridDehazeNet.git
$ cd GridDehazeNet

Run test.py with the default hyper-parameter settings:

$ python3 test.py

If everything goes well, you will see the following messages in your shell:

--- Hyper-parameters for testing ---
val_batch_size: 1
network_height: 3
network_width: 6
num_dense_layer: 4
growth_rate: 16
lambda_loss: 0.04
category: indoor
--- Testing starts! ---
val_psnr: 32.16, val_ssim: 0.9836
validation time is 113.5568

These are our test results on the SOTS indoor dataset. For the SOTS outdoor dataset, run

$ python3 test.py -category outdoor

If you want to change the default settings (e.g., increasing val_batch_size if you have multiple GPUs), simply run

$ python3 test.py -val_batch_size 2

Any other hyper-parameter can be modified in the same way. For more details about the meaning of each hyper-parameter, please run

$ python3 test.py -h

2. Training

To retrain or fine-tune GridDehazeNet, first download the ITS (indoor) and OTS (outdoor) training datasets from RESIDE. Then copy the hazy and clear folders from the downloaded ITS and OTS into ./data/train/indoor/ and ./data/train/outdoor/, respectively. We provide the indoor and outdoor training lists in trainlist.txt for reproducibility. We also found that some hazy images in the training set are quite similar to those in the testing set (they are generated from the same ground-truth images, only with different haze parameters). For fairness, we carefully removed all of them from the training set and listed the rest in trainlist.txt.

If you want to use your own training dataset, please follow the same folder structure in ./data/train/ (sketched below). More details can be found in train_data.py.
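
Based on the description above, the expected layout is roughly as follows (a sketch; check train_data.py for the exact file naming):

./data/train/indoor/          # likewise for ./data/train/outdoor/
    hazy/                     # hazy training images copied from ITS/OTS
    clear/                    # corresponding ground-truth images
    trainlist.txt             # filtered list of training image names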

After putting the training dataset into the correct path, train GridDehazeNet by simply running train.py with the default settings:
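
$ python3 train.py

Similar to the testing step, if no error is raised, you will see the following messages in your shell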

--- Hyper-parameters for training ---
learning_rate: 0.001
crop_size: [240, 240]
train_batch_size: 18
val_batch_size: 1
network_height: 3
network_width: 6
num_dense_layer: 4
growth_rate: 16
lambda_loss: 0.04
category: indoor
--- weight loaded ---
Total_params: 958051
old_val_psnr: 32.16, old_val_ssim: 0.9836
Learning rate sets to 0.001.
Epoch: 0, Iteration: 0
Epoch: 0, Iteration: 100
...

Follow the instructions in the testing section above to modify the default settings.

Cite

If you use any part of this code, please kindly cite

@inproceedings{liuICCV2019GridDehazeNet,
    title={GridDehazeNet: Attention-Based Multi-Scale Network for Image Dehazing},
    author={Liu, Xiaohong and Ma, Yongrui and Shi, Zhihao and Chen, Jun},
    booktitle={ICCV},
    year={2019}
}


griddehazenet's Issues

RuntimeError: storage has wrong size: expected 1152921504606846976 got 16

Hello, I am using your testing code but I get the following error:

Traceback (most recent call last):
File "test.py", line 71, in <module>
net.load_state_dict(torch.load('{}haze_best{}_{}'.format(category, network_height, network_width)))
File "/home/.local/lib/python3.7/site-packages/torch/serialization.py", line 387, in load
return _load(f, map_location, pickle_module, **pickle_load_args)
File "/home/.local/lib/python3.7/site-packages/torch/serialization.py", line 581, in _load
deserialized_objects[key]._set_from_file(f, offset, f_should_read_directly)
RuntimeError: storage has wrong size: expected 1152921504606846976 got 16

My PyTorch version is 1.1.0; have you encountered the same error? Thank you!

About the RESIDE outdoor dataset

Hello! I would like to know how you selected the RESIDE data in your dataset, or what its source is. I found that the number of images in the outdoor training set differs across related papers: some mention 300,000, some 290,000, and the paper that originally proposed RESIDE mentions more than 70,000. I hope the author can clarify this. Thank you very much!

How can I test my hazy image?

Hi,
I have some hazy images that I want to dehaze with your method. What should I do?
I am looking forward to your reply. Thanks a lot.
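
A minimal sketch of one possible way, assuming the class name and constructor arguments in model.py and that the released weights were saved from an nn.DataParallel-wrapped model (all of this should be checked against test.py before use):

import torch
import torch.nn as nn
from torchvision import transforms
from PIL import Image
from model import GridDehazeNet  # assumed class name in model.py

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
# height/width/num_dense_layer/growth_rate mirror the default hyper-parameters shown above
net = nn.DataParallel(GridDehazeNet(height=3, width=6, num_dense_layer=4, growth_rate=16)).to(device)
net.load_state_dict(torch.load('indoor_haze_best_3_6', map_location=device))
net.eval()

# The grid downsamples twice, so H and W should likely be multiples of 4.
hazy = Image.open('my_hazy_image.png').convert('RGB')  # hypothetical input file
x = transforms.ToTensor()(hazy).unsqueeze(0).to(device)  # note: val_data.py may apply extra normalization
with torch.no_grad():
    dehazed = net(x).clamp(0, 1)
transforms.ToPILImage()(dehazed.squeeze(0).cpu()).save('dehazed.png')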

some issues about the code

Hello, I have some issues about the code.

1. How does the parameter "output_size" work when the size of the feature map is odd? By zero padding or some other way?
2. If zero padding is not added during training but needs to be added during testing, will there be performance degradation?

Thanks.

the effect is very different

Hello, I trained the network on my server following your training method, but the results are very different: PSNR = 17.74, SSIM = 0.7954. What could be the reason for this?

How to use my own image data?

I was impressed to see your great code and tremendous results.
My question is: how do I use my own image data with this model?
I couldn't use it when the data were stored in ./data/train/indoor/ and ./data/train/outdoor/.

Perceptual loss network

Thank you for sharing. May I replace the VGG16 model in the perceptual loss network with a ResNet?

Are you sure that the batchnorm distorts features in layers?

Recently, I have been wondering whether batch normalization truly skews feature information.

In your paper, you commented "Following [19, 15], we do not use batch normalization". In papers [19] and [15], however, there is no such comment, or I failed to find it.

In [19], the authors commented: "We remove the batch normalization layers from our network as Nah et al. [19] presented in their image deblurring work". (Note that Nah et al. [19] there corresponds to [15] in your paper.)

So finally, I looked into paper [15], i.e., Deep Multi-scale Convolutional Neural Network for Dynamic Scene Deblurring. In this paper, the authors said: "We did not use batch normalization layers since we trained the model with a mini-batch of size 2, which is smaller than usual for batch normalization."

That is, they skipped batch normalization because their batch size was much smaller than is typical for it.

I think your view and theirs on why batchnorm is improper for the model are slightly different. Your default batch size is 24, which is large enough for batchnorm to be applicable.

Why didn't you use it? And did you perform any experiment with batchnorm?

The files described in train/outdoor/trainlist.txt cannot be located in RESIDE_OTS_BETA?

Thank you for your wonderful work!
However, I found that the data files listed in train/outdoor/trainlist.txt cannot be located in RESIDE_OTS_BETA. For example, the first item in trainlist.txt is "0005_0.8_0.04.jpg", while the item with the smallest index in RESIDE_OTS_BETA is "0025_0.8_0.04.jpg". The others don't match either. Is it possible that you accidentally uploaded the wrong text file?
I'm looking forward to your reply!

Poor results on the OTS dataset

Hello, I trained on the OTS dataset following the steps in the paper, but the results deviate considerably from those reported. Could you help me resolve this? Below are the hyper-parameter settings and training results of my two runs:
learning_rate: 0.001
crop_size: [240, 240]
train_batch_size: 18
val_batch_size: 1
network_height: 3
network_width: 6
num_dense_layer: 4
growth_rate: 16
lambda_loss: 0.04
category: outdoor

Date: 2024-03-03 15:36:53s, Time_Cost: 1117s, Epoch: [1/20], Train_PSNR: 26.60, Val_PSNR: 25.39, Val_SSIM: 0.9442
Date: 2024-03-03 15:55:27s, Time_Cost: 1114s, Epoch: [2/20], Train_PSNR: 27.94, Val_PSNR: 25.84, Val_SSIM: 0.9521
Date: 2024-03-03 16:13:53s, Time_Cost: 1106s, Epoch: [3/20], Train_PSNR: 28.63, Val_PSNR: 26.28, Val_SSIM: 0.9470
Date: 2024-03-03 16:32:27s, Time_Cost: 1113s, Epoch: [4/20], Train_PSNR: 28.81, Val_PSNR: 24.46, Val_SSIM: 0.9313
Date: 2024-03-03 16:50:59s, Time_Cost: 1112s, Epoch: [5/20], Train_PSNR: 29.18, Val_PSNR: 24.72, Val_SSIM: 0.9414
Date: 2024-03-03 17:09:37s, Time_Cost: 1119s, Epoch: [6/20], Train_PSNR: 29.26, Val_PSNR: 26.37, Val_SSIM: 0.9504
Date: 2024-03-03 17:28:18s, Time_Cost: 1120s, Epoch: [7/20], Train_PSNR: 29.48, Val_PSNR: 27.17, Val_SSIM: 0.9569
Date: 2024-03-03 17:47:10s, Time_Cost: 1132s, Epoch: [8/20], Train_PSNR: 29.53, Val_PSNR: 25.83, Val_SSIM: 0.9480
Date: 2024-03-03 18:06:02s, Time_Cost: 1132s, Epoch: [9/20], Train_PSNR: 29.64, Val_PSNR: 26.42, Val_SSIM: 0.9520
Date: 2024-03-03 18:24:54s, Time_Cost: 1131s, Epoch: [10/20], Train_PSNR: 29.68, Val_PSNR: 26.63, Val_SSIM: 0.9548
Date: 2024-03-03 18:43:46s, Time_Cost: 1133s, Epoch: [11/20], Train_PSNR: 29.75, Val_PSNR: 26.73, Val_SSIM: 0.9528
Date: 2024-03-03 19:02:38s, Time_Cost: 1132s, Epoch: [12/20], Train_PSNR: 29.78, Val_PSNR: 26.67, Val_SSIM: 0.9537
Date: 2024-03-03 19:21:29s, Time_Cost: 1131s, Epoch: [13/20], Train_PSNR: 29.82, Val_PSNR: 26.42, Val_SSIM: 0.9519
Date: 2024-03-03 19:40:19s, Time_Cost: 1130s, Epoch: [14/20], Train_PSNR: 29.82, Val_PSNR: 26.25, Val_SSIM: 0.9504
Date: 2024-03-03 19:59:10s, Time_Cost: 1131s, Epoch: [15/20], Train_PSNR: 29.84, Val_PSNR: 26.49, Val_SSIM: 0.9520
Date: 2024-03-03 20:18:01s, Time_Cost: 1131s, Epoch: [16/20], Train_PSNR: 29.85, Val_PSNR: 26.55, Val_SSIM: 0.9522
Date: 2024-03-03 20:36:51s, Time_Cost: 1130s, Epoch: [17/20], Train_PSNR: 29.87, Val_PSNR: 26.46, Val_SSIM: 0.9508
Date: 2024-03-03 20:55:40s, Time_Cost: 1129s, Epoch: [18/20], Train_PSNR: 29.86, Val_PSNR: 26.45, Val_SSIM: 0.9505
Date: 2024-03-03 21:14:31s, Time_Cost: 1130s, Epoch: [19/20], Train_PSNR: 29.87, Val_PSNR: 26.50, Val_SSIM: 0.9522
Date: 2024-03-03 21:33:14s, Time_Cost: 1123s, Epoch: [20/20], Train_PSNR: 29.87, Val_PSNR: 26.47, Val_SSIM: 0.9515
——————————————————————————————————————
learning_rate: 0.001
crop_size: [240, 240]
train_batch_size: 24
val_batch_size: 1
network_height: 3
network_width: 6
num_dense_layer: 4
growth_rate: 16
lambda_loss: 0.04
category: outdoor

Date: 2024-03-26 01:35:30s, Time_Cost: 1131s, Epoch: [1/30], Train_PSNR: 28.54, Val_PSNR: 26.42, Val_SSIM: 0.9527
Date: 2024-03-26 01:54:25s, Time_Cost: 1134s, Epoch: [2/30], Train_PSNR: 28.74, Val_PSNR: 25.35, Val_SSIM: 0.9459
Date: 2024-03-26 02:13:19s, Time_Cost: 1134s, Epoch: [3/30], Train_PSNR: 29.08, Val_PSNR: 25.98, Val_SSIM: 0.9465
Date: 2024-03-26 02:32:13s, Time_Cost: 1134s, Epoch: [4/30], Train_PSNR: 29.15, Val_PSNR: 26.25, Val_SSIM: 0.9530
Date: 2024-03-26 02:51:02s, Time_Cost: 1129s, Epoch: [5/30], Train_PSNR: 29.34, Val_PSNR: 26.18, Val_SSIM: 0.9505
Date: 2024-03-26 03:10:00s, Time_Cost: 1138s, Epoch: [6/30], Train_PSNR: 29.40, Val_PSNR: 25.71, Val_SSIM: 0.9478
Date: 2024-03-26 03:28:56s, Time_Cost: 1135s, Epoch: [7/30], Train_PSNR: 29.51, Val_PSNR: 25.80, Val_SSIM: 0.9452
Date: 2024-03-26 03:47:52s, Time_Cost: 1136s, Epoch: [8/30], Train_PSNR: 29.55, Val_PSNR: 25.72, Val_SSIM: 0.9495
Date: 2024-03-26 04:06:46s, Time_Cost: 1134s, Epoch: [9/30], Train_PSNR: 29.60, Val_PSNR: 26.04, Val_SSIM: 0.9497
Date: 2024-03-26 04:25:37s, Time_Cost: 1131s, Epoch: [10/30], Train_PSNR: 29.63, Val_PSNR: 25.97, Val_SSIM: 0.9502
Date: 2024-03-26 04:44:31s, Time_Cost: 1134s, Epoch: [11/30], Train_PSNR: 29.68, Val_PSNR: 25.94, Val_SSIM: 0.9474
Date: 2024-03-26 05:03:23s, Time_Cost: 1133s, Epoch: [12/30], Train_PSNR: 29.68, Val_PSNR: 25.63, Val_SSIM: 0.9461
Date: 2024-03-26 05:22:16s, Time_Cost: 1133s, Epoch: [13/30], Train_PSNR: 29.71, Val_PSNR: 26.05, Val_SSIM: 0.9473
Date: 2024-03-26 05:41:08s, Time_Cost: 1132s, Epoch: [14/30], Train_PSNR: 29.72, Val_PSNR: 26.05, Val_SSIM: 0.9490
Date: 2024-03-26 06:00:05s, Time_Cost: 1137s, Epoch: [15/30], Train_PSNR: 29.72, Val_PSNR: 25.90, Val_SSIM: 0.9481
Date: 2024-03-26 06:19:02s, Time_Cost: 1138s, Epoch: [16/30], Train_PSNR: 29.73, Val_PSNR: 26.08, Val_SSIM: 0.9492
Date: 2024-03-26 06:37:51s, Time_Cost: 1129s, Epoch: [17/30], Train_PSNR: 29.72, Val_PSNR: 25.88, Val_SSIM: 0.9470
Date: 2024-03-26 06:56:47s, Time_Cost: 1136s, Epoch: [18/30], Train_PSNR: 29.73, Val_PSNR: 26.03, Val_SSIM: 0.9488
Date: 2024-03-26 07:15:45s, Time_Cost: 1138s, Epoch: [19/30], Train_PSNR: 29.74, Val_PSNR: 25.96, Val_SSIM: 0.9482
Date: 2024-03-26 07:34:36s, Time_Cost: 1132s, Epoch: [20/30], Train_PSNR: 29.74, Val_PSNR: 25.93, Val_SSIM: 0.9482
Date: 2024-03-26 07:53:24s, Time_Cost: 1128s, Epoch: [21/30], Train_PSNR: 29.74, Val_PSNR: 26.02, Val_SSIM: 0.9486
Date: 2024-03-26 08:12:09s, Time_Cost: 1125s, Epoch: [22/30], Train_PSNR: 29.73, Val_PSNR: 25.96, Val_SSIM: 0.9481
Date: 2024-03-26 08:31:00s, Time_Cost: 1131s, Epoch: [23/30], Train_PSNR: 29.75, Val_PSNR: 26.04, Val_SSIM: 0.9487
Date: 2024-03-26 08:49:55s, Time_Cost: 1134s, Epoch: [24/30], Train_PSNR: 29.74, Val_PSNR: 26.02, Val_SSIM: 0.9487
Date: 2024-03-26 09:08:50s, Time_Cost: 1136s, Epoch: [25/30], Train_PSNR: 29.74, Val_PSNR: 26.02, Val_SSIM: 0.9486

it's not real !!!!!!

I used the images from the folder you pushed and tested them with the TensorFlow PSNR and SSIM APIs; the PSNR is only 24 at most.

18TIP

Has the code for your 2018 TIP paper on multi-frame super-resolution been released? I would like to refer to it and cite your paper.

Normalizing differently?

Hi, thanks for the good work and for releasing the code. I found that in the data loaders the hazy and clean images seem to be normalized differently. I am wondering if there is a particular reason for doing that.

Some errors in trainlist.txt

Hi, I downloaded the ITS and OTS data following the instructions and placed the corresponding folders in the specified directories, but when loading the image data it reports that the image files in trainlist.txt cannot be found. On inspection, the ITS and OTS data seem to have been updated, so the file names no longer correspond. Given that the datasets have been updated, must we still train with the filtered data listed in trainlist.txt? Thanks.

Outdoor dataset

Sorry to ask, but many images in the outdoor dataset are invalid. How did you solve this problem? By just removing those images from the training list? Thank you.

some issues about upsampling and downsampling

Hi, nice to meet you.

I have a question: why did you construct the up & down sampling blocks from two parts?
To my understanding, let in_channels = 10, kernel_size = 3, stride = 2.

Case 1 (yours):
self.conv1 = nn.Conv2d(in_channels, in_channels, kernel_size, stride=stride, padding=(kernel_size-1)//2)
self.conv2 = nn.Conv2d(in_channels, stride*in_channels, kernel_size, stride=1, padding=(kernel_size - 1) // 2)

The two parts of the downsampling could be fused into one, like this:
Case 2:

self.conv1 = nn.Conv2d(in_channels, stride*in_channels, kernel_size, stride= stride, padding=(kernel_size -1) // 2 )

Ignoring biases, the number of weights in case 1 is 3 × 3 × 10 × 10 + 3 × 3 × 10 × 20 = 2700,
but in case 2 it is only 3 × 3 × 10 × 20 = 1800 (the snippet below checks these counts).
So using the method of case 2 seems more reasonable.
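
A quick sanity check of these counts (a hypothetical snippet, not from the repo):

import torch.nn as nn

in_channels, kernel_size, stride = 10, 3, 2
pad = (kernel_size - 1) // 2

# Case 1: a downsampling conv followed by a channel-expanding conv
case1 = nn.Sequential(
    nn.Conv2d(in_channels, in_channels, kernel_size, stride=stride, padding=pad),
    nn.Conv2d(in_channels, stride * in_channels, kernel_size, stride=1, padding=pad),
)
# Case 2: a single fused conv
case2 = nn.Conv2d(in_channels, stride * in_channels, kernel_size, stride=stride, padding=pad)

count = lambda m: sum(p.numel() for p in m.parameters() if p.dim() > 1)  # weights only, no biases
print(count(case1), count(case2))  # prints: 2700 1800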

Please reply
thank you.

I have some questions when training with my own dataset

I'm sorry to bother you, but I want to use my own dataset for training and testing, and I keep getting errors during training. I'd appreciate it if you could help me solve this problem.
wit@wit:/media/wit/Data/zkh/GridDehazeNet-master$ python3 train.py
--- Hyper-parameters for training ---
learning_rate: 0.001
crop_size: [240, 240]
train_batch_size: 18
val_batch_size: 1
network_height: 3
network_width: 6
num_dense_layer: 4
growth_rate: 16
lambda_loss: 0.04
category: indoor
--- no weight loaded ---
Total_params: 958051
Traceback (most recent call last):
File "train.py", line 116, in
old_val_psnr, old_val_ssim = validation(net, val_data_loader, device, category)
File "/media/wit/Data/zkh/GridDehazeNet-master/utils.py", line 51, in validation
for batch_id, val_data in enumerate(val_data_loader):
File "/home/wit/.local/lib/python3.5/site-packages/torch/utils/data/dataloader.py", line 345, in next
data = self._next_data()
File "/home/wit/.local/lib/python3.5/site-packages/torch/utils/data/dataloader.py", line 856, in _next_data
return self._process_data(data)
File "/home/wit/.local/lib/python3.5/site-packages/torch/utils/data/dataloader.py", line 881, in _process_data
data.reraise()
File "/home/wit/.local/lib/python3.5/site-packages/torch/_utils.py", line 394, in reraise
raise self.exc_type(msg)
FileNotFoundError: Caught FileNotFoundError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/home/wit/.local/lib/python3.5/site-packages/torch/utils/data/_utils/worker.py", line 178, in _worker_loop
data = fetcher.fetch(index)
File "/home/wit/.local/lib/python3.5/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/wit/.local/lib/python3.5/site-packages/torch/utils/data/_utils/fetch.py", line 44, in
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/media/wit/Data/zkh/GridDehazeNet-master/val_data.py", line 44, in getitem
res = self.get_images(index)
File "/media/wit/Data/zkh/GridDehazeNet-master/val_data.py", line 33, in get_images
gt_img = Image.open(self.val_data_dir + 'clear/' + gt_name)
File "/home/wit/.local/lib/python3.5/site-packages/PIL/Image.py", line 2652, in open
fp = builtins.open(filename, "rb")
FileNotFoundError: [Errno 2] No such file or directory: './data/test/SOTS/indoor/clear/31.png'

about the loss

In many image restoration tasks, such as super-resolution and denoising, I find that after one or two epochs of training the loss stays at an approximately stable value. Is this normal?
