
CF-Net : Deep Coupled Feedback Network for Joint Exposure Fusion and Image Super-Resolution

  • This is the official repository of the paper "Deep Coupled Feedback Network for Joint Exposure Fusion and Image Super-Resolution", IEEE Transactions on Image Processing, 2021. [Paper Link][PDF Link]
  • We hosted a live stream on the Extreme Mart platform; the PowerPoint slides can be downloaded from [PPT Link].

Framework overview (figure)

1. Environment

  • Python >= 3.5
  • PyTorch >= 0.4.1 is recommended
  • opencv-python
  • pytorch-msssim
  • tqdm
  • Matlab
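
The Python dependencies above can be sanity-checked before running anything. A minimal import check; the module names are assumptions about the pip packages (opencv-python installs `cv2`, pytorch-msssim installs `pytorch_msssim`):

```python
# A minimal pre-flight check for the dependency list above.
import importlib.util

REQUIRED_MODULES = ["torch", "cv2", "pytorch_msssim", "tqdm"]

def missing_modules(names):
    """Return the subset of `names` that the importer cannot find."""
    return [n for n in names if importlib.util.find_spec(n) is None]

# Example: print anything that still needs `pip install`:
# print(missing_modules(REQUIRED_MODULES))
```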

2. Dataset

The training and testing data are from the [SICE dataset]; alternatively, you can download them from our [Google Drive Link].

3. Test

  1. Clone this repository:
    git clone https://github.com/ytZhang99/CF-Net.git
    
  2. Place the low-resolution over-exposed images and under-exposed images in dataset/test_data/lr_over and dataset/test_data/lr_under, respectively.
    dataset 
    └── test_data
        ├── lr_over
        └── lr_under
    
  3. Run one of the following commands for ×2 or ×4 super-resolution and exposure fusion:
    python main.py --test_only --scale 2 --model model_x2.pth
    python main.py --test_only --scale 4 --model model_x4.pth
    
  4. Finally, you can find the super-resolved and fused results in ./test_results.
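
Before running the test commands above, it can help to confirm that lr_over and lr_under contain matching files, since the network fuses each over-/under-exposed pair. A minimal sketch, assuming the dataloader pairs images by identical filename (an assumption, not confirmed by the code):

```python
# Sketch: report filenames present in only one of the two input folders,
# assuming pairs are matched by identical filename.
from pathlib import Path

def check_pairs(root="dataset/test_data"):
    """Return (only_in_lr_over, only_in_lr_under) as sorted filename lists."""
    over = {p.name for p in Path(root, "lr_over").glob("*")}
    under = {p.name for p in Path(root, "lr_under").glob("*")}
    return sorted(over - under), sorted(under - over)
```

If both returned lists are empty, every over-exposed image has an under-exposed counterpart.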

4. Training

Preparing training and validation data

  1. Place the HR_groundtruth, HR_over_exposed, and HR_under_exposed training images in the following directories, respectively. (Optional) Validation data can also be placed in dataset/val_data.
    dataset 
    ├── train_data
    |   ├── hr
    |   ├── hr_over
    |   └── hr_under
    └── val_data
        ├── gt
        ├── lr_over
        └── lr_under
    
  2. Open the Prepare_Data_HR_LR.m file and modify the following lines according to your training commands:
    Line 5 or 6 : scale = 2 or 4
    Line 9 : whether to use off-line data augmentation (default = True)
    [Line 12 <-> Line 17] or [Line 13 <-> Line 18] : produce [lr_over/lr_under] images from [hr_over/hr_under] images
    
  3. After the above operations, dataset/train_data should be as follows:
    dataset
    └── train_data 
        ├── hr
        ├── hr_over
        ├── hr_under
        ├── lr_over
        └── lr_under
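
The lr_over/lr_under folders hold downscaled copies of hr_over/hr_under, produced by the MATLAB script. As a language-agnostic illustration of ×scale downsampling, here is a pure-Python sketch on a single channel (H×W nested lists); it uses simple average pooling rather than the script's actual resampling kernel, so it is an illustration, not a drop-in replacement:

```python
# Illustration: downsample a single-channel image (nested lists) by an
# integer factor using average pooling over scale x scale blocks.
def downsample(img, scale=2):
    h, w = len(img), len(img[0])
    out = []
    for i in range(0, h - h % scale, scale):
        row = []
        for j in range(0, w - w % scale, scale):
            block = [img[i + di][j + dj]
                     for di in range(scale) for dj in range(scale)]
            row.append(sum(block) / (scale * scale))
        out.append(row)
    return out
```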
    

Training

  1. Place the attached files dataset.py and train.py in the same directory as main.py.
  2. Run one of the following commands to train the network at scale 2 or 4, matching your training data:
    python main.py --scale 2 --model my_model
    python main.py --scale 4 --model my_model
    
    If validation data is provided, add the -v flag to obtain the best model best_ep.pth:
    python main.py --scale 2 --model my_model -v
    python main.py --scale 4 --model my_model -v
    
  3. The trained models are saved in the directory ./model/.

5. Citation

If you find our work useful in your research or publication, please cite our work:

@article{deng2021deep,
  title={Deep Coupled Feedback Network for Joint Exposure Fusion and Image Super-Resolution},
  author={Deng, Xin and Zhang, Yutong and Xu, Mai and Gu, Shuhang and Duan, Yiping},
  journal={IEEE Transactions on Image Processing},
  year={2021}
}

6. Contact

If you have any questions about our work or code, please email [email protected].


cf-net's Issues

mistakes

Please check whether the model.py and option.py versions match the .pth model:
numerous state_dict names and dimensions do not match!

-------------------------------------------- Error message ------------------------------------------------
RuntimeError: Error(s) in loading state_dict for CFNet:
size mismatch for srb_1.upBlocks.0.0.weight: copying a param with shape torch.Size([64, 64, 8, 8]) from checkpoint, the shape in current model is torch.Size([64, 64, 6, 6]).
[... the same [64, 64, 8, 8]-vs-[64, 64, 6, 6] mismatch is reported for every upBlocks/downBlocks weight of srb_1, srb_2, cfb_over0/1/2, and cfb_under0/1/2, and for out_over, out_under, out_1, and out_2 ...]
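
The uniform 8×8-vs-6×6 kernel shapes suggest the checkpoint and the instantiated model were built for different --scale settings (a likely cause, not a confirmed one). A small helper can diff parameter shapes before calling load_state_dict; it takes plain name-to-shape mappings, which with PyTorch you could build as `{k: tuple(v.shape) for k, v in torch.load(path).items()}` and likewise from `model.state_dict()`:

```python
# Sketch: compare a checkpoint's parameter shapes against a model's,
# reporting missing, unexpected, and shape-mismatched parameter names.
def diff_state_dicts(ckpt_shapes, model_shapes):
    """Return (missing, unexpected, mismatched) sorted name lists."""
    missing = sorted(set(model_shapes) - set(ckpt_shapes))
    unexpected = sorted(set(ckpt_shapes) - set(model_shapes))
    mismatched = sorted(
        k for k in set(ckpt_shapes) & set(model_shapes)
        if ckpt_shapes[k] != model_shapes[k]
    )
    return missing, unexpected, mismatched
```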

quantitative performance

Hi, Zhang!
Excellent work! But I wonder which function was used to calculate PSNR and SSIM: the result obtained by the function in test.py does not seem to match the score in Table V (e.g. for 4x, the re-evaluated result is about 20.5 dB).
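
Score gaps like this often come from differing PSNR conventions (RGB vs. luminance channel, border cropping, data range); the repository's exact metric code is in test.py. For reference, a minimal PSNR on full RGB in pure Python (H×W×C nested lists, values in [0, max_val]):

```python
import math

def psnr(img_a, img_b, max_val=255.0):
    """PSNR in dB between two equally sized images (H x W x C nested lists)."""
    flat_a = [v for row in img_a for px in row for v in px]
    flat_b = [v for row in img_b for px in row for v in px]
    mse = sum((a - b) ** 2 for a, b in zip(flat_a, flat_b)) / len(flat_a)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)
```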

RuntimeError

RuntimeError: stack expects each tensor to be equal size, but got [3, 128, 128] at entry 0 and [3, 28, 128] at entry 1
I have processed the data as requested, so is there any other processing step needed?

File "D:\CF-Net\CF-Net-master\CF-Net-master\train.py", line 57, in train
for l_over, l_under, h_over, h_under, h in self.train_loader:
File "D:\Anaconda\envs\torch\lib\site-packages\torch\utils\data\dataloader.py", line 521, in __next__
data = self._next_data()
File "D:\Anaconda\envs\torch\lib\site-packages\torch\utils\data\dataloader.py", line 561, in _next_data
data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
File "D:\Anaconda\envs\torch\lib\site-packages\torch\utils\data\_utils\fetch.py", line 47, in fetch
return self.collate_fn(data)
File "D:\Anaconda\envs\torch\lib\site-packages\torch\utils\data\_utils\collate.py", line 84, in default_collate
return [default_collate(samples) for samples in transposed]
File "D:\Anaconda\envs\torch\lib\site-packages\torch\utils\data\_utils\collate.py", line 84, in <listcomp>
return [default_collate(samples) for samples in transposed]
File "D:\Anaconda\envs\torch\lib\site-packages\torch\utils\data\_utils\collate.py", line 56, in default_collate
return torch.stack(batch, 0, out=out)
RuntimeError: stack expects each tensor to be equal size, but got [3, 128, 128] at entry 0 and [3, 28, 128] at entry 1
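
This error means default_collate received crops of different sizes in one batch ([3, 128, 128] vs. [3, 28, 128]), which typically happens when a source image is smaller than the crop/patch size; that is a likely cause here, not a confirmed one. A pre-flight check over shape tuples (with PyTorch you could collect them from your dataset as `tuple(tensor.shape)` per sample) can locate the offending files:

```python
# Flag samples whose (C, H, W) shape differs from the expected one, so the
# offending images can be found before torch.stack fails inside the loader.
def find_bad_shapes(shapes, expected):
    """Return (index, shape) pairs that do not match `expected`."""
    expected = tuple(expected)
    return [(i, tuple(s)) for i, s in enumerate(shapes) if tuple(s) != expected]
```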

Supplied model does not match the code

Please check whether the model.py and option.py versions match the .pth model:
numerous state_dict names and dimensions do not match!

-------------------------------------------- Error message ------------------------------------------------
RuntimeError: Error(s) in loading state_dict for CFNet:
Missing key(s) in state_dict: "srb_1.compress_in.0.weight", "srb_1.compress_in.0.bias", "srb_1.compress_in.1.weight", ... [many more srb_1/srb_2 (upBlocks, downBlocks, uptranBlocks, downtranBlocks, compress_out), out_1/out_2, conv_out_1/conv_out_2, and block_over_*/block_under_* re_guide keys expected by the current model] ...
Unexpected key(s) in state_dict: "conv_in_over_1.0.weight", "conv_in_over_1.0.bias", "conv_in_over_1.1.weight", ... [many more conv_in_*, feat_in_*, out_over_*/out_under_*, conv_out_over_*/conv_out_under_*, and block_over_3/block_under_3 keys found only in the checkpoint; the pasted log is truncated mid-list] ...
"block_under_3.uptranBlocks.1.0.weight", "block_under_3.uptranBlocks.1.0.bias", "block_under_3.uptranBlocks.1.1.weight", "block_under_3.uptranBlocks.2.0.weight", "block_under_3.uptranBlocks.2.0.bias", "block_under_3.uptranBlocks.2.1.weight", "block_under_3.uptranBlocks.3.0.weight", "block_under_3.uptranBlocks.3.0.bias", "block_under_3.uptranBlocks.3.1.weight", "block_under_3.uptranBlocks.4.0.weight", "block_under_3.uptranBlocks.4.0.bias", "block_under_3.uptranBlocks.4.1.weight", "block_under_3.downtranBlocks.0.0.weight", "block_under_3.downtranBlocks.0.0.bias", "block_under_3.downtranBlocks.0.1.weight", "block_under_3.downtranBlocks.1.0.weight", "block_under_3.downtranBlocks.1.0.bias", "block_under_3.downtranBlocks.1.1.weight", "block_under_3.downtranBlocks.2.0.weight", "block_under_3.downtranBlocks.2.0.bias", "block_under_3.downtranBlocks.2.1.weight", "block_under_3.downtranBlocks.3.0.weight", "block_under_3.downtranBlocks.3.0.bias", "block_under_3.downtranBlocks.3.1.weight", "block_under_3.downtranBlocks.4.0.weight", "block_under_3.downtranBlocks.4.0.bias", "block_under_3.downtranBlocks.4.1.weight", "block_under_3.compress_out.0.weight", "block_under_3.compress_out.0.bias", "block_under_3.compress_out.1.weight", "out_under_3.0.weight", "out_under_3.0.bias", "out_under_3.1.weight", "conv_out_under_3.0.weight", "conv_out_under_3.0.bias", "conv_in_under_0.1.weight".
size mismatch for block_over_0.compress_in.0.weight: copying a param with shape torch.Size([64, 128, 1, 1]) from checkpoint, the shape in current model is torch.Size([64, 192, 1, 1]).
size mismatch for block_under_0.compress_in.0.weight: copying a param with shape torch.Size([64, 128, 1, 1]) from checkpoint, the shape in current model is torch.Size([64, 192, 1, 1]).
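An error report like the one above comes from PyTorch's `load_state_dict`: "missing" keys exist in the instantiated model but not in the checkpoint, "unexpected" keys exist only in the checkpoint, and size mismatches mean a parameter with the same name has a different shape. This usually indicates the checkpoint was saved from a different architecture or scale than the model being built (e.g. pairing `--scale 2` with `model_x4.pth`). Below is a minimal, framework-free sketch of how the missing/unexpected sets are derived from the two key sets; the key names are illustrative examples, not CF-Net's actual layers:

```python
# Sketch: reproduce load_state_dict's missing/unexpected key report by
# diffing a model's state_dict keys against a checkpoint's keys.
def diff_state_dict_keys(model_keys, checkpoint_keys):
    """Return (missing, unexpected) key lists, sorted for readability."""
    model_keys, checkpoint_keys = set(model_keys), set(checkpoint_keys)
    missing = sorted(model_keys - checkpoint_keys)      # in model, absent from checkpoint
    unexpected = sorted(checkpoint_keys - model_keys)   # in checkpoint, absent from model
    return missing, unexpected

# Illustrative (hypothetical) key names:
model_keys = [
    "block_over_0.compress_in.0.weight",  # present in both
    "block_over_0.re_guide.0.weight",     # only in the current model
]
checkpoint_keys = [
    "block_over_0.compress_in.0.weight",
    "conv_in_over_1.0.weight",            # only in the checkpoint
]

missing, unexpected = diff_state_dict_keys(model_keys, checkpoint_keys)
print(missing)      # → ['block_over_0.re_guide.0.weight']
print(unexpected)   # → ['conv_in_over_1.0.weight']
```

In PyTorch itself, `model.load_state_dict(state_dict, strict=False)` will tolerate missing/unexpected keys, but size mismatches on shared keys still raise an error, so the real fix is to use the checkpoint that matches the model definition and `--scale` flag.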

Models

Is there a trained model?
