
burstormer's Issues

Question for Cyclic Burst Sampling in NRFE.

I saw the Cyclic Burst Sampling module described in your paper, and I think it is very cleverly designed, but I can't find its definition anywhere in your code. Could you point me to where it is implemented? One more question: the weights you provide for Track 1 don't seem to match the model. Was the weight file uploaded incorrectly?

Thank you.

Regarding testing burst denoising on a real dataset

Hello @akshaydudhane16!

I want to test burst denoising on a real dataset. I've seen that the network takes as input the noisy image concatenated with a noise_estimate computed from the read and shot noise parameters.

  1. Why is this noise_estimate sent as an image, concatenated with the noisy image, as the network input?
  2. If I want to test on a real image, how can I compute this noise_estimate?

Could you please clarify the above?
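In case it helps, here is a minimal sketch of how such a noise map is often computed under the common heteroscedastic raw-noise model (variance = shot · signal + read); the calibration values below are made up, and for a real camera they would come from the raw file's metadata or a calibration step, so this is an assumption about the intended formula, not the authors' exact code:

```python
import torch

def estimate_noise_map(noisy: torch.Tensor,
                       shot_noise: float,
                       read_noise: float) -> torch.Tensor:
    """Per-pixel noise std. under the heteroscedastic Gaussian model
    commonly used for raw bursts: variance = shot * signal + read."""
    variance = shot_noise * noisy.clamp(min=0.0) + read_noise
    return variance.sqrt()

# Noisy burst frame (C, H, W); the map is concatenated as extra channels.
noisy = torch.rand(4, 48, 48)
noise_map = estimate_noise_map(noisy, shot_noise=0.01, read_noise=1e-4)
net_input = torch.cat([noisy, noise_map], dim=0)  # shape (8, 48, 48)
```

Feeding the map alongside the image lets the network condition its denoising strength on the expected per-pixel noise level.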

Thank you
Gopi

Can't reproduce the result for the BurstSR dataset (Track 2)

Hi, thanks for your wonderful work. I am also working on BurstSR, and your code has helped me a lot.

However, I have run into problems when fine-tuning an SR model on the BurstSR dataset. I first train the SR model on the synthetic dataset and everything is fine. Then I fine-tune this model on BurstSR. The training loss keeps decreasing, but the validation PSNR only grows at the beginning and then drops gradually after reaching its best value.
The attached image shows PSNR per epoch (x-axis: epoch, y-axis: PSNR).

The changes I made are as follows.

  1. spatial_color_alignment.py: changed the lstsq call from "c = torch.lstsq(ir.t(), iq.t())" to "c = torch.linalg.lstsq(iq.t(), ir.t())" due to the PyTorch version.
  2. correlation.py: changed "@cupy.util.memoize(for_each_device=True)" to "@cupy.memoize(for_each_device=True)" due to the CuPy version.
  3. PWC pre-trained weights: I used download.bash in the pwcnet folder.
  4. Training runs for 25 epochs, matching your original code.
  5. The training patch size is 24 and the validation patch size is 80, as in your released code.
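For what it's worth, change 1 looks correct: the deprecated `torch.lstsq(B, A)` minimized ||A·X − B||, while `torch.linalg.lstsq(A, B)` takes its arguments in (A, B) order, so swapping the operands preserves the solved system. A quick self-contained check with random stand-in matrices (not the actual `ir`/`iq` tensors):

```python
import torch

# torch.lstsq(B, A) solved min ||A @ X - B|| and was removed from PyTorch;
# torch.linalg.lstsq(A, B) takes arguments in (A, B) order, so the swap
# "(ir.t(), iq.t()) -> (iq.t(), ir.t())" keeps the same system.
A = torch.rand(10, 3)   # stand-in for iq.t()
B = torch.rand(10, 2)   # stand-in for ir.t()

x = torch.linalg.lstsq(A, B).solution   # least-squares solution, shape (3, 2)

# Sanity check against the pseudoinverse solution of the same system.
x_ref = torch.linalg.pinv(A) @ B
assert torch.allclose(x, x_ref, atol=1e-4)
```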

Have you encountered the same problem? I would appreciate it a lot if you could help me figure this out.

Thanks.

Question for training time

Thanks for releasing the code.

I would like to train your burst SR model using the released code, but I want to ask something before training.

In the paper, it seems the model was trained with 4 RTX 6000 GPUs.

Could you share how long the train took?

Thanks.
Luis Kang.

Checkpoint for Synthetic Burst SR does not match the model

Dear authors!

Thank you for sharing your work with the community. Recently, I tried to run inference with Burstormer on the Synthetic Burst SR validation set and found that the provided checkpoint does not match the model:

Missing key(s) in state_dict: "back_projection1.feat_fusion.0.weight", "back_projection1.feat_fusion.0.bias", "back_projection1.feat_expand.0.weight", "back_projection1.feat_expand.0.bias", "back_projection2.feat_fusion.0.weight", "back_projection2.feat_fusion.0.bias", "back_projection2.feat_expand.0.weight", "back_projection2.feat_expand.0.bias".
Unexpected key(s) in state_dict: "back_projection1.diff_fusion.weight", "back_projection1.feat_fusion.weight", "back_projection1.feat_expand.weight", "back_projection2.diff_fusion.weight", "back_projection2.feat_fusion.weight", "back_projection2.feat_expand.weight".
size mismatch for align.alignment0.back_projection.encoder1.0.norm1.body.weight: copying a param with shape torch.Size([96]) from checkpoint, the shape in current model is torch.Size([48]).
size mismatch for align.alignment0.back_projection.encoder1.0.norm1.body.bias: copying a param with shape torch.Size([96]) from checkpoint, the shape in current model is torch.Size([48]).
size mismatch for align.alignment0.back_projection.encoder1.0.attn.qk.weight: copying a param with shape torch.Size([192, 96, 1, 1]) from checkpoint, the shape in current model is torch.Size([96, 48, 1, 1]).

I doubled the number of channels in ref_back_projection.encoder1, but there is still a mismatch in no_ref_back_projection: the checkpoint contains diff_fusion block weights, but the Burstormer model does not have that block.

Can you please help to resolve this issue?
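For anyone debugging the same mismatch, a minimal sketch of diffing checkpoint keys against the model before loading (here with a dummy two-layer module standing in for Burstormer and made-up checkpoint keys, since the real checkpoint and model class are not reproduced in this issue):

```python
import torch
import torch.nn as nn

# Dummy module standing in for the Burstormer instance.
model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.Conv2d(8, 3, 3))

# Made-up checkpoint: one matching key, one extra key (like diff_fusion.*).
ckpt_state = {"0.weight": torch.zeros(8, 3, 3, 3),
              "extra.weight": torch.zeros(1)}

model_keys = set(model.state_dict())
ckpt_keys = set(ckpt_state)
missing = sorted(model_keys - ckpt_keys)       # keys the model wants
unexpected = sorted(ckpt_keys - model_keys)    # keys only the checkpoint has
print("missing:", missing)
print("unexpected:", unexpected)

# strict=False skips missing/unexpected keys, but it still raises on
# shape mismatches like the torch.Size([96]) vs torch.Size([48]) errors
# above, which indicates the architecture itself differs from the checkpoint.
model.load_state_dict(ckpt_state, strict=False)
```

The key diff makes it easy to see whether the checkpoint came from a different model revision (e.g. one that still contained diff_fusion).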

Question about training burst denoising (loss function, data generation, and training tricks)

Hello,

We noticed that you mention in the paper that all training used the L1 loss. However, in the burst denoising section you follow the experimental setups of KPN and BPN, and KPN employed the "basic loss" (L2 loss + gradient loss). We are curious which loss function you used for burst denoising.

Furthermore, for burst denoising, are there any additional steps or specific considerations when converting the Open Images data, beyond what is mentioned in KPN? Were any other training techniques applied during the burst denoising training phase? Currently, we are unable to reproduce the reported accuracy.
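For reference, the "basic loss" mentioned above can be sketched as L2 on pixels plus L2 on finite-difference image gradients; the gradient operator and the weighting below are our assumptions for illustration, not necessarily the authors' exact setup:

```python
import torch
import torch.nn.functional as F

def basic_loss(pred: torch.Tensor, target: torch.Tensor,
               grad_weight: float = 1.0) -> torch.Tensor:
    """Sketch of KPN-style basic loss: L2 on pixels + L2 on gradients.
    Horizontal/vertical finite differences serve as a simple gradient proxy."""
    l2 = F.mse_loss(pred, target)
    dx = lambda x: x[..., :, 1:] - x[..., :, :-1]
    dy = lambda x: x[..., 1:, :] - x[..., :-1, :]
    grad = F.mse_loss(dx(pred), dx(target)) + F.mse_loss(dy(pred), dy(target))
    return l2 + grad_weight * grad

pred = torch.rand(2, 1, 32, 32)
target = torch.rand(2, 1, 32, 32)
loss = basic_loss(pred, target)
```

The gradient term penalizes over-smoothing, which is one reason the choice between this loss and a plain L1 can matter for matching reported numbers.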

We eagerly await your response!

Regarding batch size

Dear authors,
thanks for open-sourcing the code!
I noticed that the first line in the network's forward method is always burst = burst[0].
Does training and inference therefore only support batch size 1? Or is there a trick to enable larger batch sizes?
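One safe workaround under that assumption (forward() drops the batch dimension via burst = burst[0], so the model processes one burst at a time) is to loop over the batch and stack the per-burst outputs; gradients still flow normally through the loop. A sketch with a stand-in for the real model:

```python
import torch

def fake_net(burst: torch.Tensor) -> torch.Tensor:
    """Stand-in for a model whose forward begins with `burst = burst[0]`."""
    burst = burst[0]                         # (frames, C, H, W)
    return burst.mean(dim=0, keepdim=True)   # (1, C, H, W)

batch = torch.rand(3, 8, 4, 16, 16)          # (N, frames, C, H, W)
out = torch.cat([fake_net(b.unsqueeze(0)) for b in batch], dim=0)
print(out.shape)                             # (3, 4, 16, 16)
```

Folding the batch dimension into the frame dimension instead would only be safe if no layer mixes information across frames; the alignment/fusion layers in burst models usually do, so the explicit loop is the conservative option.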
Thanks,
Ke

What is the purpose of the BFF module in the network?

Hello, thank you very much for open-sourcing the code. What is the function of the BFF module designed in this paper? I read its description in the paper, but unfortunately I didn't understand it. Could the authors explain the role of the BFF module in more detail? Thank you very much; I look forward to your reply.

Download link for the pre-trained weights is wrong; it points to the BIPNet pre-trained model

Hello. Thank you for releasing your impressive CVPR 2023 work, Burstormer!

I have one question about the download links of pre-trained model.

When I go to the trained-model page for burst super-resolution, the pre-trained weights are for BIPNet, not Burstormer.

I think the link is a mistake.

Could you update the link so it points to the Burstormer weights?

Thank you.

Question for pre-trained weights link.

Hello, thank you for open source code!

I have trouble using the pre-trained weights from the burst super-resolution page.
There are no weights for back_projection.

I checked the similar question posted earlier.
Could you double-check whether these are the correct pre-trained weights for Burstormer?

Thanks.

Request for Enhancement code release

Hi,
I am very thankful for what you have done! I would like to run some experiments with the enhancement function, and seeing the enhancement code would be a great help. Thank you!
