swz30 / mirnetv2
[TPAMI 2022] Learning Enriched Features for Fast Image Restoration and Enhancement. Results on Defocus Deblurring, Denoising, Super-resolution, and image enhancement
License: Other
Hi, I am attempting to reproduce the results of this paper in TensorFlow. I have been referring to both the implementation details mentioned in the paper and the code in this repository. I have a question about the SKFF (selective kernel feature fusion) block. The paper states that the Fuse operation in SKFF combines the multi-scale feature maps using element-wise summation; however, upon referring to this line in the code, I found that the features are concatenated. Could you please clarify whether the features should be summed or concatenated?
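For reference, the Fuse/Select mechanism described in the paper can be sketched as below. This is a hypothetical re-implementation for illustration only, not the repository's actual module: the fuse step uses element-wise summation as the paper states, followed by global pooling, a channel-reducing convolution, and per-branch softmax attention.

```python
import torch
import torch.nn as nn

class SKFFSketch(nn.Module):
    """Hedged sketch of selective kernel feature fusion (not the repo's code).

    Fuse: multi-scale inputs are combined by element-wise summation.
    Select: softmax attention over branches reweights and recombines them.
    """
    def __init__(self, channels, num_branches=2, reduction=8):
        super().__init__()
        d = max(channels // reduction, 4)  # reduced descriptor dimension
        self.squeeze = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),       # global average pooling
            nn.Conv2d(channels, d, 1),
            nn.ReLU(inplace=True),
        )
        # one attention head per input branch
        self.attn = nn.ModuleList(
            [nn.Conv2d(d, channels, 1) for _ in range(num_branches)]
        )
        self.softmax = nn.Softmax(dim=0)   # normalize across branches

    def forward(self, feats):
        # feats: list of (B, C, H, W) tensors from the multi-scale streams
        stacked = torch.stack(feats, dim=0)        # (nb, B, C, H, W)
        fused = stacked.sum(dim=0)                 # Fuse: element-wise sum
        z = self.squeeze(fused)                    # (B, d, 1, 1)
        scores = torch.stack([a(z) for a in self.attn], dim=0)
        weights = self.softmax(scores)             # (nb, B, C, 1, 1)
        return (stacked * weights).sum(dim=0)      # Select: weighted sum
```

Whether the released code concatenates before the descriptor computation is exactly the discrepancy this question raises; the sketch above follows the paper's wording.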
Hi, can you tell me whether the input images need to be normalized in MIRNet v1 and MIRNet v2?
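A minimal sketch of the usual convention, under the assumption that this repo follows typical basicsr-style pipelines (scaling 8-bit images to [0, 1] floats); verify against the repository's dataset/loader code before relying on it:

```python
import numpy as np

def to_model_input(img_uint8):
    # Assumption: inputs are scaled to [0, 1] before the network,
    # as basicsr-style data loaders commonly do (img / 255.0).
    return img_uint8.astype(np.float32) / 255.0
```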
Within the MRB, could there be a connection scheme between SKFF and RCB that works better than the current one?
What if we combine the three head SKFF from MIRNet v1 and the SKFF from MIRNet v2?
Have you considered using TensorRT to accelerate the inference code in the denoising section?
The output SR image shape is the same as the input LR image; no super-resolution is performed.
Can I get the experiment details for progressive training?
Is the LR image's resolution equal to that of the HR (SR) image?
When testing on the DND dataset, the generated results must be uploaded to the official website. Do you upload the 50 generated .mat files directly? Why do I get errors after uploading?
Hi @swz30 @adityac8
I have published a tutorial on Kaggle demonstrating a TensorFlow implementation of MIRNetv2 for low-light image enhancement in collaboration with @SauravMaheshkar. We are also working on a complete implementation of MIRNetv2 to reproduce the results on all the tasks mentioned in the paper using TensorFlow and Keras here in collaboration with @ariG23498 and @SauravMaheshkar.
We would be grateful if you can share any feedback on our work.
Thank you for your excellent work!
For low-light image enhancement, the reported metrics on the LOL dataset are 24.74 (PSNR) and 0.851 (SSIM). I would like to know how many epochs the model was trained for to reach these metrics.
Thank you for your awesome code!
I am hoping you might open-source the log files from training, e.g. the training and validation loss as a function of epoch (and/or batch), along with an estimate of the runtime.
This implementation says the model was trained "on 8 GPUs" for the LOL dataset. Which GPU model was used, and how long did training take?
I ask because I want to train on my own dataset and find that it runs slowly on my 3090. Thank you.
Thank you for sharing an excellent work.
I want to apply the pre-trained models to a conventional super-resolution dataset such as Set5 and evaluate the performance.
But the output of the pre-trained models is not scaled, i.e., the output size is equal to the input size.
How can I scale the output size of the pre-trained model?
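One hedged workaround, assuming the released model expects inputs already at the target resolution (as the same-size output above suggests): upsample the LR image to the HR size first (bicubic in practice, e.g. via OpenCV or PIL), then run the network as a same-size restorer. A minimal nearest-neighbor stand-in for that pre-upsampling step:

```python
import numpy as np

def pre_upscale(lr, scale):
    # Nearest-neighbor stand-in for the bicubic pre-upsampling you would
    # use in practice. `lr` is an (H, W, C) image array; the result has
    # the HR spatial size, ready to feed a same-size restoration network.
    return np.repeat(np.repeat(lr, scale, axis=0), scale, axis=1)
```

Whether this matches how the authors evaluated super-resolution is an assumption; check the repository's dataset preparation scripts.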
I am looking for the basic color gamut mapping algorithms from your papers "Gamut Mapping in Cinematography Through Perceptually-Based Contrast Modification" and "Vision Models for Wide Color Gamut Imaging in Cinema", but I could not find them. Could you share your code with me? Thank you very much. My email is [email protected].
How can I test the defocus-deblurring task?
Thanks
The error is as follows:
ModuleNotFoundError:
No module named 'basicsr'
basicsr is a folder inside the project, so why does importing it raise an error?
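One common cause, sketched below under the assumption that the script is launched from outside the repository root: the `basicsr/` package directory is not on `sys.path`. The path used here is hypothetical; point it at your local clone (running `python setup.py develop` from the repo root, if the repo provides one, is the usual alternative):

```python
import sys
from pathlib import Path

# Hypothetical location of your local clone; adjust to your setup.
repo_root = Path("/path/to/MIRNetv2")

if repo_root.exists():
    # Put the repo root (which contains the basicsr/ package) on
    # sys.path so that `import basicsr` resolves.
    sys.path.insert(0, str(repo_root))
```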
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 28610) of binary: /home/xtzg/anaconda3/envs/pytorch1.1
My settings are as follows.
In train.sh:
python -m torch.distributed.run --nproc_per_node=1 --master_port=4321 basicsr/train.py -opt $CONFIG --launcher pytorch
In the yaml config file I also changed shuffle to false:
# data loader
use_shuffle: false       # true
num_worker_per_gpu: 0    # 8
batch_size_per_gpu: 1    # 8
Hello my friends, your work is excellent, but I have a question: why do you use 2 RCBs in each MRB? In the paper's ablation studies (Table 11), 3 RCBs give better results than 2 RCBs.