ICCV23_LED_LOGO

💡 LED: Lighting Every Darkness in Two Pairs!

This repository contains the official implementation of the following paper:

Lighting Every Darkness in Two Pairs: A Calibration-Free Pipeline for RAW Denoising
Xin Jin*, Jia-Wen Xiao*, Ling-Hao Han, Chunle Guo#, Ruixun Zhang, Xialei Liu, Chongyi Li
(* denotes equal contribution.)
In ICCV 2023

[Homepage] [Paper] [Google Drive / Baidu Cloud] [Zhihu] [Poster] [Slides] [Video]

Comparison with Calibration-Based Methods ICCV23_LED_TEASER0

LED is a Calibration-Free Pipeline for RAW Denoising (currently for extremely low-light conditions).

So tired of calibrating the noise model? Try our LED!
Achieving SOTA performance with only 2 paired data and less than 4 minutes of training time!

ICCV23_LED_TEASER1 ICCV23_LED_TEASER2
More teasers: ICCV23_LED_TEASER3 ICCV23_LED_TEASER4

📰 News

Future work can be found in todo.md.

  • Sep 27, 2023: Added links to our Poster, Slides, and Video.
  • Aug 19, 2023: Released the relevant files on Baidu Cloud.
  • Aug 15, 2023: For faster benchmarking, we released the relevant files in commit fadffc7.
  • Aug, 2023: We released a Chinese explanation of our paper on Zhihu.
  • Aug, 2023: Our code is publicly available!
  • July, 2023: Our paper "Lighting Every Darkness in Two Pairs: A Calibration-Free Pipeline for RAW Denoising" was accepted by ICCV 2023.

🔧 Dependencies and Installation

  1. Clone and enter the repo:
    git clone https://github.com/Srameo/LED.git ICCV23-LED
    cd ICCV23-LED
  2. Simply run install.sh for installation! Or refer to install.md for more details.

    We use the customized rawpy package from ELD; if you prefer not to use it, or want more information, please see install.md.

    bash install.sh
  3. Activate your env and start testing!
    conda activate LED-ICCV23

✨ Pretrained Models

If your requirement is for academic research and you would like to benchmark our method, please refer to pretrained-models.md, where we have a rich variety of models available across a diverse range of methods, training strategies, pre-training, and fine-tuning models.

We are currently dedicated to training an exceptionally capable network that generalizes well to various scenarios using only two data pairs! We will update this section once we achieve our goal. Stay tuned!
Or you can simply use one of the following pretrained LED modules to customize a denoiser for your own camera! (Please follow the instructions in Quick Demo.)

| Method | Noise Model | Phase | Framework | Training Strategy | Additional Dgain (ratio) | Camera Model | Validation on | Download Links | Config File |
| :----: | :---------: | :---: | :-------: | :---------------: | :----------------------: | :----------: | :-----------: | :------------: | :---------: |
| LED | ELD (5 Virtual Cameras) | Pretrain | UNet | PMN | 100-300 | - | - | [Google Drive] | options/LED/pretrain/MM22_PMN_Setting.yaml |
| LED | ELD (5 Virtual Cameras) | Pretrain | UNet | ELD | 100-300 | - | - | [Google Drive] | options/LED/pretrain/CVPR20_ELD_Setting.yaml |
| LED | ELD (5 Virtual Cameras) | Pretrain | UNet | ELD | 1-200 | - | - | [Google Drive] | options/LED/pretrain/CVPR20_ELD_Setting_Ratio1-200.yaml |

📷 Quick Demo

Get Clean Images in the Dark!

We provide a script for testing your own RAW images: scripts/image_process.py.
Run python scripts/image_process.py --help for detailed information about this script.

If your camera model is one of {Sony A7S2, Nikon D850}, you can find our pretrained models in pretrained-models.md.

usage: image_process.py [-h] -p PRETRAINED_NETWORK --data_path DATA_PATH [--save_path SAVE_PATH] [-opt NETWORK_OPTIONS] [--ratio RATIO] [--target_exposure TARGET_EXPOSURE] [--bps BPS] [--led]

optional arguments:
  -h, --help            show this help message and exit
  -p PRETRAINED_NETWORK, --pretrained_network PRETRAINED_NETWORK
                        the pretrained network path.
  --data_path DATA_PATH
                        the folder where contains only your raw images.
  --save_path SAVE_PATH
                        the folder where to save the processed images (in rgb), DEFAULT: 'inference/image_process'
  -opt NETWORK_OPTIONS, --network_options NETWORK_OPTIONS
                        the arch options of the pretrained network, DEFAULT: 'options/base/network_g/unet.yaml'
  --ratio RATIO, --dgain RATIO
                        the ratio/additional digital gain you would like to add on the image, DEFAULT: 1.0.
  --target_exposure TARGET_EXPOSURE
                        Target exposure, activate this will deactivate ratio.
  --bps BPS, --output_bps BPS
                        the bit depth for the output png file, DEFAULT: 16.
  --led                 if you are using a checkpoint fine-tuned by our led.
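As a rough illustration of the relationship between --target_exposure and --ratio (the helper below is hypothetical, not part of the repository): the additional digital gain is simply the factor needed to bring the shot's capture exposure up to the target exposure.

```python
def dgain_from_exposure(current_exposure: float, target_exposure: float) -> float:
    """Hypothetical helper: compute the additional digital gain (the --ratio
    value) that scales a RAW frame from its capture exposure (in seconds)
    to a desired target exposure."""
    if current_exposure <= 0:
        raise ValueError("exposure must be positive")
    return target_exposure / current_exposure

# e.g. a 1/10 s capture brightened to an effective 10 s exposure
print(dgain_from_exposure(0.1, 10.0))  # -> 100.0
```

In other words, passing --target_exposure lets the script derive this gain for you, which is why it deactivates a manually supplied --ratio.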

Fine-tune for Your Own Camera!

A detailed doc can be found in issue#8.

  1. Collect noisy-clean image pairs for your camera model, following the instructions in demo.md.
  2. Select a LED pretrained model from our model zoo (based on the additional dgain you want to add to the image), and fine-tune it with your data!
    python scripts/cutomized_denoiser.py -t [TAG] \
                                         -p [PRETRAINED_LED_MODEL] \
                                         --dataroot your/path/to/the/pairs \
                                         --data_pair_list your/path/to/the/txt
    # Then the checkpoints can be found in experiments/[TAG]/models
    # If you are a seasoned user of BasicSR, you can use "--force_yml" to further fine-tune the details of the options.
  3. Get ready and test your denoiser! (move to Get Clean Images in the Dark!).
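The --data_pair_list argument points to a plain-text listing of your noisy/clean pairs. The sketch below is a hypothetical helper for generating such a file; it assumes one "noisy clean" filename pair per line and pairing by sorted order, so check demo.md for the exact format your setup expects.

```python
from pathlib import Path

def write_pair_list(noisy_dir: str, clean_dir: str, out_txt: str) -> int:
    """Hypothetical helper: pair noisy and clean frames by sorted filename
    order and write one 'noisy clean' entry per line. Verify the exact
    expected format against demo.md before training."""
    noisy = sorted(Path(noisy_dir).iterdir())
    clean = sorted(Path(clean_dir).iterdir())
    if len(noisy) != len(clean):
        raise ValueError("each noisy frame needs exactly one clean mate")
    lines = [f"{n.name} {c.name}" for n, c in zip(noisy, clean)]
    Path(out_txt).write_text("\n".join(lines) + "\n")
    return len(lines)
```

Pairing by sorted order is only safe if the two folders use matching naming schemes; rename your captures consistently before generating the list.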

🤖 Training and Evaluation

Please refer to benchmark.md to learn how to benchmark LED and how to train a new model from scratch.

🚧 Further Development

If you would like to develop/use LED in your projects, please let us know; we will list your project in this repository.
We also provide useful tools for further development; please refer to develop.md.

📖 Citation

If you find our repo useful for your research, please consider citing our paper:

@inproceedings{jiniccv23led,
    title={Lighting Every Darkness in Two Pairs: A Calibration-Free Pipeline for RAW Denoising},
    author={Jin, Xin and Xiao, Jia-Wen and Han, Ling-Hao and Guo, Chunle and Zhang, Ruixun and Liu, Xialei and Li, Chongyi},
    booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
    year={2023}
}

📜 License

This code is licensed under the Creative Commons Attribution-NonCommercial 4.0 International license for non-commercial use only. Please note that any commercial use of this code requires formal permission prior to use.

📮 Contact

For technical questions, please contact xjin[AT]mail.nankai.edu.cn and xiaojw[AT]mail.nankai.edu.cn.

For commercial licensing, please contact cmm[AT]nankai.edu.cn.

๐Ÿค Acknowledgement

This repository borrows heavily from BasicSR, Learning-to-See-in-the-Dark and ELD.
We would like to extend heartfelt gratitude to Ms. Li Xinru for crafting the exquisite logo for our project.
