hdr-expandnet's People

Contributors

dmarnerides

hdr-expandnet's Issues

Image range processing in the experiment stage

Hi, @dmarnerides

Your work is really impressive!
I have a problem when comparing results across methods. Suppose the range of the ground truth is [0, 1000]; different methods may then predict images in different ranges, e.g. [0, 1] or [0, n]. From issue #9 I can see that there are two ways to scale an image's range in your evaluation. Did you rescale the images predicted by the other methods before computing PSNR, SSIM, etc.? If so, how did you do it? Thanks a lot!

About saving HDR images

In the preprocessing stage, the HDR image's range is mapped to [0, 1] with np.interp() after reading, and the same is done when saving the HDR image. Doesn't this lose information from the HDR image?
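For reference, a rescale of this kind is invertible as long as the original minimum and maximum are kept alongside the image; only the absolute scale (e.g. cd/m²) is discarded, not the relative dynamics. A minimal sketch, my own illustration of the normalization described above rather than the repo's code:

```python
import numpy as np

# Hypothetical linear radiance values spanning a wide dynamic range.
hdr = np.array([[0.01, 250.0], [1000.0, 3000.0]], dtype=np.float32)

lo, hi = hdr.min(), hdr.max()
# Map [lo, hi] onto [0, 1], as the preprocessing reportedly does.
normalised = np.interp(hdr, (lo, hi), (0.0, 1.0))

# The mapping is invertible given (lo, hi), so no relative information
# is lost; only the absolute scale is gone if (lo, hi) are discarded.
restored = normalised * (hi - lo) + lo
```

Whether the repo actually stores `(lo, hi)` when saving is the crux of the question.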

The domain of the pretrained model

According to the paper,

All the images contained linear RGB values

Is the pretrained model (./weights.pth) trained in the linear RGB domain?
If so, which domain is the input (image/video) of expand.py assumed to be in? Is it assumed to be non-linear and then converted to linear?
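For context, if the input did need linearizing, the standard sRGB decode would look like the sketch below; whether expand.py expects (or already performs) this step is exactly the question being asked, so treat its placement as an assumption:

```python
import numpy as np

def srgb_to_linear(x):
    """Inverse of the piecewise sRGB transfer function; x in [0, 1]."""
    x = np.asarray(x, dtype=np.float32)
    # Linear segment near black, power segment elsewhere (IEC 61966-2-1).
    return np.where(x <= 0.04045, x / 12.92, ((x + 0.055) / 1.055) ** 2.4)
```

One would apply this to an 8-bit LDR frame (divided by 255) before feeding a network trained on linear RGB.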

Question about video dataset

Thank you for your work.
I have a question about the LDR video dataset.

Could you share the dataset you used for the video test?

How to get the result of Table 2 and Table 3

Hello, @dmarnerides

I notice that ExpandNet produces images in the range [0, 1], while some other methods (like HDRCNN) produce ranges of [0, n]. The paper mentions that there are two ways to scale the HDR images: display-referred and scene-referred.

So, if I understand correctly: for display-referred evaluation, first normalize the HDR images to [0, 1] by dividing by n (for those in [0, n]), then multiply by 1024, then evaluate them.

For scene-referred evaluation, linearly scale ExpandNet's outputs to match the maximum value of the ground-truth EXR/HDR image, then evaluate them.

Am I right?

If not, it would be very helpful if you could provide the code used to produce Tables 2 and 3 in the paper.

Thanks!
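My reading of the two protocols, sketched with hypothetical helper functions (this is not the authors' evaluation code):

```python
import numpy as np

def psnr(a, b, peak):
    """PSNR in dB for two arrays already mapped to a common peak."""
    mse = np.mean((a - b) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def display_referred(pred, gt, peak=1024.0):
    # Normalize each image by its own maximum, then map to the display peak.
    return pred / pred.max() * peak, gt / gt.max() * peak

def scene_referred(pred, gt):
    # Linearly scale the prediction so its maximum matches the ground truth's.
    return pred * (gt.max() / pred.max()), gt
```

Max-normalization is only one plausible choice for the "divide by n" step; percentile-based scaling would also fit the description, which is why confirmation from the authors would help.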

Training, iterations and picture count

Hey,

How can I change the number of iterations? It stays at 0 it/s:
Epoch 264: 0it [00:00, ?it/s]

Also, can you tell me how many pictures from a given HDR input folder the machine processes? I have not found the answer yet.

Greetings, Nico :)

About the performance of the model in training

Hi, I used the dataset provided by the CVPR 2020 paper "Single-Image HDR Reconstruction by Learning to Reverse the Camera Pipeline" to train ExpandNet, but the loss value never decreases and the generated images are all black. I am sure I haven't changed anything except the training data. Why is that? Could the dataset be causing the problem?

How to get the in-house HDR image?

Hi, I want to know how you obtained the in-house HDR images. Did you capture them with a camera, and if so, how do you make sure those images are true HDR images? Thanks!

First line almost black

Hello,

I have noticed that the predicted images contain a darkened first line (and also darkened left and right borders).
It might be caused by zero-padding in the Conv2D layer, but I have not investigated the issue in-depth.

Example input image:
gcanyon_c_yumapoint_3k.exr

Original pixel values (a screenshot of the tev image viewer):
screen shot 2019-02-09 at 16 23 10

Prediction result with the artifact:
screen shot 2019-02-09 at 16 23 13

Please let me know in case you need more information.
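The zero-padding hypothesis is easy to illustrate in isolation. A minimal numpy sketch, with np.pad standing in for the Conv2D padding mode (that the model actually uses zero padding is the reporter's guess, not confirmed here):

```python
import numpy as np

# A flat white image: any border darkening must come from the padding.
img = np.ones((4, 4), dtype=np.float32)

zero_pad = np.pad(img, 1, mode="constant")    # borders see zeros
reflect_pad = np.pad(img, 1, mode="reflect")  # borders see mirrored pixels

# Average of a 3x3 window at the top-left corner (a box filter stands in
# for a convolution kernel):
corner_zero = zero_pad[:3, :3].mean()        # darkened: 4 ones out of 9
corner_reflect = reflect_pad[:3, :3].mean()  # unchanged: all ones
```

In PyTorch this would correspond to `padding_mode="reflect"` on the Conv2d layers, at the cost of retraining or at least re-verifying the pretrained weights.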

results in shadows

Hi, why are shadow-like artifacts appearing in the test results (as shown below)?
I couldn't find the cause of the shadows in the code. Thanks in advance!
test_indoor

Where can I get the dataset?

I want to train a model myself, but I couldn't find the dataset used in your paper. Could you provide the dataset that was used in your paper?

About training data formats

Hi, please: my RAW images are in DNG format. Can they be used directly for training? How does the code read DNG images?

ask for source code

Hello, I have been studying your paper recently. Can you share the training code? Thank you!

Can expanded output be in .exr format instead of .hdr format? or need more details on .hdr

  1. It would be great if the output could be written as .exr or another format instead of .hdr; is that possible?
  2. Curious: how can I learn more about the details of this ".hdr" format, e.g. the precision per pixel? Does it store linear fp16? What is the relationship between pixel values and nits? Also, can you suggest a free image viewer for .hdr files?
  3. If only ".hdr" is supported, can you recommend a tool to convert it to .exr or another format (e.g. TIFF)? Even better: eventually I am trying to find a way to convert a sequence of .hdr (or other format) images into an HEVC Main 10 (.mp4) video that can be viewed on an HDR monitor. Any suggestions?
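Regarding question 2: Radiance .hdr stores each pixel as RGBE, i.e. three 8-bit mantissas sharing one 8-bit exponent, so it holds relative linear radiance rather than fp16 or absolute nits. A sketch of the per-pixel encoding (my own illustration, not the repo's I/O code):

```python
import numpy as np

def rgbe_encode(rgb):
    """Encode linear RGB floats as Radiance RGBE (shared 8-bit exponent)."""
    rgb = np.asarray(rgb, dtype=np.float32)
    m = rgb.max(axis=-1, keepdims=True)
    mantissa, exponent = np.frexp(m)  # m = mantissa * 2**exponent
    scale = np.where(m > 0, mantissa * 256.0 / np.maximum(m, 1e-32), 0.0)
    rgbe = np.zeros(rgb.shape[:-1] + (4,), dtype=np.uint8)
    rgbe[..., :3] = np.clip(rgb * scale, 0, 255).astype(np.uint8)
    rgbe[..., 3:] = np.where(m > 0, exponent + 128, 0).astype(np.uint8)
    return rgbe

def rgbe_decode(rgbe):
    """Decode RGBE back to linear RGB floats."""
    e = rgbe[..., 3:].astype(np.int32)
    f = np.where(e > 0, np.ldexp(1.0, e - 136), 0.0)  # 136 = 128 + 8
    return rgbe[..., :3].astype(np.float32) * f
```

So precision is roughly 1 part in 256 relative to the brightest channel of each pixel, which is why very dim channels in a bright pixel quantize to zero; OpenEXR's fp16 avoids that, hence the interest in .exr output.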

iterating over png/jpg dataset from a folder

Is there a way to specify an absolute or relative path for an input folder so that all images in it will be processed? If not, can it be added? There is already an --out option for specifying the output directory, but no similar option for the input directory.
E.g., I tried "python expand.py in_dir --out results" and "python expand.py in_dir*.png --out results"; neither works, and I get the error "Could not load in_dir" or "Could not load in_dir*.png", even though the folder in_dir exists at the correct location.
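Until an input-directory option exists, one workaround is to expand the folder before calling the script. A small helper, noting that whether expand.py accepts multiple positional paths is an assumption worth checking against its argument parser (also, the glob attempt above was missing a slash: "in_dir/*.png" would let the shell expand it):

```python
from pathlib import Path

def collect_images(in_dir, exts=(".png", ".jpg", ".jpeg")):
    """Return sorted image paths directly under in_dir (non-recursive)."""
    return sorted(p for p in Path(in_dir).iterdir() if p.suffix.lower() in exts)

# Hypothetical usage, assuming expand.py takes one or more image paths:
#   paths = [str(p) for p in collect_images("in_dir")]
#   subprocess.run(["python", "expand.py", *paths, "--out", "results"])
```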
