dmarnerides / hdr-expandnet

Training and inference code for ExpandNet
License: Other
Where can I get the dataset needed for training?
I want to train a model myself, but I couldn't find the dataset used in your paper. Could you provide it?
Is there a way to specify an absolute or relative path for the input folder so that all images in it are processed? If not, could it be added? There is already an --out option for specifying the output directory, but no similar option for the input directory.
E.g. I tried "python expand.py in_dir --out results" and "python expand.py in_dir*.png --out results"; neither works, and I get the error "Could not load in_dir" or "Could not load in_dir*.png", even though the in_dir folder exists at the correct location.
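(As a workaround until such an option exists, a small wrapper can glob the folder and call expand.py once per image. This is a minimal sketch assuming expand.py accepts a single image path plus --out, as in the commands above; the script name batch_expand.py and the extension list are my own.)

```python
# batch_expand.py -- hypothetical wrapper; assumes expand.py takes a single
# image path and an --out directory, as in the commands in the issue above.
import glob
import os
import subprocess
import sys

in_dir, out_dir = sys.argv[1], sys.argv[2]
os.makedirs(out_dir, exist_ok=True)

# Collect common LDR image extensions from the input folder.
paths = []
for ext in ("*.png", "*.jpg", "*.jpeg"):
    paths.extend(glob.glob(os.path.join(in_dir, ext)))

for path in sorted(paths):
    # One expand.py call per image, writing all results to out_dir.
    subprocess.run(["python", "expand.py", path, "--out", out_dir], check=True)
```

Usage would then be: python batch_expand.py in_dir results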
In the pre-processing stage, the range of the HDR image is mapped to [0, 1] by np.interp() after reading it, and the same is done when saving the HDR image. Won't information in the HDR image be lost this way?
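(My reading of such an np.interp() call, not the repo's exact code: it is a linear rescale, so relative luminance ratios are preserved and only the absolute scale is lost unless the original min/max are stored. A small sketch:)

```python
import numpy as np

hdr = np.random.rand(4, 4).astype(np.float32) * 1000.0  # stand-in HDR data
lo, hi = hdr.min(), hdr.max()

# Linear rescale to [0, 1]; ratios between pixel values are unchanged.
norm = np.interp(hdr, (lo, hi), (0.0, 1.0))

# If the original (lo, hi) are kept, the absolute values are recoverable:
restored = norm * (hi - lo) + lo
assert np.allclose(restored, hdr, rtol=1e-5)
```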
Hi,
Thank you for sharing this great work!
I wonder which version of PyTorch you used for this project?
Hey,
How can I change the number of iterations? It stays at 0 iterations:
Epoch 264: 0it [00:00, ?it/s]
Can you tell me how many pictures from a given HDR input folder the machine processes? I have not found the answer yet.
Greetings, Nico :)
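(Not a confirmed diagnosis, but tqdm printing "0it [00:00, ?it/s]" usually means the DataLoader yielded nothing, i.e. zero training images were found. A hedged sanity check; the FolderDataset class and path below are placeholders for whatever the training script actually constructs:)

```python
import glob
import os
from torch.utils.data import DataLoader, Dataset

class FolderDataset(Dataset):
    """Placeholder dataset: lists HDR image files in a directory."""
    def __init__(self, root):
        self.paths = sorted(glob.glob(os.path.join(root, "*.hdr"))
                            + glob.glob(os.path.join(root, "*.exr")))
    def __len__(self):
        return len(self.paths)
    def __getitem__(self, i):
        return self.paths[i]

dataset = FolderDataset("path/to/hdr_training_data")  # placeholder path
print("images found:", len(dataset))   # 0 here explains "0it [00:00, ?it/s]"

loader = DataLoader(dataset, batch_size=8)
print("batches per epoch:", len(loader))  # tqdm iterates this many times
```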
According to the paper,
"All the images contained linear RGB values."
Is the pre-trained model (./weights.pth) trained in the linear RGB domain?
If so, in which domain is the input (image/video) to expand.py? Is it assumed to be non-linear and converted to linear?
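(For reference, LDR inputs stored as sRGB are commonly linearized before being fed to a model trained on linear RGB. A minimal sketch of the standard sRGB decoding, not necessarily what expand.py does internally:)

```python
import numpy as np

def srgb_to_linear(img):
    """Standard sRGB EOTF: decode gamma-encoded values in [0, 1] to linear RGB."""
    img = np.asarray(img, dtype=np.float32)
    return np.where(img <= 0.04045,
                    img / 12.92,
                    ((img + 0.055) / 1.055) ** 2.4)

ldr = np.array([0.0, 0.5, 1.0], dtype=np.float32)
print(srgb_to_linear(ldr))  # approx [0.0, 0.214, 1.0]
```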
Hi, please: my RAW images are in DNG format. Can they be used directly for training? How does the code read DNG images?
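(The repository does not document DNG input, but as an assumption-laden sketch, a DNG file can be decoded to an approximately linear RGB array with the rawpy package before handing it to the training pipeline; the filename is a placeholder:)

```python
import numpy as np
import rawpy

# Decode a DNG to a 16-bit RGB array. gamma=(1, 1) and no auto-brightening
# keep the output (approximately) linear, matching the paper's linear-RGB data.
with rawpy.imread("photo.dng") as raw:  # placeholder filename
    rgb = raw.postprocess(gamma=(1, 1), no_auto_bright=True, output_bps=16)

img = rgb.astype(np.float32) / 65535.0  # scale to [0, 1]
print(img.shape, img.min(), img.max())
```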
Hello,
I have noticed that the predicted images contain a darkened first row (and also darkened left and right borders).
It might be caused by zero-padding in the Conv2d layers, but I have not investigated the issue in depth.
Original pixel values (a screenshot from the tev image viewer): [image omitted]
Prediction result with the artifact: [image omitted]
Please let me know in case you need more information.
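(One hedged way to test that hypothesis: PyTorch's nn.Conv2d supports padding_mode='reflect', which mirrors real image content into the border instead of injecting zeros. The layer sizes below are illustrative, not ExpandNet's actual configuration:)

```python
import torch
import torch.nn as nn

x = torch.rand(1, 3, 64, 64)

# Zero padding injects black pixels at the border, which can darken edge rows.
conv_zero = nn.Conv2d(3, 8, kernel_size=3, padding=1, padding_mode='zeros')

# Reflection padding mirrors real image content into the border instead.
conv_reflect = nn.Conv2d(3, 8, kernel_size=3, padding=1, padding_mode='reflect')
conv_reflect.load_state_dict(conv_zero.state_dict())  # same weights for a fair comparison

diff = (conv_zero(x) - conv_reflect(x)).abs()
print("max interior diff:", diff[..., 1:-1, 1:-1].max().item())  # 0: interior unaffected
print("max border diff:  ", diff.max().item())                   # > 0: only borders change
```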
Hi, I used the dataset provided by the CVPR 2020 paper "Single-Image HDR Reconstruction by Learning to Reverse the Camera Pipeline" to train ExpandNet, but the loss never decreases and the generated images are all black. I am sure I haven't changed anything except the training data. Why is that? Could the dataset be causing the problem?
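(Not a confirmed answer, but a non-decreasing loss plus all-black outputs often points to a data problem, e.g. a wrong value range or NaNs, rather than the model. A hedged first check, assuming batches come as (LDR, HDR) tensor pairs; the pre-processing discussed above suggests values should lie roughly in [0, 1]:)

```python
import torch

def check_batch(ldr, hdr):
    """Sanity-check one (LDR, HDR) training pair before blaming the network."""
    for name, t in (("ldr", ldr), ("hdr", hdr)):
        assert torch.isfinite(t).all(), name + " contains NaN or Inf"
        print(f"{name}: min={t.min().item():.4f} "
              f"max={t.max().item():.4f} mean={t.mean().item():.4f}")

# Example with stand-in tensors; run this on a real batch from the new dataset.
check_batch(torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64))
```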
Thank you for your work.
I have a question about the LDR video dataset.
Can you share the dataset you used for the video test?
It gives the same LDR output.
Hello, @dmarnerides
I noticed that ExpandNet produces images in the range [0, 1], while some other methods (like HDRCNN) produce ranges of [0, n]. The paper mentions that there are two ways to scale HDR images: display-referred and scene-referred.
So, if I understand correctly: for display-referred, first normalize the HDR images to [0, 1] by dividing by n (for those in [0, n]), then multiply by 1024, and then evaluate them.
For scene-referred, linearly scale the outputs of ExpandNet to match the maximum value of the ground-truth EXR/HDR image, and then evaluate them.
Am I right?
If not, it would be very helpful if you could provide some code to reproduce the results of Tables 2/3 in the paper.
Thanks!
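(Not an official answer, just a minimal sketch of the two scalings exactly as described above; the DISPLAY_MAX constant and function names are my own, not from the authors' evaluation code.)

```python
import numpy as np

DISPLAY_MAX = 1024.0  # display-referred target range from the discussion above

def display_referred(pred):
    """Normalize a prediction to [0, 1] by its maximum, then scale to [0, 1024]."""
    return pred / pred.max() * DISPLAY_MAX

def scene_referred(pred, gt):
    """Linearly scale a prediction to match the ground truth's maximum."""
    return pred / pred.max() * gt.max()

# Example: an HDRCNN-style prediction in [0, n] and a ground truth in [0, 1000].
pred = np.random.rand(8, 8).astype(np.float32) * 37.0
gt = np.random.rand(8, 8).astype(np.float32) * 1000.0
print(display_referred(pred).max())    # 1024.0
print(scene_referred(pred, gt).max())  # equals gt.max()
```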
Hi, @dmarnerides
Your work is really impressive!
Here I have a problem when comparing results with other methods. Suppose the range of the ground truth is [0, 1000]; different methods may then predict images in different ranges, either [0, 1] or [0, n]. From Issue #9, I can see that there are two ways to scale the range of an image in your network. I would like to know whether you scaled the range of images predicted by other methods before computing PSNR, SSIM, etc.? If so, how did you do it? Thanks a lot!
Hi, I want to know how you obtained the in-house HDR images. Did you use a camera to take those pictures, and if so, how do you make sure they are HDR images? Thanks!
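(Not the authors' answer, but the usual way to capture HDR images in-house is exposure bracketing: several LDR shots of the same scene at different exposure times, merged into one radiance map. A sketch using OpenCV's Debevec merge; the file names and exposure times are placeholders:)

```python
import cv2
import numpy as np

# Three bracketed exposures of the same scene (placeholder file names).
files = ["exp_short.jpg", "exp_mid.jpg", "exp_long.jpg"]
times = np.array([1 / 500.0, 1 / 60.0, 1 / 8.0], dtype=np.float32)  # seconds

images = [cv2.imread(f) for f in files]

# Merge the bracketed shots into a single linear radiance map (.hdr).
merge = cv2.createMergeDebevec()
hdr = merge.process(images, times=times)
cv2.imwrite("merged.hdr", hdr)
```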
Hello, I have been studying your paper recently. Can you share the training code? Thank you!
If I want to use the Durand TMO, it errors out.
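(A likely cause, though not confirmed by the author: OpenCV 4 moved TonemapDurand out of the main package into opencv-contrib, so cv2.createTonemapDurand no longer exists in plain opencv-python builds. A hedged workaround sketch:)

```python
import cv2
import numpy as np

hdr = np.random.rand(64, 64, 3).astype(np.float32)  # stand-in HDR image

# OpenCV < 4 shipped Durand in the main module; OpenCV 4 moved it to the
# contrib package (pip install opencv-contrib-python), under cv2.xphoto.
if hasattr(cv2, "createTonemapDurand"):
    tmo = cv2.createTonemapDurand()
elif hasattr(cv2, "xphoto") and hasattr(cv2.xphoto, "createTonemapDurand"):
    tmo = cv2.xphoto.createTonemapDurand()
else:
    tmo = cv2.createTonemapReinhard()  # fallback available in all builds

ldr = tmo.process(hdr)
```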
I ran this code, but I don't get any luminance in my HDR output.
This would help jump-start testing the network for our research. Thanks very much.
The author is working on his PhD and has not had time to clean up the training code. This project may be a good reference for people looking to train: https://github.com/echolijinghui/ExpandNet