
physg's People

Contributors

kai-46


physg's Issues

What is the color space?

It seems that the ground-truth images saved by plot_to_disk look different from those shown in tev.exe and in the paper. I found that the images saved by plot_to_disk are not gamma-corrected (gamma = 1), while the images in tev.exe and in the paper are gamma-corrected with gamma = 2.2.
Should I set gamma in the args when training, or ignore it and only apply gamma correction at visualization time?
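
For reference, a minimal sketch of the gamma correction being discussed (the helper name is ours, not from the repo):

    import numpy as np

    def tonemap(img_linear, gamma=2.2):
        # map a linear-radiance image into display space: I_out = I_in ** (1 / gamma)
        return np.clip(img_linear, 0.0, 1.0) ** (1.0 / gamma)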

Preparing DTU Dataset

Dear authors, I noticed that you ran experiments on the DTU dataset, but I am wondering how to prepare the DTU data to fit the model. Could you please provide the preprocessed scan114 Buddha object dataset, or a script to prepare the DTU dataset? Thanks.

Where to get the full dataset

Hi, thanks for this amazing work. Aside from the sample data, how can I get the full dataset for your project?

Gap of PSNR computing between eval.py and the paper

The rendering results look good, but the mean PSNR computed by evaluation/eval.py differs from the result reported in the paper. Specifically, I evaluated the kitty result using the same lighting as in training. The mean PSNR is about 22, while the PSNR in the paper is 36.45.

And here is the reason:

  1. eval.py computes PSNR in linear space instead of sRGB space, i.e., it does not apply $I_{out} = I_{in}^{1/2.2}$ as described in Tab. 3. Although line 204 runs tonemap_img, the gamma is 1.0.
  2. calculate_psnr in eval.py (line 396) computes the MSE over the masked area only, instead of over the whole picture. mse = np.mean((img1 - img2)**2) * (img2.shape[0] * img2.shape[1]) / mask.sum() at line 400 should be changed to mse = np.mean((img1 - img2)**2) (see the sketch after this list).
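
Putting both fixes together, a hedged sketch of the intended metric (the helper name and the assumption that inputs lie in [0, 1] are ours):

    import numpy as np

    def psnr_paper_protocol(img_pred, img_gt, gamma=2.2):
        # tonemap both images into sRGB-like space before comparing
        a = np.clip(img_pred, 0.0, 1.0) ** (1.0 / gamma)
        b = np.clip(img_gt, 0.0, 1.0) ** (1.0 / gamma)
        mse = np.mean((a - b) ** 2)  # full-image MSE, no mask renormalization
        return 10.0 * np.log10(1.0 / mse)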

Not good result on DTU dataset

I used your code on the DTU dataset; however, the generated meshes differ from the ground-truth meshes. Do I need to change the default config or the training parameters?

IDR rendering module

Hi, and congratulations on your interesting work!

I would like to ask about the role of the IDR rendering module in the optimization process. From reading the paper, I assumed that the material properties (including the diffuse albedo) are estimated by the MaterialEnvmap module you introduce in the paper.
However, in your code you use both your module and IDR's rendering module (both of which essentially estimate RGB values). In addition, the color values computed by both IDR's rendering module and the MaterialEnvmap module participate in the loss calculation.

Do you train both modules (IDR Rendering and MaterialEnvmap) for a specific reason, or just for comparison?
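
For context, the loss structure being asked about might look like this minimal sketch (the weights, names, and choice of L1 are assumptions, not the repo's actual code):

    import torch.nn.functional as F

    def combined_rgb_loss(rgb_idr, rgb_sg, rgb_gt, w_idr=1.0, w_sg=1.0):
        # both rendering branches are supervised against the same ground-truth pixels
        return w_idr * F.l1_loss(rgb_idr, rgb_gt) + w_sg * F.l1_loss(rgb_sg, rgb_gt)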

PS: In addition, I can't seem to find whether the EnvmapMaterial module's weights are initialized. Could you help?
Thanks for your time.

Incorrect normal and mesh on the back of bear dataset

Hi, I was running experiments on the bear dataset, and I noticed a large, incorrectly swollen section of mesh on the back of the bear, as well as an overhanging section of volume on top of it. I wonder whether this is expected? It is not shown in the supplementary material; is there some additional config, different from the one used for Kitty, needed to train the bear dataset well?

Synthetic dataset mask, normal and depth

Hi,

Would you mind sharing how you generate the mask, normal map, and depth map for your synthetic dataset, just as a reference? Did you also use background-removal tools? Thanks!

Pose estimation is failing

Hi,

I am trying to use your method on our own data, but the pose-estimation module drawn from nerfplusplus does not seem to recover the poses and point cloud correctly (though the point cloud is not relevant to this method). I assume the issue is that the input frames are specular, which makes it hard for COLMAP to estimate the poses.

How did you circumvent this problem?

[attached frames: test_00100, test_00115]

@Kai-46

gamma settings for png file

Hi Kai,

Thanks for releasing the code for this awesome work! We are trying to apply PhySG to NeRF images, which are PNG files rather than EXR files. In that case, how should we set the gamma? Can we simply set it to 2.2 for png/jpg images that are in sRGB space?

Thank you!

Best,
Wenzheng
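
For reference, a minimal sketch of linearizing an 8-bit sRGB PNG with the common 2.2 approximation (the file name is a placeholder):

    import imageio.v2 as imageio
    import numpy as np

    img_srgb = imageio.imread('frame.png').astype(np.float32) / 255.0  # display-space values
    img_linear = img_srgb ** 2.2                                       # undo the 2.2 gamma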

The lobe of $f_s$ is $w_r$ rather than the normal?

Hi Kai, I really appreciate your excellent work!
I just have a question about the implementation of the SG rendering code: here

final_lobes, final_lambdas, final_mus = lambda_trick(lgtSGLobes, lgtSGLambdas, lgtSGMus, warpBrdfSGLobes, warpBrdfSGLambdas, warpBrdfSGMus)

warpBrdfSGLobes here is the reflection direction of the view, rather than the surface normal.
But Eq. (8) in the paper says the Gaussian is aligned with the surface normal. Is this a small bug, or am I misunderstanding something?

[screenshot of Eq. (8) from the paper]
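
For clarity, the reflection direction in question is the view direction mirrored about the normal; a small sketch, assuming view_dirs point away from the surface (the function name is ours):

    import torch

    def reflect(view_dirs, normals):
        # w_r = 2 (w_o . n) n - w_o: mirror the view direction about the normal
        cos = (view_dirs * normals).sum(dim=-1, keepdim=True)
        return 2.0 * cos * normals - view_dirs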

By the way, I am looking forward to your future work!

Setting SG parameter for NDF

Thanks for the great work and for releasing the code!

A quick question about the renderer: in line 175 of sg_rendering.py, why did you use $\frac{1}{roughness^4 \cdot \pi}$ instead of 1 as the value of $\mu$? Are you using the Blinn-Phong model for the shape of D?
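
For context, a spherical Gaussian with amplitude $\mu$, sharpness $\lambda$, and lobe axis $\xi$ evaluates as $G(v) = \mu \, e^{\lambda (v \cdot \xi - 1)}$, so $\mu$ directly scales the peak of the fitted D. A minimal sketch (names are ours):

    import torch

    def eval_sg(v, lobe, lam, mu):
        # G(v) = mu * exp(lam * (dot(v, lobe) - 1)); mu is the value at v == lobe
        return mu * torch.exp(lam * ((v * lobe).sum(dim=-1, keepdim=True) - 1.0))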

Environment Issue

Thanks for your great work!
I am trying to run the repo on a 2080Ti/3090/A30 and hit the error "CUBLAS_STATUS_EXECUTION_FAILED when calling `cublasSgemm...". In case someone else hits the same problem, here is a record of my solution.
According to Google, one possible fix is to upgrade from CUDA 10 to CUDA 11; upgrading cudatoolkit in Anaconda from 10.2 to 11.x should solve the problem. However, due to the strict version pins in the .yml and a bug in Anaconda, the upgrade cannot be performed automatically. Just manually install CUDA 11, PyTorch, and the other dependencies from scratch (installing each one as you hit a "No module named xxx" error).
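
A quick sanity check after reinstalling, to confirm PyTorch actually sees CUDA 11 and the GPU:

    import torch

    print(torch.__version__)          # PyTorch build
    print(torch.version.cuda)         # CUDA version PyTorch was compiled against
    print(torch.cuda.is_available())  # True if the GPU is usable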

Rendering using Mitsuba

Hi authors,
I have been trying to use Mitsuba to render objects under different environment maps. I am doing this to understand the values of the different PBR elements (roughness, specular, etc.) and the intrinsic and extrinsic parameters for IDR training. There are two challenges I am facing:

  1. The 3D data you provide has no texture, so I am not able to generate the data the way you did.
  2. The 3D data I have does not produce good results with an environment map (even though I set specular to 1 and roughness to 0).

Can you provide the full data (at least for kitty) or the rendering code (to check how to set the camera parameters and the other PBR elements)?

Implementation Question

Hi @Kai-46,
In your actual implementation of the lobe sharpness in Eq. (8) in the figure below, why do you treat the half-angle vector $\mathbf{h}$ the same as the lobe axis in L185?

[screenshot of Eq. (8) from the paper]
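
For reference, the half-angle vector in question; a minimal sketch (the function name is ours):

    import torch

    def half_vector(w_i, w_o):
        # h = (w_i + w_o) / ||w_i + w_o||: the direction halfway between light and view
        h = w_i + w_o
        return h / h.norm(dim=-1, keepdim=True).clamp_min(1e-8)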

Why can't I get glossy results when training on Blender images?

Hi @Kai-46,
thank you for your excellent work!! However, I ran into a problem when training on my own images. I rendered the images with the Blender renderer and kept the training set in the same format as your kitty dataset. The training process seems fine, except that it never produces glossy results. Do you have any idea why this happens? Is there perhaps a difference between Mitsuba and Blender, or do the rendered images need post-processing?

I really hope to get your reply, thanks in advance!

Avoiding a dark diffuse albedo

Hi guys! I also ran into the problem of the diffuse albedo becoming dark when training on custom data. Here is my experience of when this can happen:

  1. the RGB image or the environment light is not bright enough
  2. the initial specular albedo is too large

And here is my way to avoid this situation: initialize the specular albedo to a tiny value, for example 0.1 (see the sketch below).
Hope this helps!
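
A hypothetical sketch of that initialization, assuming the specular albedo comes from a sigmoid-activated linear head (all names here are ours, not the repo's):

    import torch
    import torch.nn as nn

    specular_head = nn.Linear(256, 3)  # hypothetical output layer for specular albedo
    with torch.no_grad():
        # sigmoid(bias) == 0.1 at initialization, so bias = logit(0.1)
        specular_head.bias.fill_(torch.logit(torch.tensor(0.1)).item())
        specular_head.weight.zero_()   # start exactly at 0.1 regardless of input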
