Kai-46 / PhySG
Code for PhySG: Inverse Rendering with Spherical Gaussians for Physics-based Relighting and Material Editing
License: MIT License
It seems that the ground truth images saved by plot_to_disk look different from the ones shown in tev.exe and in the paper. I find that the images saved by plot_to_disk are not gamma-corrected (gamma = 1), while the images in tev.exe and in the paper are gamma-corrected with gamma = 2.2.
Should I set gamma in args when training, or ignore it and just apply gamma correction at visualization time?
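For reference, a minimal sketch of the kind of gamma correction that display tools apply at visualization time (the helper name is hypothetical, not from the repo):

```python
import numpy as np

def gamma_correct(img, gamma=2.2):
    """Map a linear-space image in [0, 1] to display space (hypothetical helper)."""
    return np.clip(img, 0.0, 1.0) ** (1.0 / gamma)

# Mid-gray linear values brighten noticeably after correction:
linear = np.array([0.0, 0.2, 1.0])
display = gamma_correct(linear)
```

Applying this only at visualization leaves the training-time values untouched, which is one way to reconcile the two appearances.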
Dear authors, I noticed that you did experiments on the DTU dataset, but I am wondering how to prepare the DTU data to fit the model. Could you please provide the preprocessed scan114 Buddha object dataset, or a script to prepare the DTU dataset? Thanks.
The data used in your work is in the .exr format. I think it is synthesized from the original NeRF_synthetic scenes and a light map, but I really need to know how this is done. If you can provide a method or some information, I would be very grateful.
Setting gamma=2.2 gives a dark albedo for PNG images
Hi, thanks for this amazing work. However, aside from the sample data, how can I get the full dataset of your project?
The rendering results look good, but the mean PSNR computed by evaluation/eval.py differs from the result reported in the paper. Specifically, I evaluated the kitty result with the same lighting as in training: the mean PSNR is about 22, while the PSNR in the paper is 36.45.
Here is the reason: in tonemap_img the gamma is 1.0, and calculate_psnr in eval.py (Line 396) only computes the MSE on the masked area instead of the whole picture. The line mse = np.mean((img1 - img2)**2) * (img2.shape[0] * img2.shape[1]) / mask.sum() at Line 400 should be modified to mse = np.mean((img1 - img2)**2)
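To illustrate the discrepancy, here is a sketch (not the repo's code) of PSNR computed with whole-image MSE versus the mask-renormalized MSE described above; the masked version gives a lower PSNR whenever the error is concentrated inside the mask:

```python
import numpy as np

def psnr(img1, img2, mask=None):
    # PSNR in dB for images in [0, 1]; illustrative sketch, not eval.py itself
    mse = np.mean((img1 - img2) ** 2)
    if mask is not None:
        # renormalize so the mean is taken over masked pixels only
        mse = mse * img1.shape[0] * img1.shape[1] / mask.sum()
    return -10.0 * np.log10(mse)

# Ground truth and prediction that differ only inside the object mask:
gt = np.zeros((4, 4))
pred = np.zeros((4, 4))
mask = np.zeros((4, 4), dtype=bool)
mask[:2, :2] = True
pred[mask] = 0.1  # constant error of 0.1 on the 4 masked pixels

full = psnr(gt, pred)          # averages the error over all 16 pixels
masked = psnr(gt, pred, mask)  # averages over the 4 masked pixels only
```

Since the background matches perfectly, the whole-image average dilutes the error and reports a higher PSNR, which matches the 22-vs-36 gap described above.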
I used your code on the DTU dataset; however, the generated meshes differ from the ground truth meshes. Do I need to change the default config or adjust the training parameters?
Hi and congratulations for your interesting work!
I would like to ask about the participation of the IDR rendering module in the optimization process. From reading the paper, I assumed that the material properties (including the diffuse albedo) are estimated by the MaterialEnvmap module you introduce in the paper.
However, in your code you utilize both your module and IDR's rendering module (both of which essentially estimate RGB values). In addition, the color values computed by IDR's rendering module and by the MaterialEnvmap module both participate in the loss calculation.
Do you train both modules (IDR rendering and MaterialEnvmap) for a specific reason, or just for comparison?
PS: I also can't seem to find where the EnvmapMaterial module's weights are initialized. Could you help?
Thanks for your time.
Hi, I was running experiments on the bear dataset, and I noticed a large, incorrectly swollen section of mesh on the back of the bear, as well as an overhanging section of volume on top of it. Is this expected? It is not shown in the supplementary material; is there some additional config, different from the one used for kitty, needed to train the bear dataset properly?
Hello zhangkai, thanks for your amazing work!
Thanks for releasing code for this awesome work! I get the output mesh (.obj/.ply) and a normal map (.png) from one perspective; can you provide some suggestions for converting these files into assets for Blender or UE?
Hi,
Would you mind sharing how you generated the masks, normal maps, and depth maps for your synthetic dataset, just as a reference? Did you also use background removal tools? Thanks!
Hi,
I am trying to use your method on our data, but the pose estimation module drawn from nerfplusplus does not seem to retrieve the poses and point cloud correctly (though the point cloud is not relevant to this method). I assume the issue is that the input frames are specular, which makes it hard for COLMAP to estimate the poses.
How did you circumvent this problem?
[attaching some frames]
Hi Kai,
Thanks for releasing code for this awesome work! We are trying to apply PhySG to NeRF images, which are PNG files instead of EXR files. I wonder, in such a case, how should we set the gamma? Could we simply set it to 2.2 for PNG/JPG images that are in sRGB space?
Thank you!
Best,
Wenzheng
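For what it's worth, treating sRGB PNGs as gamma 2.2 is a common approximation: the exact sRGB transfer function is piecewise, but it stays close to a pure 2.2 power law across the whole range (a generic sketch, not code from the repo):

```python
import numpy as np

def srgb_to_linear(c):
    # exact piecewise sRGB decoding for encoded values in [0, 1]
    return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

# Compare against the simple gamma-2.2 power law over the full range:
c = np.linspace(0.0, 1.0, 256)
max_err = np.abs(srgb_to_linear(c) - c ** 2.2).max()  # small everywhere
```

So gamma = 2.2 is usually an acceptable stand-in for sRGB-encoded inputs, with only a small deviation near black.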
Hi Kai, I really appreciate your excellent work!
I just have a question about the implementation of the SG rendering code, at this line:
final_lobes, final_lambdas, final_mus = lambda_trick(lgtSGLobes, lgtSGLambdas, lgtSGMus, warpBrdfSGLobes, warpBrdfSGLambdas, warpBrdfSGMus)
Here warpBrdfSGLobes is the reflection direction of the view, rather than the surface normal. But in the paper, Eq. (8) says that the Gaussian is aligned with the surface normal. Is this a little bug, or something I have misunderstood?
By the way, I am looking forward to your future works!
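For context, the lambda trick rewrites the product of two spherical Gaussians exactly as a single spherical Gaussian, which is presumably what lambda_trick computes. Here is a minimal numpy sketch of that identity (function and variable names are mine, not the repo's):

```python
import numpy as np

def sg(v, lobe, lam, mu):
    # spherical Gaussian: mu * exp(lam * (dot(lobe, v) - 1))
    return mu * np.exp(lam * (np.dot(lobe, v) - 1.0))

def lambda_trick(lobe1, lam1, mu1, lobe2, lam2, mu2):
    # The product of two SGs is itself an SG (an exact identity):
    # lam1*lobe1 + lam2*lobe2 defines the new lobe and sharpness.
    t = lam1 * np.asarray(lobe1) + lam2 * np.asarray(lobe2)
    lam3 = np.linalg.norm(t)
    lobe3 = t / lam3
    mu3 = mu1 * mu2 * np.exp(lam3 - lam1 - lam2)
    return lobe3, lam3, mu3

lobe1 = np.array([0.0, 0.0, 1.0])
lobe2 = np.array([1.0, 0.0, 1.0]) / np.sqrt(2.0)
lobe3, lam3, mu3 = lambda_trick(lobe1, 10.0, 2.0, lobe2, 5.0, 3.0)

# The merged SG reproduces the product at any direction v:
v = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)
```

Whether lobe2 is fed the reflection direction or the surface normal only changes which two SGs get multiplied; the trick itself is direction-agnostic.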
Thanks for the great work and releasing the code!
A quick question about the renderer: in line 175 of sg_rendering.py, why did you use
I appreciate your great work!
I am trying to run the repo on a 2080Ti/3090/A30 and hit the error "CUBLAS_STATUS_EXECUTION_FAILED when calling `cublasSgemm...". In case someone else meets the same problem, here is a record of my solution.
According to Google, one possible solution is to upgrade CUDA 10 to CUDA 11: upgrading the cuda toolkit in Anaconda from 10.2 to 11.* should solve the problem. However, due to the strict version pins in the .yml and a bug in Anaconda, the upgrade cannot be performed automatically by Anaconda. Just manually install CUDA 11, PyTorch, and the other dependencies from scratch (installing each one whenever you hit "No module named xxx").
Hi authors,
I have been trying to use Mitsuba to render objects given different environment maps. I am doing this to understand the values of the different PBR elements (roughness, specular, etc.) and the intrinsic and extrinsic parameters for IDR training. There are two challenges I am facing:
Can you provide the full data (at least for kitty) or the rendering code (to check how to set the camera parameters and other PBR elements)?
Hi @Kai-46,
Thank you for your excellent work!! However, I encountered a problem when trying to train on my own images. Actually, I rendered the images with the Blender renderer and kept the training set in the same format as your kitty set. The training process seems okay, except that it never produces glossy results throughout training. Do you have any idea why this would happen? Is there some difference between Mitsuba and Blender, or do the rendered images need post-processing?
Really hope to get your reply, thanks in advance!
Hello,
I am trying your example and I have the following error:
PhySG-master\code\model\ray_tracing.py", line 265, in rootfind
while work_mask.any() & i < self.n_rootfind_steps:
RuntimeError: result type Long can't be cast to the desired output type Bool
Can you help?
Best regards
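For anyone hitting this: the error likely comes from operator precedence. In Python, bitwise & binds tighter than <, so work_mask.any() & i < self.n_rootfind_steps parses as (work_mask.any() & i) < self.n_rootfind_steps, and with torch tensors the Bool-&-Long step raises the cast error. Replacing & with and (or adding parentheses) should fix it; a plain-Python sketch of the same pitfall:

```python
# Stand-ins for work_mask.any() and the loop counters:
work = True
i, n_steps = 6, 5

buggy = work & i < n_steps    # parses as (work & i) < n_steps -> (1 & 6) < 5 -> True
fixed = work and i < n_steps  # parses as work and (i < n_steps) -> False

# The intended loop condition would be:
# while work_mask.any() and i < self.n_rootfind_steps:
```

With plain ints the buggy form silently computes the wrong condition; with torch tensors it raises the cast error instead.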
Hi guys! I also met the problem that the diffuse albedo becomes dark when training with custom data. Here is my experience of when this can happen:
And here is my way to avoid this situation: initialize the specular albedo to a tiny value, for example 0.1.
Hope this can help!
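As a concrete sketch of that suggestion (the names here are hypothetical; adapt them to wherever the specular albedo is produced in the code): if the specular albedo comes out of a sigmoid, you can bias its raw value so the initial output is 0.1:

```python
import numpy as np

def inv_sigmoid(y):
    # inverse of the logistic sigmoid, for y in (0, 1)
    return np.log(y / (1.0 - y))

init_specular = 0.1                    # tiny initial specular albedo, as suggested
raw_bias = inv_sigmoid(init_specular)  # hypothetical bias for the output layer

# Sanity check: pushing raw_bias back through the sigmoid recovers 0.1
recovered = 1.0 / (1.0 + np.exp(-raw_bias))
```

Starting specular low lets the diffuse branch explain most of the radiance early in training, which is one plausible reason this avoids the dark-albedo failure mode.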