
gnerf's People

Contributors

quan-meng


gnerf's Issues

Experiments on DTU and blender dataset: blurry outputs, mode collapse

Thanks for sharing the implementation! This is very interesting work!

I'm trying to reproduce the results on DTU and have a few questions:

  1. What image size was used for the experiments? In config/dtu.yaml, the default image size is [500, 400]. However, in datasets.py (

    assert self.img_wh_original[1] * self.img_wh[0] == self.img_wh_original[0] * self.img_wh[1], \
    ), the size is required to be proportional to the original size, which is 1600x1200 for DTU, so [500, 400] fails the check. Should the size be [400, 300] instead? I used [400, 300] in the experiments below.

  2. I got pretty blurry synthesis results on DTU. For example, on scan63 after 30K iterations (screenshot omitted), and on scan4 after 30K iterations (screenshot omitted).

  3. What are the pose estimation errors (rotation and translation) on the DTU dataset?
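The assertion quoted in question 1 is a cross-multiplied aspect-ratio check. A minimal standalone sketch (assuming DTU's 1600x1200 originals, as stated above) shows why [500, 400] fails while [400, 300] passes:

```python
# Sketch of the aspect-ratio check in datasets.py, with the hypothetical
# helper name size_is_proportional (not a function in the repository).
def size_is_proportional(img_wh_original, img_wh):
    # Cross-multiplication avoids floating-point division:
    # H_orig * w == W_orig * h  <=>  w / h == W_orig / H_orig
    return img_wh_original[1] * img_wh[0] == img_wh_original[0] * img_wh[1]

print(size_is_proportional([1600, 1200], [500, 400]))  # False: 500/400 = 5/4, not 4/3
print(size_is_proportional([1600, 1200], [400, 300]))  # True:  400/300 = 4/3
```

So any [w, h] with w/h = 4/3 (e.g. [400, 300] or [800, 600]) satisfies the assert for DTU.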

Training result: divergence in phase ABAB on custom Blender data

I hit an error when running the code on my own Blender data, which is formatted like the nerf_synthetic data. All the results produced in phase ABAB (from 12k iterations) failed to converge and diverged, and this continued through phase B. Do you know why this is happening? I trained with python train.py ./config/blender.yaml --data_dir PATH/TO/DATASET, and the data has the same format as nerf_synthetic.
Thank you for the code.

(attachment: jay_gnerf_phase_ABAB)

GIF results on the hotdog & lego datasets are all white

Hello authors,
I recently read the GNeRF paper and tried to reproduce the results on the Blender hotdog and lego datasets, using the code released in this repository with the default settings. However, the output GIF is all white even after at least 30k training iterations.

I have also read the earlier issues here. The author said that the GAN part of training sometimes fails and leads to all-white results. Is it normal for GNeRF's GAN training to fail like this?

Output GIF is All NONE

Dear Authors,
Thanks for the amazing work.

I have a problem with the network's results.
By the end of training, the generator and discriminator appear to converge, but the resulting GIFs are empty: the RGB GIF is all white and the depth GIF is all black. I am quite confused about this.

The only change I made before training was reducing the batch size from 12 to 6. Could this change cause the error? :)

ps: I use the blender drums dataset for training.
one GPU 2080Ti
Training for almost 4 days

Thanks for your help.

WGAN loss or standard loss

Thanks for your great work; we all like it!

Do you have any advice on the loss function? Which do you prefer, the WGAN loss or the standard loss?
I see in the released code that the standard loss is the default. Have you tested the WGAN loss?

Thanks a lot!
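For reference, a minimal sketch of the two discriminator objectives being compared, written in NumPy over raw discriminator logits. This is not the repository's code (the repo would use PyTorch losses); it only spells out the two formulas the question contrasts:

```python
import numpy as np

def standard_gan_d_loss(real_logits, fake_logits):
    # Standard (non-saturating) GAN discriminator loss: binary cross-entropy
    # with logits, i.e. -log sigmoid(D(x)) - log(1 - sigmoid(D(G(z)))).
    # softplus(x) = log(1 + e^x), written in a numerically stable form.
    softplus = lambda x: np.maximum(x, 0) + np.log1p(np.exp(-np.abs(x)))
    return np.mean(softplus(-real_logits)) + np.mean(softplus(fake_logits))

def wgan_d_loss(real_logits, fake_logits):
    # Wasserstein critic loss: maximize D(x) - D(G(z)), so minimize the
    # negation. Note WGAN additionally requires a Lipschitz constraint
    # (weight clipping or a gradient penalty), which is not shown here.
    return np.mean(fake_logits) - np.mean(real_logits)
```

With uninformative logits of zero, the standard loss sits at 2·log 2 ≈ 1.386, a common sanity check that a GAN discriminator has not collapsed.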

Pose Distribution Prior

Dear Authors,

Thanks for the interesting work and for releasing the code.
I was wondering about the advice in the README on training with our own data.
Assuming I only have an image dataset, how can I 1) find a suitable pose prior distribution, and 2) train your model with it?

Thanks for your help in advance.
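As a concrete illustration of what such a prior can look like: for an "object at the origin" capture, one common choice is camera positions distributed uniformly on the upper hemisphere at a roughly known radius, looking at the origin. The sketch below is illustrative only (the function name, radius, and hemisphere assumption are not the repository's API):

```python
import numpy as np

def sample_pose(radius=4.0, rng=None):
    """Sample a hypothetical camera-to-world pose from a hemisphere prior."""
    rng = rng or np.random.default_rng()
    # A normalized Gaussian vector is uniform on the sphere; folding z into
    # the positive half makes it uniform on the upper hemisphere.
    v = rng.normal(size=3)
    v /= np.linalg.norm(v)
    v[2] = abs(v[2])
    eye = radius * v
    # Look-at rotation: the camera's forward axis points at the origin.
    forward = -eye / np.linalg.norm(eye)
    right = np.cross(np.array([0.0, 0.0, 1.0]), forward)
    right /= np.linalg.norm(right)
    up = np.cross(forward, right)
    c2w = np.eye(4)                      # 4x4 camera-to-world matrix
    c2w[:3, 0], c2w[:3, 1], c2w[:3, 2] = right, up, forward
    c2w[:3, 3] = eye
    return c2w
```

A narrower prior (e.g. a frontal cone for face-like data) uses the same look-at construction with a restricted sampling region; the key practical question is how tightly the capture geometry can be bounded in advance.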

Problem with non-square image sizes

Dear Authors,
Thanks for the amazing work.

I ran into a problem when the image height is not equal to the width: when computing PSNR in phase B, the rendered image is [w, h] but the real image is [h, w]. The same mismatch shows up in the TensorBoard 'rgb' images.
Detail:
similarity.py, in mse:
value = (image_pred - image_gt) ** 2

Thanks for your help.
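If the rendered image really comes back as (W, H, 3) while the ground truth is (H, W, 3), the element-wise subtraction above either fails to broadcast or silently compares mismatched pixels. A minimal defensive sketch of the metric (assuming only H and W are swapped and values in [0, 1]; this is a workaround, not the repository's similarity.py):

```python
import numpy as np

def mse(image_pred, image_gt):
    if image_pred.shape != image_gt.shape:
        # Assumption: only the first two axes (H and W) are swapped.
        image_pred = np.transpose(image_pred, (1, 0, 2))
    assert image_pred.shape == image_gt.shape, "shapes still disagree"
    return float(np.mean((image_pred - image_gt) ** 2))

def psnr(image_pred, image_gt, max_val=1.0):
    # PSNR = 10 * log10(max_val^2 / MSE)
    return float(10 * np.log10(max_val ** 2 / mse(image_pred, image_gt)))
```

The proper fix is upstream, of course: render and load images in one agreed [h, w] layout so the transpose is never needed.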

Code Release

Thanks for sharing such great work!

I'm looking forward to the code release. It would be great if the code were implemented in PyTorch.

Thanks!
