jax3d's Introduction

Jax3D


See the jax3d/projects/ folder, and mention these users in your issues:

  • @drebain for jax3d/projects/generative
  • @czq142857 for jax3d/projects/mobilenerf
  • @vorasaurus for jax3d/projects/nesf

This is not an official Google Product.

jax3d's People

Contributors

andsteing, chiamp, conchylicultor, duckworthd, hawkinsp, hbq1, ivyzx, lukegb, mattjj, qwlouse, rchen152, superbobry, taiya, yilei, zhiqincgoogle


jax3d's Issues

How to improve the performance further?

Hi, author.
I ran the code on my own real360 scene and the result looks like this:
[image: result with patchsize 17]
To improve quality, I changed the patchsize from 17 to 33 as the paper suggests, but the result was worse than expected:
[image: result with patchsize 33]
I also tried a patchsize of 9, with this result:
[image: result with patchsize 9]
I have no idea what went wrong. Looking forward to your reply. Thanks in advance.

[MobileNeRF] About the HTML of real360

I tried a custom real360 scene, and the render functions in the Python code render the images correctly in all stages. However, when using the HTML viewer I only get fragments of the mesh. How can I solve this problem? Thanks a lot!
[image: mesh fragments in the HTML viewer]

Deeplab v3 pretrained model

Hi,

Regarding NeSF paper.
Could you elaborate on how you trained the DeepLab-v3 model to obtain your results?
It would also be great if you could share the pretrained models for the three datasets.

Thanks,
Leo

about an introduction on usage

Hi, developers,

Thanks a lot for releasing the package. Would you also provide a Colab notebook covering basic usage, e.g. similar to the walk-through in the TensorFlow version of NeRF?

Thanks~

About the supplementary material in mobilenerf paper

Hi, MobileNeRF is really great work. While reading the paper I noticed that some details are in the supplementary material, but I cannot find it. Could you provide a link to the supplementary material?

thank you very much

Please provide suggestions to avoid program being killed

To whom it may concern,

Hello! I'm very interested in MobileNeRF, but I am not able to finish stage3.py even though I reduced the batch size to test_batch_size = 512*n_device.

Please take a look at the logs below. May I get some suggestions on how to avoid the issue? Thank you!

[GpuDevice(id=0, process_index=0)]
train
  images: (20, 1000, 1000, 3)
  c2w: (20, 4, 4)
  hwf: (3,)
test
  images: (20, 1000, 1000, 3)
  c2w: (20, 4, 4)
  hwf: (3,)
/root/anaconda3/envs/mobilenerf/lib/python3.9/site-packages/flax/core/scope.py:740: FutureWarning: jax.tree_leaves is deprecated, and will be removed in a future release. Use jax.tree_util.tree_leaves instead.
  abs_value_flat = jax.tree_leaves(abs_value)
/root/anaconda3/envs/mobilenerf/lib/python3.9/site-packages/flax/core/scope.py:741: FutureWarning: jax.tree_leaves is deprecated, and will be removed in a future release. Use jax.tree_util.tree_leaves instead.
  value_flat = jax.tree_leaves(value)
Killed

Mesh obtained with center cube

Hi,
I trained on custom data through all stages and obtained the meshes, but the object mesh sits inside the center cube, and I want only the object, not the cube. How do I extract the object alone?

Provide NeRF Eval Instructions

Update documentation to include NeRF eval command i.e. to populate NeRF sigma grids, required for training second phase semantic module (include instructions to install additional required packages, e.g. kubric).

Running MobileNeRF on non-GPU server

Hi, I wanted to ask whether MobileNeRF can be served from a machine without a GPU, i.e. training on a local GPU machine and then serving the result from a non-GPU server so viewers can view it in their browsers.
Thanks

NeSF dataset ground truth labels

Hi,

The ground-truth labels are not attached to the published dataset.
For each RGBA file there is a visualized segmentation file, but the colors are not consistent between the segmentation images.
For example, in toybox-5 scene 151 the colors of "chair" and "car" are wrong; this happens in every scene where the "table" label is missing.
Can you please publish the ground-truth labels you are using? ["1=airplane", "2=car", "3=chair", "4=sofa", "5=table"]

[images: 149_00131, 151_00010]

Testing Nesf

Hi,
How can I test the NeSF model? Could you share the command for testing?

TypeError: sub got incompatible shapes for broadcasting

Hi! I ran the code with the synthetic dataset successfully and the results look perfect. But with my own real360 dataset it fails with the following exception:

jax._src.traceback_util.UnfilteredStackTrace: TypeError: sub got incompatible shapes for broadcasting: (256, 3), (256, 4).

The stack trace below excludes JAX-internal frames.
The preceding is the original exception that occurred, unmodified.

Do you know how to fix it? Thanks in advance.
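The error usually means the target images have an alpha channel (RGBA, 4 channels) while the model predicts RGB (3 channels). A minimal NumPy sketch of the mismatch and one possible fix (the variable names are hypothetical, not from the MobileNeRF code):

```python
import numpy as np

# Hypothetical reproduction of "sub got incompatible shapes": RGB
# predictions minus RGBA targets cannot broadcast, (256, 3) vs (256, 4).
pred_rgb = np.zeros((256, 3), dtype=np.float32)
target_rgba = np.ones((256, 4), dtype=np.float32)

# One possible fix: drop the alpha channel before computing the residual.
target_rgb = target_rgba[..., :3]
residual = pred_rgb - target_rgb  # shapes now agree: (256, 3)
```

Whether dropping alpha is the right fix depends on how the real360 loader is supposed to use the fourth channel; compositing over a background is the other common option.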

Error during stage3: Unimplemented MHLO

Hi everyone,

I am using Windows 10 with a single GPU (Quadro RTX 8000), and I run the code after commenting out the 3 lines of Python code that require 8 GPUs.
I was able to complete the first two stages of training on the Chair dataset, though in my case it took significantly longer than expected (50+ hours). However, I was not able to train stage 3 successfully; this error keeps coming up:

" Attempting to fetch value instead of handling error UNIMPLEMENTED: Unimplemented MHLO -> HloOpcode: %113 = mhlo.round_nearest_even %112 : tensor<1048576x8xf32>"

Can anyone give me some hint to fix this? Thank you

`python': double free or corruption

Hi, I can run the code with the synthetic 360 and forward-facing datasets and generate the final MobileNeRF results, but with the real360 dataset provided by mip-nerf-360 the process crashes with `python': double free or corruption. I have tried the steps from tensorflow/tensorflow#6968 but can't fix it; I'm confused, please help me.
system os: centos 7.2
jax version: 0.3.8
jaxlib version: 0.3.8+cuda11.cudnn82
cuda version :11.2
cudnn version: 8.2

test result

The test-result HTML couldn't be opened; how can I view the result?

Some details about DeepLab

Could I know some details and sources about how DeepLab-v3 was trained on the datasets, such as the pretrained models, training configs, and training time? When I try to reproduce it I can't get results as good as those shown in the paper.

MultiNerf Result Samples

Hello! I'm trying to muster up GPU resources to do some full processing, but until that happens, I'm wondering if you have examples of the final OBJ/PNG products that you can share?

Thanks!

[MobileNerf] Integrating result to unity or omniverse

First, my thanks for releasing the MobileNerf code.

This is regarding the polygon rendering method in the paper. Does this mean the results can be directly integrated into tools like Unity or Omniverse? Traditionally, NeRF results are first converted to meshes, which can then be imported, but the quality of such meshes is often terrible.

GPU requirements OOM

Hi,
Just wanted to get some insight into why this model requires so many GPUs compared to other projects like MultiNeRF. Even with fp16 it is really compute-hungry. Asking purely out of curiosity; any leads are appreciated.

Problem about flax.optim in MobileNerf

When I try to train the MobileNeRF model, I hit a problem with flax. After following the guide to install all the libraries and running stage1.py, I get the error module 'flax' has no attribute 'optim' at line 1476, state = flax.optim.Adam(**adam_kwargs).create(model_vars). The flax I installed is version 0.6.0; an older version like 0.5.3 works fine. So maybe the requirements should be updated?
Thanks :)
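As a workaround until the requirements are updated, one option is to pin flax to a release that still ships flax.optim (0.5.3 is the version the report above found to work):

```shell
# flax.optim was removed in newer flax releases; pin the older version
# reported to work with stage1.py.
pip install "flax==0.5.3"
```

The longer-term fix is migrating the optimizer construction to optax, which is what flax itself recommends.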

data problem

Hi, when I downloaded your toybox-5 dataset from Google Cloud, I found that some pictures are missing, so some scenes have fewer than 301 images in total. The missing images follow no regular pattern, which makes this hard to handle. Could you suggest a solution? Thank you!

Question about a value in MobileNeRF

Hello, thanks for contributing such a great work.
I'm wondering whether it is possible to obtain the "T" values in compute_undc_intersection in the code. Here the "T" values correspond to the distances in points = origin + direction * T from the normal NeRF ray-sampling process. If it's possible, could you share some ideas?
Thanks again
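For reference, the "T" in question plays the role of the per-sample ray distance in the standard NeRF parameterization. A tiny NumPy sketch, purely illustrative (none of these names come from the MobileNeRF code):

```python
import numpy as np

# Illustrative ray sampling: "T" is the per-sample distance along the
# ray in points = origin + direction * T (names are hypothetical).
origin = np.array([0.0, 0.0, 0.0])
direction = np.array([0.0, 0.0, 1.0])      # assumed unit-length
t_values = np.linspace(2.0, 6.0, num=5)    # sample distances: 2, 3, 4, 5, 6

# Broadcast to one 3D point per sample distance.
points = origin + direction * t_values[:, None]  # shape (5, 3)
```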

real360 dataset excluding bkgd

Hi, author.
I've tested the code on the synthetic and real360 datasets, and the latter's results seem better than the former's.
Now I want to run on a real360 dataset with the background excluded. I tried filling the background with pure white/black, but both attempts failed. It seems the images need to be converted to 4 channels and the related code modified accordingly; that's what I am working on.
Any help would be appreciated.
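For the 4-channel conversion described above, the usual approach is to composite the RGBA foreground over a solid background using the alpha channel. A minimal NumPy sketch, assuming float images in [0, 1] (toy data, not the actual loader code):

```python
import numpy as np

# Toy RGBA image: transparent everywhere except a small opaque dark patch.
rgba = np.zeros((4, 4, 4), dtype=np.float32)
rgba[1:3, 1:3, :3] = 0.2    # foreground color
rgba[1:3, 1:3, 3] = 1.0     # foreground is fully opaque

# Alpha-composite over a solid white background to get the 3-channel
# input most NeRF loaders expect.
alpha = rgba[..., 3:4]
white = np.ones_like(rgba[..., :3])
rgb = rgba[..., :3] * alpha + white * (1.0 - alpha)  # shape (4, 4, 3)
```

Swapping `white` for zeros gives the black-background variant; if both look wrong in training, the loader may also need the alpha channel itself for masking the loss.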

Please provide trained models

Hi,

I would like to try your MobileNerf project on my desktop and mobile phone, but I don't have 8 V100 GPUs to train the models. Probably very few people do...

Can you provide, e.g. as a download on a Google Drive, the trained data that could be placed into the folder that has the HTML, so that people can at least try the demo scenes?

Trained model does not live up to expectations

Hi, author. Thanks so much for your releasing the code!
I tested it on an RTX 3090 and the results were excellent.
But on my own real dataset the trained model does not look as good.
[image: result on custom dataset]
Where does the problem lie? Should I increase training_iters, or is there another way to fix it?

[Mobile NeRF] Question: Training on a single GPU

Hello! First of all, congratulations on such an amazing paper, and thank you very much for making the code public. I have a question regarding the training requirements: is there a reason one shouldn't attempt to train with a single GPU? The README says 8 V100 GPUs are required to "successfully train the model". However, I tested with a single RTX 3090 and the results seem to be on par with the paper (i.e. the same mesh quality, and roughly the same training times mentioned in the repository).

[screenshot: single-GPU training results, 2022-08-08]

for checkpoint and scene split

Could I know how to get the pretrained models (the NeRF and the 3D U-Net)? And what are the scene-split details used to train the semantic part?

Disabled tests for projects/nesf

Refer to this run:
https://github.com/google-research/jax3d/runs/8045057467?check_suite_focus=true#step:10:874

flax.optim has been deprecated in favor of optax, which is making the tests fail.
Disabling the tests in .github/workflows/test.yml and opening this issue to track:

      run: |
        pytest -vv -n auto \
          --durations=10 \
          --ignore="jax3d/projects/nesf/nerfstatic/integration_test/import_test.py" \
          --ignore="jax3d/projects/nesf/nerfstatic/nerf/utils_test.py" \
          --ignore="jax3d/projects/nesf/nerfstatic/utils/train_utils_test.py"

Trained model for mobilenerf

Hello. Thanks for your great work.
Would you provide checkpoints for your trained model for mobilenerf?
Thanks

Will subjective results on datasets be published?

Hello, congratulations on your wonderful work.
I am going to cite your paper and compare my method to your work mobileNeRF subjectively.
So I am wondering whether you are planning to publish the results on Synthetic 360 and Forward Facing.
Thank you!

MobileNeRF Inference on server side GPU

Hi,

I'm able to run inference successfully with MobileNeRF, and I'm now trying to benchmark its performance on a few different devices. However, many of these are remote servers to which I only have terminal access. I've set up an SSH tunnel so I can reach the interface remotely, but the code then runs on my local machine's GPU, not on the GPU of the server I'm trying to benchmark. Is there any way I can do inference on the server side, either by running inference outside the browser or by running WebGL on the server instead of the client?

Thanks!


Massive difference between stage3 psnr and the resulting mesh

Hey there,

I was using MobileNeRF to use on my own dataset. It works well with the default settings with a good mesh that I can render using the ThreeJS viewer.

However I am facing a problem when trying to tweak the parameters. I wanted a lightweight mesh, so I tried setting
point_grid_size = 48. This reduces the PSNR from around 32 dB to the 28 dB area, which is fine.

However when I extract the mesh, it is not the 28db quality that was promised. While the pred_frames.pkl images look right, the rendered mesh itself is quite poor.

Here are the changes I made in stage3.py to extract the mesh:

  1. Change point_grid_size = 48
  2. Changed the texture size from 1024*2 to 1008*2. The 1008 comes from the constraint quad_size*layer_num == texture_size, so I chose the closest multiple of 48.

Any hints to what might be going wrong?
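For what it's worth, the 1008 above can be derived mechanically: given quad_size * layer_num == texture_size, the texture dimension must be a multiple of point_grid_size, and 1008 is the largest multiple of 48 not exceeding the default 1024. A quick check:

```python
# Largest multiple of point_grid_size that fits within the default
# texture dimension: floor-divide, then multiply back.
point_grid_size = 48
default_texture = 1024
texture_size = (default_texture // point_grid_size) * point_grid_size
print(texture_size)  # 1008
```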

Out of Memory when trying to train real360 scene

Hi all, I am running this on an RTX 8000 (48 GB GPU RAM) with 128 GB CPU RAM on Ubuntu Linux. I was able to complete the synthetic and forward-facing scenes successfully, but when I try to train a real360 scene a bunch of out-of-memory errors appear. Can anyone give me some tips to fix this? Thanks a lot!

2022-10-04 22:39:41.417477: W external/org_tensorflow/tensorflow/core/common_runtime/device/device_host_allocator.h:46] could not allocate pinned host memory of size: 17179869184
2022-10-04 22:39:42.742658: E external/org_tensorflow/tensorflow/stream_executor/cuda/cuda_driver.cc:796] failed to alloc 17179869184 bytes on host: CUDA_ERROR_OUT_OF_MEMORY: out of memory
2022-10-04 22:39:42.742753: W external/org_tensorflow/tensorflow/core/common_runtime/device/device_host_allocator.h:46] could not allocate pinned host memory of size: 17179869184
2022-10-04 22:39:54.170550: E external/org_tensorflow/tensorflow/stream_executor/cuda/cuda_driver.cc:796] failed to alloc 17179869184 bytes on host: CUDA_ERROR_OUT_OF_MEMORY: out of memory
2022-10-04 22:39:54.170644: W external/org_tensorflow/tensorflow/core/common_runtime/device/device_host_allocator.h:46] could not allocate pinned host memory of size: 17179869184
2022-10-04 22:39:55.645782: E external/org_tensorflow/tensorflow/stream_executor/cuda/cuda_driver.cc:796] failed to alloc 17179869184 bytes on host: CUDA_ERROR_OUT_OF_MEMORY: out of memory
2022-10-04 22:39:55.645855: W external/org_tensorflow/tensorflow/core/common_runtime/device/device_host_allocator.h:46] could not allocate pinned host memory of size: 17179869184
2022-10-04 22:39:55.645870: W external/org_tensorflow/tensorflow/core/common_runtime/bfc_allocator.cc:479] Allocator (xla_gpu_host_bfc) ran out of memory trying to allocate 720B (rounded to 768)requested by op
2022-10-04 22:39:55.645919: W external/org_tensorflow/tensorflow/core/common_runtime/bfc_allocator.cc:491]

pointnet performance++

Integrate comment from MS (likely to increase performance further):

  • on each device, compute accuracy (on the local sub-batch)
  • on each device (!), perform an all-reduce, i.e. every device exchanges the accuracy information with all other devices, such that the new accuracy variable has the same mean accuracy over the entire batch on every device
  • later, move all of these accuracy values to the host from all devices, and stack them together (i.e., you end up with the same accuracy value repeated num_device times)

For evaluation, you don't need the pmean on all devices. What you probably want is to just return the local accuracy of each sub-batch from each device, and then you can all_gather them on the host & take the mean for logging.
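The steps above can be simulated without jax.pmap to check the arithmetic (in actual JAX code the all-reduce would be jax.lax.pmean and the host-side gather jax.lax.all_gather; the sketch below just mimics the data flow with NumPy):

```python
import numpy as np

# One accuracy value per "device", computed on its local sub-batch.
local_acc = np.array([0.90, 0.80, 0.70, 0.60])
num_devices = len(local_acc)

# All-reduce (mean): afterwards every device holds the batch-mean accuracy.
reduced = np.full(num_devices, local_acc.mean())

# Moving the per-device values to the host and stacking them yields the
# same value repeated num_devices times, as described above.
stacked = np.stack([reduced[i] for i in range(num_devices)])

# For evaluation, gathering the raw local accuracies on the host and
# averaging there gives the identical number, without the pmean.
host_mean = local_acc.mean()
```

This makes the redundancy concrete: after the all-reduce, every entry of `stacked` is the same batch mean, which is why the eval path can skip the collective entirely.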

How to combine Instant-NGP and MobileNeRF?

Dear author,

Instant-NGP has the advantage of being very fast to train. Is it possible to convert trained Instant-NGP weights to MobileNeRF? If so, could you give me some directions so that I can try it myself?

Thanks so much !
