multinerf's Introduction

MultiNeRF: A Code Release for Mip-NeRF 360, Ref-NeRF, and RawNeRF

This is not an officially supported Google product.

This repository contains the code release for three CVPR 2022 papers: Mip-NeRF 360, Ref-NeRF, and RawNeRF. This codebase was written by integrating our internal implementations of Ref-NeRF and RawNeRF into our mip-NeRF 360 implementation. As such, this codebase should exactly reproduce the results shown in mip-NeRF 360, but may differ slightly when reproducing Ref-NeRF or RawNeRF results.

This implementation is written in JAX, and is a fork of mip-NeRF. This is research code, and should be treated accordingly.

Setup

# Clone the repo.
git clone https://github.com/google-research/multinerf.git
cd multinerf

# Make a conda environment.
conda create --name multinerf python=3.9
conda activate multinerf

# Prepare pip.
conda install pip
pip install --upgrade pip

# Install requirements.
pip install -r requirements.txt

# Manually install rmbrualla's `pycolmap` (don't use pip's! It's different).
git clone https://github.com/rmbrualla/pycolmap.git ./internal/pycolmap

# Confirm that all the unit tests pass.
./scripts/run_all_unit_tests.sh

You'll probably also need to update your JAX installation to support GPUs or TPUs.
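
For example, installing a CUDA-enabled JAX looked like this at the time of writing (the exact command depends on your CUDA/cuDNN versions; check the JAX installation guide for the current one):

# Assumes an NVIDIA GPU and a recent CUDA installation.
pip install --upgrade "jax[cuda]" -f https://storage.googleapis.com/jax-releases/jax_cuda_releases.html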

Running

Example scripts for training, evaluating, and rendering can be found in scripts/. You'll need to change the paths to point to wherever the datasets are located. Gin configuration files for our model and some ablations can be found in configs/. After evaluating on the test set of each scene in one of the datasets, you can use scripts/generate_tables.ipynb to produce error metrics across all scenes in the same format as the tables in the paper.

OOM errors

You may need to reduce the batch size (Config.batch_size) to avoid out-of-memory errors. If you do this but want to preserve quality, be sure to increase the number of training iterations and decrease the learning rate by the same factor by which you decrease the batch size.
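
For example, a hypothetical 4x batch-size reduction might use the bindings below, assuming the mip-NeRF 360 defaults of batch_size=16384, max_steps=250000, lr_init=0.002, and lr_final=0.00002 (treat the exact names and defaults as assumptions to verify in internal/configs.py and configs/360.gin):

--gin_bindings="Config.batch_size = 4096" \
--gin_bindings="Config.max_steps = 1000000" \
--gin_bindings="Config.lr_init = 0.0005" \
--gin_bindings="Config.lr_final = 0.000005"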

Using your own data

Summary: first, calculate poses. Second, train MultiNeRF. Third, render a result video from the trained NeRF model.

  1. Calculating poses (using COLMAP):
DATA_DIR=my_dataset_dir
bash scripts/local_colmap_and_resize.sh ${DATA_DIR}
  2. Training MultiNeRF:
python -m train \
  --gin_configs=configs/360.gin \
  --gin_bindings="Config.data_dir = '${DATA_DIR}'" \
  --gin_bindings="Config.checkpoint_dir = '${DATA_DIR}/checkpoints'" \
  --logtostderr
  3. Rendering MultiNeRF:
python -m render \
  --gin_configs=configs/360.gin \
  --gin_bindings="Config.data_dir = '${DATA_DIR}'" \
  --gin_bindings="Config.checkpoint_dir = '${DATA_DIR}/checkpoints'" \
  --gin_bindings="Config.render_dir = '${DATA_DIR}/render'" \
  --gin_bindings="Config.render_path = True" \
  --gin_bindings="Config.render_path_frames = 480" \
  --gin_bindings="Config.render_video_fps = 60" \
  --logtostderr

Your output video should now exist in the directory my_dataset_dir/render/.

See below for more detailed instructions on either using COLMAP to calculate poses or writing your own dataset loader (if you already have pose data from another source, like SLAM or RealityCapture).

Running COLMAP to get camera poses

In order to run MultiNeRF on your own captured images of a scene, you must first run COLMAP to calculate camera poses. You can do this using our provided script scripts/local_colmap_and_resize.sh. Just make a directory my_dataset_dir/ and copy your input images into a folder my_dataset_dir/images/, then run:

bash scripts/local_colmap_and_resize.sh my_dataset_dir

This will run COLMAP and create 2x, 4x, and 8x downsampled versions of your images. These lower resolution images can be used in NeRF by setting, e.g., the Config.factor = 4 gin flag.
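
For example, to train on the 4x downsampled images, add this binding to the train command shown above:

--gin_bindings="Config.factor = 4"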

By default, local_colmap_and_resize.sh uses the OPENCV camera model, which is a perspective pinhole camera with k1, k2 radial and t1, t2 tangential distortion coefficients. To switch to another COLMAP camera model, for example OPENCV_FISHEYE, you can run:

bash scripts/local_colmap_and_resize.sh my_dataset_dir OPENCV_FISHEYE

If you have a very large capture of more than around 500 images, we recommend switching from the exhaustive matcher to the vocabulary tree matcher in COLMAP (see the script for a commented-out example).

Our script is simply a thin wrapper around COLMAP; if you have run COLMAP yourself, all you need to do to load your scene in NeRF is ensure it has the following format:

my_dataset_dir/images/    <--- all input images
my_dataset_dir/sparse/0/  <--- COLMAP sparse reconstruction files (cameras, images, points)

Writing a custom dataloader

If you already have poses for your own data, you may prefer to write your own custom dataloader.

MultiNeRF includes a variety of dataloaders, all of which inherit from the base Dataset class.

The job of this class is to load all image and pose information from disk, then create batches of ray and color data for training or rendering a NeRF model.

Any inherited subclass is responsible for loading images and camera poses from disk by implementing the _load_renderings method (which is marked as abstract by the decorator @abc.abstractmethod). This data is then used to generate train and test batches of ray + color data for feeding through the NeRF model. The ray parameters are calculated in _make_ray_batch.

Existing data loaders

To work from an example, you can see how this method is overridden in the different dataloaders we have already implemented:

The main data loader we rely on is LLFF (named for historical reasons), which is the loader for a dataset that has been posed by COLMAP.

Making your own loader by implementing _load_renderings

To make a new dataset, make a class inheriting from Dataset and override the _load_renderings method:

class MyNewDataset(Dataset):
  def _load_renderings(self, config):
    ...

In this function, you must set the following public attributes:

  • images
  • camtoworlds
  • pixtocams
  • height, width

Many of our dataset loaders also set other useful attributes, but these are the critical ones for generating rays. You can see how they are used (along with a batch of pixel coordinates) to create rays in camera_utils.pixels_to_rays.
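
For concreteness, here is a minimal sketch of such a loader. The images/ folder layout, the poses.npz file, and its keys are illustrative assumptions (not part of the codebase); camera_utils.get_pixtocam is described below:

import glob

import numpy as np
from PIL import Image

from internal import camera_utils
from internal.datasets import Dataset

class MyNewDataset(Dataset):

  def _load_renderings(self, config):
    # Load same-resolution RGB images from a hypothetical images/ folder.
    files = sorted(glob.glob(f'{config.data_dir}/images/*.png'))
    self.images = np.stack(
        [np.asarray(Image.open(f).convert('RGB')) / 255. for f in files])
    self.height, self.width = self.images.shape[1:3]
    # Load [N, 3, 4] camera-to-world poses (OpenGL convention, see below)
    # from a hypothetical poses.npz with keys 'camtoworlds' and 'focal'.
    poses = np.load(f'{config.data_dir}/poses.npz')
    self.camtoworlds = poses['camtoworlds']
    # One shared inverse intrinsic matrix (centered principal point assumed).
    self.pixtocams = camera_utils.get_pixtocam(
        float(poses['focal']), self.width, self.height)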

Images

images = [N, height, width, 3] numpy array of RGB images. Currently we require all images to have the same resolution.

Extrinsic camera poses

camtoworlds = [N, 3, 4] numpy array of extrinsic pose matrices. camtoworlds[i] should be in camera-to-world format, such that we can run

pose = camtoworlds[i]
x_world = pose[:3, :3] @ x_camera + pose[:3, 3:4]

to convert a 3D camera space point x_camera into a world space point x_world.

These matrices must be stored in the OpenGL coordinate system convention for camera rotation: x-axis to the right, y-axis upward, and z-axis backward along the camera's focal axis.

The most common conventions are

  • [right, up, backwards]: OpenGL, NeRF, most graphics code.
  • [right, down, forwards]: OpenCV, COLMAP, most computer vision code.

Fortunately, switching from OpenCV/COLMAP to NeRF is simple: you just need to right-multiply the OpenCV pose matrices by np.diag([1, -1, -1, 1]), which flips the sign of the y-axis (from down to up) and z-axis (from forwards to backwards):

camtoworlds_opengl = camtoworlds_opencv @ np.diag([1, -1, -1, 1])

You may also want to scale your camera pose translations such that they all lie within the [-1, 1]^3 cube for best performance with the default mipnerf360 config files.

We provide a useful helper function camera_utils.transform_poses_pca that computes a translation/rotation/scaling transform for the input poses that aligns the world space x-y plane with the ground (based on PCA) and scales the scene so that all input pose positions lie within [-1, 1]^3. (This function is applied by default when loading mip-NeRF 360 scenes with the LLFF data loader.) For a scene where this transformation has been applied, camera_utils.generate_ellipse_path can be used to generate a nice elliptical camera path for rendering videos.
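
For example (a sketch; treat the exact signatures as assumptions to verify against camera_utils):

# Recenter, rotate, and rescale poses so positions lie within [-1, 1]^3.
poses, transform = camera_utils.transform_poses_pca(poses)
# Then generate an elliptical camera path for rendering video.
render_poses = camera_utils.generate_ellipse_path(poses, n_frames=480)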

Intrinsic camera matrices

pixtocams = [N, 3, 3] numpy array of inverse intrinsic matrices, OR [3, 3] numpy array of a single shared inverse intrinsic matrix. These should be in OpenCV format, e.g.

camtopix = np.array([
  [focal,     0,  width/2],
  [    0, focal, height/2],
  [    0,     0,        1],
])
pixtocam = np.linalg.inv(camtopix)

Given a focal length and image size (and assuming a centered principal point), this matrix can be created using camera_utils.get_pixtocam.

Alternatively, it can be created by using camera_utils.intrinsic_matrix and inverting the resulting matrix.
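
In other words, assuming a centered principal point, the two constructions below should be equivalent (signatures inferred from the descriptions above; verify against camera_utils):

# Option 1: the direct helper.
pixtocam = camera_utils.get_pixtocam(focal, width, height)

# Option 2: build the intrinsic matrix, then invert it.
pixtocam = np.linalg.inv(
    camera_utils.intrinsic_matrix(focal, focal, width / 2., height / 2.))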

Resolution

height = int, height of images.

width = int, width of images.

Distortion parameters (optional)

distortion_params = dict, camera lens distortion model parameters. This dictionary must map from strings -> floats, and the allowed keys are ['k1', 'k2', 'k3', 'k4', 'p1', 'p2'] (up to four radial coefficients and up to two tangential coefficients). By default, this is set to the empty dictionary {}, in which case undistortion is not run.
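
For example, a hypothetical model with two radial coefficients would be passed as:

distortion_params = {'k1': -0.28, 'k2': 0.07}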

Details of the inner workings of Dataset

The public interface mimics the behavior of a standard machine learning pipeline dataset provider that can provide infinite batches of data to the training/testing pipelines without exposing any details of how the batches are loaded/created or how this is parallelized. Therefore, the initializer runs all setup, including data loading from disk using _load_renderings, and begins the thread using its parent start() method. After the initializer returns, the caller can request batches of data straight away.

The internal self._queue is initialized as queue.Queue(3), so the infinite loop in run() will block on the call self._queue.put(self._next_fn()) once there are 3 elements. The main thread training job runs in a loop that pops 1 element at a time off the front of the queue. The Dataset thread's run() loop will populate the queue with 3 elements, then wait until a batch has been removed and push one more onto the end.

This repeats indefinitely until the main thread's training loop completes (typically hundreds of thousands of iterations), then the main thread will exit and the Dataset thread will automatically be killed since it is a daemon.
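
The pattern looks roughly like the sketch below (a simplified, hypothetical stand-in, not the actual Dataset class in internal/datasets.py):

import queue
import threading

class PrefetchingDataset(threading.Thread):
  # Hypothetical stand-in illustrating the producer/consumer pattern above.

  def __init__(self):
    super().__init__()
    self.daemon = True            # thread dies automatically with the main thread
    self._queue = queue.Queue(3)  # holds at most 3 prefetched batches
    self.start()                  # begin producing batches immediately

  def _next_fn(self):
    return 'batch'  # stand-in for building a real batch of rays + colors

  def run(self):
    while True:
      # Blocks once 3 batches are queued; resumes when the consumer pops one.
      self._queue.put(self._next_fn())

  def __next__(self):
    return self._queue.get()  # the training loop pops one batch at a time

dataset = PrefetchingDataset()
batch = next(dataset)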

Citation

If you use this software package, please cite whichever constituent paper(s) you build upon, or feel free to cite this entire codebase as:

@misc{multinerf2022,
      title={{MultiNeRF}: {A} {Code} {Release} for {Mip-NeRF} 360, {Ref-NeRF}, and {RawNeRF}},
      author={Ben Mildenhall and Dor Verbin and Pratul P. Srinivasan and Peter Hedman and Ricardo Martin-Brualla and Jonathan T. Barron},
      year={2022},
      url={https://github.com/google-research/multinerf},
}

multinerf's People

Contributors

bmild, jonbarron, nmarticorena, onpix, sarasra, sxyu, viktor286, yzslab


multinerf's Issues

Using the Ref-NeRF codebase to reproduce the table in the paper.

Firstly, thanks to the authors for this impressive work!
I am currently trying to reproduce Tables S6-S9 in the original Ref-NeRF paper (the Shiny Blender dataset), using the Shiny Blender config in the repository.
However, when testing on the 'ball' scene, the results are not as good as the paper shows (specifically, the paper reports a PSNR of ~47, but in my case, after 250,000 iterations, the PSNR barely reaches 39 in training, and the rendered images look far from the ground truth). I'm really confused by these results.
[screenshots]

About training time

Hi there, I downloaded the code and installed the dependencies. I'm using the 360_v2/room dataset for training, and I found the training time is extremely long.
I trained using the original train_360 shell script in the code, changing only the data_dir and the result dir.
I recall the paper saying Mip-NeRF 360 should be about 2x slower than the original NeRF or mip-NeRF. I previously trained mip-nerf_pl, the PyTorch Lightning version, and it took almost 20-30 hrs on a single 3090 GPU (24 GB). But with this Mip-NeRF 360 code, training on 4 3090 GPUs (24 GB) takes 3-4 mins per 100 steps; for 250k steps that is over 120 hrs, around a week. I wonder why the training speed is like that? I believe the JAX version should be faster than the PyTorch version.

How to use multi-GPU training

Hi, I'm trying to train MipNeRF360 with 4 GPUs. Should I keep the batch_size untouched or multiply it by 4? Are there any other changes I should make to support multi-GPU training?

Thanks!

Using GPU instead of TPU

When I run train.py, I get these warnings:

I0808 20:59:00.842178 139857030612800 xla_bridge.py:328] Unable to initialize backend 'tpu_driver': NOT_FOUND: Unable to find driver in registry given worker: 
I0808 20:59:00.963761 139857030612800 xla_bridge.py:328] Unable to initialize backend 'rocm': NOT_FOUND: Could not find registered platform with name: "rocm". Available platform names are: CUDA Host Interpreter
I0808 20:59:00.964296 139857030612800 xla_bridge.py:328] Unable to initialize backend 'tpu': module 'jaxlib.xla_extension' has no attribute 'get_tpu_client'

Is there a config setting I need to change somewhere to tell jax that I want to use a gpu, and not a tpu? Or will it fallback to gpu automatically, and I can disregard these warnings?

Hello, I have two questions

1. I downloaded the dataset from the paper's home page and extracted it as shown below.
But there doesn't seem to be a training .sh for this dataset in the scripts folder.
[screenshot]

2. The training doesn't seem to use the GPU. The following information is displayed:
[screenshot]
Is it a JAX version issue?

All clues welcome!

Hi - trying to run either the 360 or raw scripts (with the paths suitably edited) leaves me in an endless loop, as below. I take the JAX warnings not to be errors (I get the same running with CPU or GPU JAX), but the code then drops into an endless cycle (xxx is edited in to replace a real path on my machine). I have no guess as to how to fix this! What might I be doing wrong, please?

bash scripts/eval_raw_mjp.sh

I0915 15:39:12.619821 140644963370176 xla_bridge.py:350] Unable to initialize backend 'tpu_driver': NOT_FOUND: Unable to find driver in registry given worker:
I0915 15:39:12.695000 140644963370176 xla_bridge.py:350] Unable to initialize backend 'rocm': NOT_FOUND: Could not find registered platform with name: "rocm". Available platform names are: Host Interpreter CUDA
I0915 15:39:12.695568 140644963370176 xla_bridge.py:350] Unable to initialize backend 'tpu': module 'jaxlib.xla_extension' has no attribute 'get_tpu_client'
I0915 15:40:46.779565 140644963370176 checkpoints.py:466] Found no checkpoint files in xxx/rawnerf/nerf_results/raw/candle with prefix checkpoint_
Checkpoint step 0 <= last step 0, sleeping.
I0915 15:40:56.790609 140644963370176 checkpoints.py:466] Found no checkpoint files in xxx/rawnerf/nerf_results/raw/candle with prefix checkpoint_
Checkpoint step 0 <= last step 0, sleeping.
...and so on ad infinitum....

Recommended hardware?

As expected, training MultiNeRF on my laptop (specs below) takes ages. After an hour, I still hadn't made it past the first checkpoint with the default configuration. I'd like to train on more capable hardware, like some beefy AWS instance. What do you recommend?

macOS Monterey
MacBook Pro 2018
Processor: 2.7 GHz Quad-Core Intel Core i7
Memory: 16 GB 2133 MHz LPDDR3
Graphics: Intel Iris Plus Graphics 655 1536 MB

Key error in at fn_inv = inv_mapping[fn] in construct_ray_warps function

I'm trying to run this codebase with the provided 360_v2 dataset. I am using one of the scenes from this data and trying to train the model using the train_360.sh script.
What might be the issue here?

A slightly more detailed error log:
File "/home/ubuntu/work/video_to_3d/multinerf/train.py", line 67, in main setup = train_utils.setup_model(config, key, dataset=dataset) File "/home/ubuntu/work/video_to_3d/multinerf/internal/train_utils.py", line 403, in setup_model model, variables = models.construct_model(rng, dummy_rays, config) File "/home/ubuntu/work/video_to_3d/multinerf/internal/models.py", line 331, in construct_model init_variables = model.init( File "/home/ubuntu/anaconda3/envs/multinerf/lib/python3.9/contextlib.py", line 79, in inner return func(*args, **kwds) File "/home/ubuntu/work/video_to_3d/multinerf/internal/models.py", line 124, in __call__ _, s_to_t = coord.construct_ray_warps(self.raydist_fn, rays.near, rays.far) File "/home/ubuntu/work/video_to_3d/multinerf/internal/coord.py", line 94, in construct_ray_warps fn_inv = inv_mapping[fn] KeyError: <function reciprocal at 0x7fd224ed4550>

circle blur on my own datasets.

Hi there, I tested the code on your datasets and it looks fine, but when I test on my own data, there is always a circular blur at the top of the output images.

The data was shot on an iPhone; I extracted the images with ffmpeg and used scripts/local_colmap_and_resize.sh to extract the poses.

[screenshot]

RefNeRF Real Dataset

Hi,

Thank you all for the great work and the release of the code. I want to try Ref-NeRF on real datasets, starting by reproducing the results on the scenes from the paper. I noticed that, first, the only config file for Ref-NeRF is for the Shiny Blender dataset, and second, that the scene files for the real scenes from the Ref-NeRF website don't match the structure of the other scenes usually created for NeRF (transforms.json etc. are missing).

Is there an easy way to run the real scenes and do I need special config parameters to reproduce the results of the paper?

Best,
Georgios Kopanas

How do you edit diffuse color and roughness in Ref-NeRF?

To whom it may concern,

Hello! I'm very interested in the Scene Editing section of Ref-NeRF. It says people can manipulate the k values used in the IDE to edit objects' surfaces. I checked the render_image() function and found that render_eval_pfn, state.params, and the ray placeholder are fed directly into the functools call. I don't know how to change any of the trained weights here in order to edit the diffuse color or roughness. Would you please explain more about this? Thank you!

generate_tables.ipynb is broken?

Thank you for your great work !

I noticed that scripts/generate_tables.ipynb doesn't render correctly in the .ipynb viewer on GitHub (like the figure I attached).
There seems to be something wrong with the code. Sorry if I misunderstood. Thanks!

[screenshot]

To reproduce the video shown in demo

Hi, thanks to the authors for the wonderful work.
It would be really helpful if the sequence of steps to reproduce the demo videos (without retraining) could be documented.

Thanks.

TensorFlow 2.10 causes trouble!

Hi - many thanks for releasing this code! TensorFlow 2.10 is now (since about September 1st, 2022) the default version for a pip install, but running in a CUDA 11.2.2 / cuDNN 8.1 environment you'll see error messages characterized by "Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered". The solution in the context of this code is to downgrade to TF 2.9.2 - I suggest you change the requirements.txt file to pin that specific version.

DNN library is not found

[screenshot]
Python version: 3.9.12
CUDA version: 11.3.109
cuDNN version: 8.2.4
jax version: 0.3.15
jaxlib version: 0.3.15+cuda11.cudnn82
GPU: RTX 3070

[MipNeRF 360] Comparison to MipNeRF and NeRF

Hi, thanks a lot for this project!

I have a dumb question on the comparisons to MipNeRF and NeRF.

[screenshot]

Does this sentence mean that the inputs of (Mip)NeRF are only the positional encoding, without the original xyz coordinates?

Thanks in advance!

Question about GPU memory usage

Hello! Thank you for releasing this repo.
I am trying to run it on the LLFF trex dataset with downsample factors of 4 and 8, but I am getting GPU OOM issues.
I am not a JAX user, but I tried XLA_PYTHON_CLIENT_ALLOCATOR=platform and XLA_PYTHON_CLIENT_PREALLOCATE=false, and the OOM still happens.
On a 15 GB NVIDIA GPU, I reduced the network size parameters by a factor of 4 (sample counts and net widths) and I am still getting OOM.

I understand in the paper you used a TPU v2-32, which has 256 GiB of memory, but is this huge memory load intended (or am I doing something wrong)?
Also, how much TPU memory was actually used for the experiments? (I need a rough gauge of the required memory.)

how to use gpu during training

Hi, I'd like to know if it is possible to run training on the GPU. I've launched training and it's really slow; checking the GPU usage, I've noticed that no GPU is used. Is there a way to use the GPU?

NaN weights at the beginning of training

Hi, thanks for your great work. I noticed that the weights become NaN after the first sampling; however, the training pipeline does not break and the PSNR keeps growing. I'm wondering why this happens and how you deal with NaN weights?
[screenshot]

UnicodeDecodeError occurred when training the Tanks and Temples dataset processed by NeRF++

Hi,
I am trying to train the Tanks and Temples dataset with this command:

python train.py \
    --gin_configs=configs/tat.gin \
    --gin_bindings="Config.data_dir = '/mnt/x/dataset/nerfplusplus/tanks_and_temples/tat_intermediate_Playground'" \
    --gin_bindings="Config.checkpoint_dir = '/mnt/x/NeRF-Data/multinerf_results/checkpoints/tanks_and_temples/tat_intermediate_Playground'" \
    --gin_bindings="Config.batch_size = 4096" \
    --logtostderr

But it reports UnicodeDecodeError:

~/src/multinerf$ bash scripts/train_tat.sh
I0805 14:18:43.132598 140224505852096 xla_bridge.py:328] Unable to initialize backend 'tpu_driver': NOT_FOUND: Unable to find driver in registry given worker: 
I0805 14:18:43.320095 140224505852096 xla_bridge.py:328] Unable to initialize backend 'rocm': NOT_FOUND: Could not find registered platform with name: "rocm". Available platform names are: Interpreter CUDA Host
I0805 14:18:43.320447 140224505852096 xla_bridge.py:328] Unable to initialize backend 'tpu': module 'jaxlib.xla_extension' has no attribute 'get_tpu_client'
/home/zhensheng/anaconda3/envs/multinerf/lib/python3.9/site-packages/jax/_src/lib/xla_bridge.py:515: UserWarning: jax.host_id has been renamed to jax.process_index. This alias will eventually be removed; please update your code.
  warnings.warn(
Traceback (most recent call last):
  File "/home/zhensheng/anaconda3/envs/multinerf/lib/python3.9/runpy.py", line 197, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/home/zhensheng/anaconda3/envs/multinerf/lib/python3.9/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/home/zhensheng/src/multinerf/train.py", line 288, in <module>
    app.run(main)
  File "/home/zhensheng/anaconda3/envs/multinerf/lib/python3.9/site-packages/absl/app.py", line 308, in run
    _run_main(main, args)
  File "/home/zhensheng/anaconda3/envs/multinerf/lib/python3.9/site-packages/absl/app.py", line 254, in _run_main
    sys.exit(main(argv))
  File "/home/zhensheng/src/multinerf/train.py", line 55, in main
    dataset = datasets.load_dataset('train', config.data_dir, config)
  File "/home/zhensheng/src/multinerf/internal/datasets.py", line 52, in load_dataset
    return dataset_dict[config.dataset_loader](split, train_dir, config)
  File "/home/zhensheng/src/multinerf/internal/datasets.py", line 258, in __init__
    self._load_renderings(config)
  File "/home/zhensheng/src/multinerf/internal/datasets.py", line 710, in _load_renderings
    images = load_files('rgb', lambda f: np.array(Image.open(f))) / 255.
  File "/home/zhensheng/src/multinerf/internal/datasets.py", line 697, in load_files
    mats = np.array([load_fn(utils.open_file(f)) for f in files])
  File "/home/zhensheng/src/multinerf/internal/datasets.py", line 697, in <listcomp>
    mats = np.array([load_fn(utils.open_file(f)) for f in files])
  File "/home/zhensheng/src/multinerf/internal/datasets.py", line 710, in <lambda>
    images = load_files('rgb', lambda f: np.array(Image.open(f))) / 255.
  File "/home/zhensheng/anaconda3/envs/multinerf/lib/python3.9/site-packages/PIL/Image.py", line 3101, in open
    prefix = fp.read(16)
  File "/home/zhensheng/anaconda3/envs/multinerf/lib/python3.9/codecs.py", line 322, in decode
    (result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x89 in position 0: invalid start byte

[screenshots]

Do you know how to fix it?
Thanks.

Release of blender files

Hi.

Congrats on the really nice work! I have been running some experiments with Ref-NeRF, and I was wondering if you plan to release the .blend files for the ref_shiny dataset. I would like to modify the material properties of some of the models and see how Ref-NeRF behaves.

Thank you in advance.

How to train your data?

Sorry if this is in the documentation. I have a set of JPGs and ran bash scripts/local_colmap_and_resize.sh my_dataset_dir, but from here I'm stumped as to what to do next.

Also, does this train on 360° equirectangular images or video?

How to extract 3D model

Hello, I want to know how to extract a 3D model of the generated scene with UV-mapped textures. Thank you!

No GPU/TPU found

When I run all the unit tests, I get the following error:
[screenshot]

I am working with a Quadro RTX 8000 and CUDA 11.0.

struct error: unpack requires a buffer of 4 bytes

Hi, thank you so much for sharing this amazing work to everyone!
I am currently testing whether MultiNeRF can run on a Windows machine, and right now with this command:

python -m train --gin_configs=configs\360.gin --gin_bindings="Config.data_dir = '%DATA_DIR%'" --gin_bindings="Config.checkpoint_dir = '%DATA_DIR%\checkpoints'" --logtostderr

I would run into this error:

I0822 23:45:01.236814 21060 xla_bridge.py:160] Remote TPU is not linked into jax; skipping remote TPU.
I0822 23:45:01.236814 21060 xla_bridge.py:333] Unable to initialize backend 'tpu_driver': Could not initialize backend 'tpu_driver'
I0822 23:45:01.369816 21060 xla_bridge.py:333] Unable to initialize backend 'rocm': NOT_FOUND: Could not find registered platform with name: "rocm". Available platform names are: CUDA Interpreter Host
I0822 23:45:01.374814 21060 xla_bridge.py:333] Unable to initialize backend 'tpu': module 'jaxlib.xla_extension' has no attribute 'get_tpu_client'
C:\Users\user\anaconda3\envs\multinerf\lib\site-packages\jax\_src\lib\xla_bridge.py:506: UserWarning: jax.host_id has been renamed to jax.process_index. This alias will eventually be removed; please update your code.
  warnings.warn(
Traceback (most recent call last):
  File "C:\Users\user\anaconda3\envs\multinerf\lib\runpy.py", line 197, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "C:\Users\user\anaconda3\envs\multinerf\lib\runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "F:\MultiNeRF\multinerf\train.py", line 288, in <module>
    app.run(main)
  File "C:\Users\user\anaconda3\envs\multinerf\lib\site-packages\absl\app.py", line 308, in run
    _run_main(main, args)
  File "C:\Users\user\anaconda3\envs\multinerf\lib\site-packages\absl\app.py", line 254, in _run_main
    sys.exit(main(argv))
  File "F:\MultiNeRF\multinerf\train.py", line 55, in main
    dataset = datasets.load_dataset('train', config.data_dir, config)
  File "F:\MultiNeRF\multinerf\internal\datasets.py", line 52, in load_dataset
    return dataset_dict[config.dataset_loader](split, train_dir, config)
  File "F:\MultiNeRF\multinerf\internal\datasets.py", line 295, in __init__
    self._load_renderings(config)
  File "F:\MultiNeRF\multinerf\internal\datasets.py", line 584, in _load_renderings
    pose_data = NeRFSceneManager(colmap_dir).process()
  File "F:\MultiNeRF\multinerf\internal\datasets.py", line 77, in process
    self.load_cameras()
  File "F:\MultiNeRF\multinerf\internal/pycolmap/pycolmap\scene_manager.py", line 90, in load_cameras
    self._load_cameras_bin(input_file)
  File "F:\MultiNeRF\multinerf\internal/pycolmap/pycolmap\scene_manager.py", line 102, in _load_cameras_bin
    num_cameras = struct.unpack('L', f.read(8))[0]
struct.error: unpack requires a buffer of 4 bytes

Please let me know if you know any solutions for this error! Any help would be appreciated.

Currently my jax is 0.13.4 and my jaxlib is 0.13.4 too.
I'm running on CUDA 11.1 with Python 3.9.

Tests & training fail on Google TPU VM

TPU type: v3-8
TPU software version: tpu-vm-tf-2.9.1

Installed pip packages (`pip freeze` output):
absl-py==1.2.0
asttokens==2.0.8
astunparse==1.6.3
backcall==0.2.0
cachetools==5.2.0
certifi @ file:///opt/conda/conda-bld/certifi_1655968806487/work/certifi
charset-normalizer==2.1.1
chex==0.1.4
colorama==0.4.5
commonmark==0.9.1
cycler==0.11.0
decorator==5.1.1
dm-pix==0.3.3
dm-tree==0.1.7
etils==0.7.1
executing==1.0.0
flatbuffers==1.12
flax==0.6.0
fonttools==4.37.1
gast==0.4.0
gin-config==0.5.0
google-auth==2.11.0
google-auth-oauthlib==0.4.6
google-pasta==0.2.0
grpcio==1.48.1
h5py==3.7.0
idna==3.3
importlib-metadata==4.12.0
importlib-resources==5.9.0
ipython==8.5.0
jax==0.3.17
jaxlib==0.3.15
jedi==0.18.1
keras==2.9.0
Keras-Preprocessing==1.1.2
kiwisolver==1.4.4
libclang==14.0.6
Markdown==3.4.1
MarkupSafe==2.1.1
matplotlib==3.5.3
matplotlib-inline==0.1.6
mediapy==1.1.0
msgpack==1.0.4
numpy==1.23.3
oauthlib==3.2.1
opencv-python==4.6.0.66
opt-einsum==3.3.0
optax==0.1.3
packaging==21.3
parso==0.8.3
pexpect==4.8.0
pickleshare==0.7.5
Pillow==9.2.0
prompt-toolkit==3.0.31
protobuf==3.19.4
ptyprocess==0.7.0
pure-eval==0.2.2
pyasn1==0.4.8
pyasn1-modules==0.2.8
Pygments==2.13.0
pyparsing==3.0.9
python-dateutil==2.8.2
PyYAML==6.0
rawpy==0.17.2
requests==2.28.1
requests-oauthlib==1.3.1
rich==11.2.0
rsa==4.9
scipy==1.9.1
six==1.16.0
stack-data==0.5.0
tensorboard==2.9.1
tensorboard-data-server==0.6.1
tensorboard-plugin-wit==1.8.1
tensorflow==2.9.1
tensorflow-estimator==2.9.0
tensorflow-io-gcs-filesystem==0.27.0
termcolor==2.0.0
toolz==0.12.0
traitlets==5.3.0
typing_extensions==4.3.0
urllib3==1.26.12
wcwidth==0.2.5
Werkzeug==2.2.2
wrapt==1.14.1
zipp==3.8.1

Note the tensorflow version matches the TPU software version (2.9.1).

The test failures:

2022-09-11 22:45:00.888529: F external/org_tensorflow/tensorflow/core/tpu/tpu_library_init_fns.inc:100] TpuEmbeddingEngine_CollateMemory not available in this library.
./scripts/run_all_unit_tests.sh: line 17: 14504 Aborted                 (core dumped) python -m unittest tests.camera_utils_test
2022-09-11 22:45:38.372593: F external/org_tensorflow/tensorflow/core/tpu/tpu_library_init_fns.inc:100] TpuEmbeddingEngine_CollateMemory not available in this library.
./scripts/run_all_unit_tests.sh: line 18: 14981 Aborted                 (core dumped) python -m unittest tests.geopoly_test
2022-09-11 22:45:40.651496: F external/org_tensorflow/tensorflow/core/tpu/tpu_library_init_fns.inc:100] TpuEmbeddingEngine_CollateMemory not available in this library.
./scripts/run_all_unit_tests.sh: line 19: 15432 Aborted                 (core dumped) python -m unittest tests.stepfun_test
2022-09-11 22:45:42.753890: F external/org_tensorflow/tensorflow/core/tpu/tpu_library_init_fns.inc:100] TpuEmbeddingEngine_CollateMemory not available in this library.
./scripts/run_all_unit_tests.sh: line 20: 15884 Aborted                 (core dumped) python -m unittest tests.coord_test
2022-09-11 22:45:44.940510: F external/org_tensorflow/tensorflow/core/tpu/tpu_library_init_fns.inc:100] TpuEmbeddingEngine_CollateMemory not available in this library.
./scripts/run_all_unit_tests.sh: line 21: 16335 Aborted                 (core dumped) python -m unittest tests.image_test
2022-09-11 22:45:47.053111: F external/org_tensorflow/tensorflow/core/tpu/tpu_library_init_fns.inc:100] TpuEmbeddingEngine_CollateMemory not available in this library.
./scripts/run_all_unit_tests.sh: line 22: 16788 Aborted                 (core dumped) python -m unittest tests.ref_utils_test
2022-09-11 22:45:49.244992: F external/org_tensorflow/tensorflow/core/tpu/tpu_library_init_fns.inc:100] TpuEmbeddingEngine_CollateMemory not available in this library.
./scripts/run_all_unit_tests.sh: line 23: 17259 Aborted                 (core dumped) python -m unittest tests.utils_test
2022-09-11 22:45:51.564844: F external/org_tensorflow/tensorflow/core/tpu/tpu_library_init_fns.inc:100] TpuEmbeddingEngine_CollateMemory not available in this library.
./scripts/run_all_unit_tests.sh: line 24: 17712 Aborted                 (core dumped) python -m unittest tests.datasets_test
2022-09-11 22:45:53.660296: F external/org_tensorflow/tensorflow/core/tpu/tpu_library_init_fns.inc:100] TpuEmbeddingEngine_CollateMemory not available in this library.
./scripts/run_all_unit_tests.sh: line 25: 18165 Aborted                 (core dumped) python -m unittest tests.math_test
2022-09-11 22:45:55.757259: F external/org_tensorflow/tensorflow/core/tpu/tpu_library_init_fns.inc:100] TpuEmbeddingEngine_CollateMemory not available in this library.
./scripts/run_all_unit_tests.sh: line 26: 18616 Aborted                 (core dumped) python -m unittest tests.render_test

Training errors:

~/multinerf$ python -m train \
>   --gin_configs=configs/360.gin \
>   --gin_bindings="Config.data_dir = '${DATA_DIR}'" \
>   --gin_bindings="Config.checkpoint_dir = '${DATA_DIR}/checkpoints'" \
>   --logtostderr
2022-09-11 23:06:43.482723: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: :/usr/local/lib
WARNING:absl:GlobalAsyncCheckpointManager is not imported correctly. Checkpointing of GlobalDeviceArrays will not be available.To use the feature, install tensorstore.
2022-09-11 23:06:44.899501: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /home/palisand/miniconda3/envs/multinerf/lib/python3.9/site-packages/cv2/../../lib64::/usr/local/lib
2022-09-11 23:06:44.899561: W tensorflow/stream_executor/cuda/cuda_driver.cc:269] failed call to cuInit: UNKNOWN ERROR (303)
I0911 23:06:44.919915 139717675481088 xla_bridge.py:350] Unable to initialize backend 'tpu_driver': NOT_FOUND: Unable to find driver in registry given worker:
I0911 23:06:44.920102 139717675481088 xla_bridge.py:350] Unable to initialize backend 'cuda': module 'jaxlib.xla_extension' has no attribute 'GpuAllocatorConfig'
I0911 23:06:44.920187 139717675481088 xla_bridge.py:350] Unable to initialize backend 'rocm': module 'jaxlib.xla_extension' has no attribute 'GpuAllocatorConfig'
2022-09-11 23:06:44.974486: F external/org_tensorflow/tensorflow/core/tpu/tpu_library_init_fns.inc:100] TpuEmbeddingEngine_CollateMemory not available in this library.
Fatal Python error: Aborted

Thread 0x00007f10558b2700 (most recent call first):
  File "/home/palisand/miniconda3/envs/multinerf/lib/python3.9/threading.py", line 316 in wait
  File "/home/palisand/miniconda3/envs/multinerf/lib/python3.9/threading.py", line 581 in wait
  File "/home/palisand/miniconda3/envs/multinerf/lib/python3.9/threading.py", line 1304 in run
  File "/home/palisand/miniconda3/envs/multinerf/lib/python3.9/threading.py", line 980 in _bootstrap_inner
  File "/home/palisand/miniconda3/envs/multinerf/lib/python3.9/threading.py", line 937 in _bootstrap

Current thread 0x00007f128e6a7800 (most recent call first):
  File "/home/palisand/miniconda3/envs/multinerf/lib/python3.9/site-packages/jaxlib/xla_client.py", line 110 in make_tpu_client
  File "/home/palisand/miniconda3/envs/multinerf/lib/python3.9/site-packages/jax/_src/lib/xla_bridge.py", line 195 in tpu_client_timer_callback
  File "/home/palisand/miniconda3/envs/multinerf/lib/python3.9/site-packages/jax/_src/lib/xla_bridge.py", line 382 in _init_backend
  File "/home/palisand/miniconda3/envs/multinerf/lib/python3.9/site-packages/jax/_src/lib/xla_bridge.py", line 331 in backends
  File "/home/palisand/miniconda3/envs/multinerf/lib/python3.9/site-packages/jax/_src/lib/xla_bridge.py", line 406 in _get_backend_uncached
  File "/home/palisand/miniconda3/envs/multinerf/lib/python3.9/site-packages/jax/_src/lib/xla_bridge.py", line 422 in get_backend
  File "/home/palisand/miniconda3/envs/multinerf/lib/python3.9/site-packages/jax/_src/dispatch.py", line 428 in lower_xla_callable
  File "/home/palisand/miniconda3/envs/multinerf/lib/python3.9/site-packages/jax/_src/profiler.py", line 313 in wrapper
  File "/home/palisand/miniconda3/envs/multinerf/lib/python3.9/site-packages/jax/_src/dispatch.py", line 324 in _xla_callable_uncached
  File "/home/palisand/miniconda3/envs/multinerf/lib/python3.9/site-packages/jax/_src/dispatch.py", line 195 in xla_primitive_callable
  File "/home/palisand/miniconda3/envs/multinerf/lib/python3.9/site-packages/jax/_src/util.py", line 215 in cached
  File "/home/palisand/miniconda3/envs/multinerf/lib/python3.9/site-packages/jax/_src/util.py", line 222 in wrapper
  File "/home/palisand/miniconda3/envs/multinerf/lib/python3.9/site-packages/jax/_src/dispatch.py", line 111 in apply_primitive
  File "/home/palisand/miniconda3/envs/multinerf/lib/python3.9/site-packages/jax/core.py", line 686 in process_primitive
  File "/home/palisand/miniconda3/envs/multinerf/lib/python3.9/site-packages/jax/core.py", line 328 in bind_with_trace
  File "/home/palisand/miniconda3/envs/multinerf/lib/python3.9/site-packages/jax/core.py", line 325 in bind
  File "/home/palisand/miniconda3/envs/multinerf/lib/python3.9/site-packages/jax/_src/lax/lax.py", line 579 in _convert_element_type
  File "/home/palisand/miniconda3/envs/multinerf/lib/python3.9/site-packages/jax/_src/numpy/lax_numpy.py", line 1902 in array
  File "/home/palisand/miniconda3/envs/multinerf/lib/python3.9/site-packages/jax/_src/numpy/lax_numpy.py", line 1921 in asarray
  File "/home/palisand/miniconda3/envs/multinerf/lib/python3.9/site-packages/jax/_src/prng.py", line 552 in random_seed
  File "/home/palisand/miniconda3/envs/multinerf/lib/python3.9/site-packages/jax/_src/prng.py", line 262 in seed_with_impl
  File "/home/palisand/miniconda3/envs/multinerf/lib/python3.9/site-packages/jax/_src/random.py", line 128 in PRNGKey
  File "/home/palisand/multinerf/train.py", line 45 in main
  File "/home/palisand/miniconda3/envs/multinerf/lib/python3.9/site-packages/absl/app.py", line 254 in _run_main
  File "/home/palisand/miniconda3/envs/multinerf/lib/python3.9/site-packages/absl/app.py", line 308 in run
  File "/home/palisand/multinerf/train.py", line 288 in <module>
  File "/home/palisand/miniconda3/envs/multinerf/lib/python3.9/runpy.py", line 87 in _run_code
  File "/home/palisand/miniconda3/envs/multinerf/lib/python3.9/runpy.py", line 197 in _run_module_as_main
Aborted (core dumped)

Docker

Could you provide a Dockerfile that properly sets up the requirements?

How to train Ref-NeRF on real captured dataset correctly?

Sorry for opening an issue again. I am currently training Ref-NeRF on the released real captured dataset (more precisely, on the 'gardenspheres' scene). I discovered it is in LLFF format, so I modified the blender-refnerf.gin config, disabled the normal metric calculation, and copied some parameters from llff_256.gin; the final config is as follows:

Config.dataset_loader = 'llff'
Config.batching = 'single_image'
Config.near = 0.
Config.far = 1.
Config.factor = 4
Config.forward_facing = True
Config.batch_size = 256
Config.eval_render_interval = 5
Config.render_chunk_size = 256
Config.compute_normal_metrics = False
Config.data_loss_type = 'mse'
Config.distortion_loss_mult = 0.0
Config.orientation_loss_mult = 0.1
Config.orientation_loss_target = 'normals_pred'
Config.predicted_normal_loss_mult = 3e-4
Config.orientation_coarse_loss_mult = 0.01
Config.predicted_normal_coarse_loss_mult = 3e-5
Config.interlevel_loss_mult = 0.0
Config.data_coarse_loss_mult = 0.1
Config.adam_eps = 1e-8

Model.num_levels = 2
Model.single_mlp = True
Model.num_prop_samples = 128  # This needs to be set despite single_mlp = True.
Model.num_nerf_samples = 128
Model.anneal_slope = 0.
Model.dilation_multiplier = 0.
Model.dilation_bias = 0.
Model.single_jitter = False
Model.resample_padding = 0.01

NerfMLP.net_depth = 8
NerfMLP.net_width = 256
NerfMLP.net_depth_viewdirs = 8
NerfMLP.basis_shape = 'octahedron'
NerfMLP.basis_subdivisions = 1
NerfMLP.disable_density_normals = False
NerfMLP.enable_pred_normals = True
NerfMLP.use_directional_enc = True
NerfMLP.use_reflections = True
NerfMLP.deg_view = 5
NerfMLP.enable_pred_roughness = True
NerfMLP.use_diffuse_color = True
NerfMLP.use_specular_tint = True
NerfMLP.use_n_dot_v = True
NerfMLP.bottleneck_width = 128
NerfMLP.density_bias = 0.5
NerfMLP.max_deg_point = 16

However, the outcome of the training is blurry:
[screenshot]

Could you please correct my training config, or release the config used to train the real-captured dataset? Many thanks!

[environment] Couldn't pass the tests in ./scripts/run_all_unit_tests.sh

Hi,
I am encountering this problem when I run ./scripts/run_all_unit_tests.sh after setting up the environment as you illustrated:
[screenshots]
My CUDA version is 11.3 with cuDNN 8.3.2, and I am using an RTX 3090.
The failing positions are as follows:
[screenshot]
I used plain NumPy to run the tests and there was no problem running them, so I am wondering whether it is a JAX issue?
I tried several different jax and jaxlib versions, which still didn't fix it.
So, could you please help me fix this problem?

How to choose raydist_fn

Hi, I'm using my own dataset in which near=1e-4 and far=5. I noticed that when using jnp.reciprocal as Model.raydist_fn, almost all sample points are mapped to a small range around 0.
[screenshot]

If I set near=2 and far=6, the mapping result seems much more reasonable.
[screenshot]

Since there are five mapping functions provided in construct_ray_warps, how can I choose the most appropriate one for my dataset?

Unit tests fail after upgrading jax

Package Version


absl-py 1.2.0
asttokens 2.0.8
astunparse 1.6.3
backcall 0.2.0
cachetools 5.2.0
certifi 2022.6.15
charset-normalizer 2.1.1
chex 0.1.4
colorama 0.4.5
commonmark 0.9.1
cycler 0.11.0
decorator 5.1.1
dm-pix 0.3.3
dm-tree 0.1.7
etils 0.7.1
executing 1.0.0
flatbuffers 1.12
flax 0.6.0
fonttools 4.37.1
gast 0.4.0
gin-config 0.5.0
google-auth 2.11.0
google-auth-oauthlib 0.4.6
google-pasta 0.2.0
grpcio 1.48.1
h5py 3.7.0
idna 3.3
importlib-metadata 4.12.0
importlib-resources 5.9.0
ipython 8.4.0
jax 0.3.17
jaxlib 0.3.15+cuda11.cudnn82
jedi 0.18.1
keras 2.9.0
Keras-Preprocessing 1.1.2
kiwisolver 1.4.4
libclang 14.0.6
Markdown 3.4.1
MarkupSafe 2.1.1
matplotlib 3.5.3
matplotlib-inline 0.1.6
mediapy 1.1.0
msgpack 1.0.4
numpy 1.23.2
oauthlib 3.2.0
opencv-python 4.6.0.66
opt-einsum 3.3.0
optax 0.1.3
packaging 21.3
parso 0.8.3
pexpect 4.8.0
pickleshare 0.7.5
Pillow 9.2.0
pip 22.2.2
prompt-toolkit 3.0.31
protobuf 3.19.4
ptyprocess 0.7.0
pure-eval 0.2.2
pyasn1 0.4.8
pyasn1-modules 0.2.8
Pygments 2.13.0
pyparsing 3.0.9
python-dateutil 2.8.2
PyYAML 6.0
rawpy 0.17.2
requests 2.28.1
requests-oauthlib 1.3.1
rich 11.2.0
rsa 4.9
scipy 1.9.1
setuptools 63.4.1
six 1.16.0
stack-data 0.5.0
tensorboard 2.9.1
tensorboard-data-server 0.6.1
tensorboard-plugin-wit 1.8.1
tensorflow 2.9.2
tensorflow-estimator 2.9.0
tensorflow-io-gcs-filesystem 0.26.0
termcolor 1.1.0
toolz 0.12.0
traitlets 5.3.0
typing_extensions 4.3.0
urllib3 1.26.12
wcwidth 0.2.5
Werkzeug 2.2.2
wheel 0.37.1
wrapt 1.14.1
zipp 3.8.1

Ran 1 test in 2.614s

OK
F...

FAIL: test_compute_sq_dist_reference (tests.geopoly_test.GeopolyTest)
tests.geopoly_test.GeopolyTest.test_compute_sq_dist_reference
Test against a simple reimplementation of compute_sq_dist.

Traceback (most recent call last):
File "/home/comp/20481535/multinerf/tests/geopoly_test.py", line 53, in test_compute_sq_dist_reference
np.testing.assert_allclose(sq_dist, sq_dist_ref, atol=1e-5, rtol=1e-5)
File "/home/comp/20481535/enter/envs/multinerf/lib/python3.9/site-packages/numpy/testing/_private/utils.py", line 1527, in assert_allclose
assert_array_compare(compare, actual, desired, err_msg=str(err_msg),
File "/home/comp/20481535/enter/envs/multinerf/lib/python3.9/site-packages/numpy/testing/_private/utils.py", line 844, in assert_array_compare
raise AssertionError(msg)
AssertionError:
Not equal to tolerance rtol=1e-05, atol=1e-05

Mismatched elements: 8965 / 10000 (89.7%)
Max absolute difference: 0.01095963
Max relative difference: 0.00133467
x: array([[25.836046, 25.031933, 11.741873, ..., 24.258749, 27.689455,
16.63743 ],
[ 9.823105, 12.732669, 13.659373, ..., 33.38259 , 29.974373,...
y: array([[25.83547 , 25.03573 , 11.741478, ..., 24.258633, 27.685478,
16.636293],
[ 9.822346, 12.731433, 13.661174, ..., 33.374432, 29.975821,...


Ran 4 tests in 14.529s

FAILED (failures=1)
...............................Mean Error = 0.08031821995973587, Tolerance = 0.1
.Mean Error = 0.08638736605644226, Tolerance = 0.1
.EEEE...................

ERROR: test_sample_intervals_unbiased_deterministic_bounded (tests.stepfun_test.StepFunTest)
tests.stepfun_test.StepFunTest.test_sample_intervals_unbiased_deterministic_bounded
test_sample_intervals_unbiased_deterministic_bounded(False, True)

Traceback (most recent call last):
File "/home/comp/20481535/enter/envs/multinerf/lib/python3.9/site-packages/absl/testing/parameterized.py", line 314, in bound_param_test
return test_method(self, *testcase_params)
File "/home/comp/20481535/multinerf/tests/stepfun_test.py", line 551, in test_sample_intervals_unbiased
ts = t[None].tile([n, 1])
AttributeError: 'DeviceArray' object has no attribute 'tile'

======================================================================
ERROR: test_sample_intervals_unbiased_deterministic_unbounded (tests.stepfun_test.StepFunTest)
tests.stepfun_test.StepFunTest.test_sample_intervals_unbiased_deterministic_unbounded
test_sample_intervals_unbiased_deterministic_unbounded(False, False)

Traceback (most recent call last):
File "/home/comp/20481535/enter/envs/multinerf/lib/python3.9/site-packages/absl/testing/parameterized.py", line 314, in bound_param_test
return test_method(self, *testcase_params)
File "/home/comp/20481535/multinerf/tests/stepfun_test.py", line 551, in test_sample_intervals_unbiased
ts = t[None].tile([n, 1])
AttributeError: 'DeviceArray' object has no attribute 'tile'

======================================================================
ERROR: test_sample_intervals_unbiased_random_bounded (tests.stepfun_test.StepFunTest)
tests.stepfun_test.StepFunTest.test_sample_intervals_unbiased_random_bounded
test_sample_intervals_unbiased_random_bounded(True, True)

Traceback (most recent call last):
File "/home/comp/20481535/enter/envs/multinerf/lib/python3.9/site-packages/absl/testing/parameterized.py", line 314, in bound_param_test
return test_method(self, *testcase_params)
File "/home/comp/20481535/multinerf/tests/stepfun_test.py", line 551, in test_sample_intervals_unbiased
ts = t[None].tile([n, 1])
AttributeError: 'DeviceArray' object has no attribute 'tile'

======================================================================
ERROR: test_sample_intervals_unbiased_random_unbounded (tests.stepfun_test.StepFunTest)
tests.stepfun_test.StepFunTest.test_sample_intervals_unbiased_random_unbounded
test_sample_intervals_unbiased_random_unbounded(True, False)

Traceback (most recent call last):
File "/home/comp/20481535/enter/envs/multinerf/lib/python3.9/site-packages/absl/testing/parameterized.py", line 314, in bound_param_test
return test_method(self, *testcase_params)
File "/home/comp/20481535/multinerf/tests/stepfun_test.py", line 551, in test_sample_intervals_unbiased
ts = t[None].tile([n, 1])
AttributeError: 'DeviceArray' object has no attribute 'tile'


Ran 56 tests in 104.593s

FAILED (errors=4)
............PE of degree 5 has a maximum error of 2.5369226932525635e-06
.PE of degree 10 has a maximum error of 6.4849853515625e-05
.PE of degree 15 has a maximum error of 0.002378210425376892
.PE of degree 20 has a maximum error of 0.11622805148363113
.PE of degree 25 has a maximum error of 1.999955415725708
.PE of degree 30 has a maximum error of 1.9999704360961914
....

Ran 21 tests in 30.246s

OK
......

Ran 6 tests in 6.484s

OK
..

Ran 2 tests in 5.741s

OK
.

Ran 1 test in 0.636s

OK
.

Ran 1 test in 1.825s

OK
./scripts/run_all_unit_tests.sh: line 24: 418104 Aborted (core dumped) python -m unittest tests.datasets_test
.......

Ran 7 tests in 14.740s

OK
.........................................F

FAIL: test_rotated_conic_frustums (tests.render_test.RenderTest)
tests.render_test.RenderTest.test_rotated_conic_frustums

Traceback (most recent call last):
File "/home/comp/20481535/multinerf/tests/render_test.py", line 405, in test_rotated_conic_frustums
np.testing.assert_allclose(rot_mean, gt_rot_mean, atol=1E-5, rtol=1E-5)
File "/home/comp/20481535/enter/envs/multinerf/lib/python3.9/site-packages/numpy/testing/_private/utils.py", line 1527, in assert_allclose
assert_array_compare(compare, actual, desired, err_msg=str(err_msg),
File "/home/comp/20481535/enter/envs/multinerf/lib/python3.9/site-packages/numpy/testing/_private/utils.py", line 844, in assert_array_compare
raise AssertionError(msg)
AssertionError:
Not equal to tolerance rtol=1e-05, atol=1e-05

Mismatched elements: 3 / 3 (100%)
Max absolute difference: 9.743869e-05
Max relative difference: 0.00122184
x: array([[-0.07965 , 0.632656, -0.33236 ]], dtype=float32)
y: array([[-0.079748, 0.632627, -0.332315]], dtype=float32)


Ran 42 tests in 40.311s

FAILED (failures=1)

any colab

Hi, is it possible to get a Colab to check/test this? Thanks!

How to create the render_path_file

Thanks for your work. I noticed that when rendering video, we can define our own camera track through render_path_file. How do I generate a render_path_file? Is there any visualization tool? Thanks in advance.

dataset problem

Hi, I met a problem when running this:
python -m train --gin_configs=configs/360.gin --gin_bindings="Config.data_dir = 'my_dataset_dir'" --gin_bindings="Config.checkpoint_dir = 'my_dataset_dir/checkpoints'" --logtostderr
[screenshot]
What should I do next? Please give me some advice, thanks!
