
f2-nerf's Introduction

F2-NeRF

This is the repo for the implementation of F2-NeRF: Fast Neural Radiance Field Training with Free Camera Trajectories.

Install

The development of this project is primarily based on LibTorch.

Step 1. Install dependencies

For Debian based Linux distributions:

sudo apt install zlib1g-dev

For Arch based Linux distributions:

sudo pacman -S zlib

Step 2. Clone this repository

git clone --recursive https://github.com/Totoro97/f2-nerf.git
cd f2-nerf

Step 3. Download pre-compiled LibTorch

We use torch-1.13.1+cu117 as an example.

cd External
wget https://download.pytorch.org/libtorch/cu117/libtorch-cxx11-abi-shared-with-deps-1.13.1%2Bcu117.zip
unzip ./libtorch-cxx11-abi-shared-with-deps-1.13.1%2Bcu117.zip

Step 4. Compile

The lowest g++ version I have tested is 7.5.0.

cd ..
cmake . -B build
cmake --build build --target main --config RelWithDebInfo -j

Run

Training

Here is an example command to train F2-NeRF:

python scripts/run.py --config-name=wanjinyou dataset_name=example case_name=ngp_fox mode=train +work_dir=$(pwd)

Render test images

Simply run:

python scripts/run.py --config-name=wanjinyou dataset_name=example case_name=ngp_fox mode=test is_continue=true +work_dir=$(pwd)

Render path

We provide a script to generate a render path (by interpolating the input camera poses). For example, for the fox data, run:

python scripts/inter_poses.py --data_dir ./data/example/ngp_fox --key_poses 5,10,15,20,25,30,35,40,45,49 --n_out_poses 200

This generates the file poses_render.npy in the data directory. Then run:

python scripts/run.py --config-name=wanjinyou dataset_name=example case_name=ngp_fox mode=render_path is_continue=true +work_dir=$(pwd)

The synthesized images can be found in ./exp/ngp_fox/test/novel_images.
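
For intuition, pose interpolation between key poses can be sketched as follows. This is an illustrative stand-in, not the actual implementation of scripts/inter_poses.py; the assumption that translations are interpolated linearly and rotations via quaternion slerp is mine.

import numpy as np

def slerp(q0, q1, t):
    # Spherical linear interpolation between two unit quaternions.
    dot = np.dot(q0, q1)
    if dot < 0.0:            # take the shorter path on the quaternion sphere
        q1, dot = -q1, -dot
    if dot > 0.9995:         # nearly parallel: fall back to normalized lerp
        q = q0 + t * (q1 - q0)
        return q / np.linalg.norm(q)
    theta = np.arccos(np.clip(dot, -1.0, 1.0))
    return (np.sin((1 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)

def interpolate_poses(key_ts, key_qs, n_out):
    # key_ts: (k, 3) translations; key_qs: (k, 4) unit quaternions.
    # Returns n_out (translation, quaternion) pairs along the key-pose sequence.
    out = []
    for s in np.linspace(0, len(key_ts) - 1, n_out, endpoint=False):
        i = int(s)
        f = s - i
        j = min(i + 1, len(key_ts) - 1)
        t = (1 - f) * key_ts[i] + f * key_ts[j]
        out.append((t, slerp(key_qs[i], key_qs[j], f)))
    return out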

Train F2-NeRF on your custom data

Make sure the images are at ./data/<your-dataset-name>/<your-case-name>/images

  1. Run COLMAP SfM:

bash scripts/local_colmap_and_resize.sh ./data/<your-dataset-name>/<your-case-name>

or run hloc if COLMAP fails (make sure hloc is installed):

bash scripts/local_hloc_and_resize.sh ./data/<your-dataset-name>/<your-case-name>

  2. Generate the cameras file:

python scripts/colmap2poses.py --data_dir ./data/<your-dataset-name>/<your-case-name>

  3. Run F2-NeRF with a command similar to the one for the example data:

python scripts/run.py --config-name=wanjinyou \
dataset_name=<your-dataset-name> case_name=<your-case-name> mode=train \
+work_dir=$(pwd)
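
After these steps, the case directory should contain roughly the following. This sketch is assembled from the paths mentioned elsewhere on this page (the images folder above, the sparse/0 model and images_N folders in the COLMAP script output, and the cams_meta.npy file present in the example data); exact contents may differ.

data/<your-dataset-name>/<your-case-name>/
├── images/                          input images (required before step 1)
├── images_2/ images_4/ images_8/    downscaled copies made by the resize script
├── sparse/0/                        COLMAP sparse model (step 1)
└── cams_meta.npy                    cameras file (step 2)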

Train F2-NeRF on LLFF/NeRF-360-V2 dataset

We provide a script to convert the LLFF camera format to our camera format. For example:

python scripts/llff2poses.py --data_dir=xxx/nerf_llff_data/horns

TODO/Future work

  • Add anti-aliasing

Acknowledgment

Besides LibTorch, this project is also built upon several awesome open-source libraries. Some of the code snippets are inspired by instant-ngp, torch-ngp and ngp-pl. The COLMAP processing scripts are from multinerf. The example data ngp_fox is from instant-ngp.

Citation

Cite as below if you find this repository helpful to your project:

@inproceedings{wang2023f2nerf,
  title={F2-NeRF: Fast Neural Radiance Field Training with Free Camera Trajectories},
  author={Wang, Peng and Liu, Yuan and Chen, Zhaoxi and Liu, Lingjie and Liu, Ziwei and Komura, Taku and Theobalt, Christian and Wang, Wenping},
  booktitle={CVPR},
  year={2023}
}

f2-nerf's People

Contributors

hrspythonix, totoro97


f2-nerf's Issues

custom dataset of KITTI

I want to test on the KITTI dataset.
I have real data; how do I generate a file like the following:
./f2-nerf/data/example/ngp_fox/cams_meta.npy

How do I get dist_params and bounds? Thanks.
From Dataset.cpp:

cam_data = cam_data.reshape({n_images_, 27});
Tensor poses = cam_data.slice(1, 0, 12).reshape({-1, 3, 4}).contiguous();       // values 0-11: 3x4 pose matrix [R|t]

Tensor intri = cam_data.slice(1, 12, 21).reshape({-1, 3, 3}).contiguous();      // values 12-20: 3x3 intrinsic matrix
intri.index_put_({Slc(), Slc(0, 2), Slc(0, 3)}, intri.index({Slc(), Slc(0, 2), Slc(0, 3)}) / factor);  // downscale fx, fy, cx, cy by the resolution factor

Tensor dist_params = cam_data.slice(1, 21, 25).reshape({-1, 4}).contiguous();   // values 21-24: [k1, k2, p1, p2]
Tensor bounds = cam_data.slice(1, 25, 27).reshape({-1, 2}).contiguous();        // values 25-26: per-camera near/far bounds
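
From the slices above, one way to assemble cams_meta.npy from known camera parameters is sketched below. This is my own reading of the 27-value layout, not an official script; in particular the near/far interpretation of bounds and the dtype are assumptions worth checking against scripts/colmap2poses.py.

import numpy as np

def build_cams_meta(poses, intrinsics, dist_params, bounds):
    # poses:       (n, 3, 4) per-camera pose matrices [R|t]
    # intrinsics:  (n, 3, 3) pinhole intrinsic matrices (full resolution)
    # dist_params: (n, 4) distortion coefficients [k1, k2, p1, p2]
    # bounds:      (n, 2) per-camera bounds (assumed [near, far])
    n = poses.shape[0]
    cam_data = np.concatenate([
        poses.reshape(n, 12),
        intrinsics.reshape(n, 9),
        dist_params.reshape(n, 4),
        bounds.reshape(n, 2),
    ], axis=1)
    assert cam_data.shape == (n, 27)
    return cam_data.astype(np.float64)   # dtype assumed; check colmap2poses.py

# e.g. np.save('./data/<your-dataset-name>/<your-case-name>/cams_meta.npy', build_cams_meta(...))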

camera pose normalization

Hi,
thanks for publishing the code for this great work!
I see that you normalize the camera poses in the dataset init.
If I want to retrieve a point cloud from depth maps of trained scenes, is it possible to get it in the original pose, before normalization? So, for example, the point cloud would be at the scale of the original calibration.
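
A general note (not repo-specific code): if the normalization applied in the dataset init is a similarity transform, points recovered in the normalized frame can be mapped back by applying its inverse. A hypothetical sketch, assuming the dataset stores a center and radius such that p_norm = (p - center) / radius; the names and the exact form should be checked against the actual normalization code:

import numpy as np

def denormalize_points(points_norm, center, radius):
    # Inverse of p_norm = (p - center) / radius.
    # `center` (3,) and `radius` (scalar) are hypothetical names for
    # whatever the dataset actually stores during normalization.
    return np.asarray(points_norm) * radius + np.asarray(center)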

[F Renderer.cpp:206] Check failed: std::isfinite((colors).mean().item<float>())

I use my custom data:
python scripts/run.py --config-name=wanjinyou dataset_name=example case_name=outdoor mode=train +work_dir=$(pwd)
Working directory is /cephfs/yuanweizhong/project/f2-nerf
register Hash3DAnchoredInfo
register TCNNWPInfo
register volume render info
Aoligei!
[Dataset::Dataset] begin
Load render poses
[LoadImages] begin
[LoadImages] end in 17.7646 seconds
Number of train/test/val images: 1822/261/0
[Dataset::Dataset] end in 35.4331 seconds
[PersSampler::PersSampler] begin
[PersOctree::PersOctree] begin

[PersOctree::ConstructEdgePool] begin
edge_pool_.size() is value 9702
[PersOctree::ConstructEdgePool] end in 0.0749403 seconds
[PersOctree::PersOctree] end in 371.741 seconds
[PersSampler::PersSampler] end in 377.338 seconds
[Hash3DAnchored::Hash3DAnchored] begin
Warning: FullyFusedMLP is not supported for the selected architecture 70. Falling back to CutlassMLP. For maximum performance, raise the target GPU architecture to 75+.
Warning: FullyFusedMLP is not supported for the selected architecture 70. Falling back to CutlassMLP. For maximum performance, raise the target GPU architecture to 75+.
[Hash3DAnchored::Hash3DAnchored] end in 13.7505 seconds
Warning: FullyFusedMLP is not supported for the selected architecture 70. Falling back to CutlassMLP. For maximum performance, raise the target GPU architecture to 75+.
Warning: FullyFusedMLP is not supported for the selected architecture 70. Falling back to CutlassMLP. For maximum performance, raise the target GPU architecture to 75+.
n_nodes_before is value 3169
n_nodes_compacted is value 3117
tree_nodes_.size() is value 3117
[F Renderer.cpp:206] Check failed: std::isfinite((colors).mean().item())
Aborted (core dumped)

cmake --build build --target main --config RelWithDebInfo -j

CUDA=11.2
libtorch=1.9

f2-nerf/src/Dataset/Dataset.cpp:182:45: error: too many arguments to function ‘std::vector<at::Tensor> at::meshgrid(at::TensorList)’ 182 | auto ij = torch::meshgrid({ ii, jj }, "ij");
f2-nerf/src/Dataset/Dataset.cpp:203:45: error: too many arguments to function ‘std::vector<at::Tensor> at::meshgrid(at::TensorList)’ 203 | auto ij = torch::meshgrid({ ii, jj }, "ij");

f2-nerf/External/libtorch/include/ATen/Functions.h:1317:35: note: declared here
1317 | TORCH_API std::vector<at::Tensor> meshgrid(at::TensorList tensors);
| ^~~~~~~~
make[3]: *** [CMakeFiles/main.dir/build.make:147: CMakeFiles/main.dir/src/Dataset/Dataset.cpp.o] Error 1
make[2]: *** [CMakeFiles/Makefile2:187: CMakeFiles/main.dir/all] Error 2
make[1]: *** [CMakeFiles/Makefile2:194: CMakeFiles/main.dir/rule] Error 2
make: *** [Makefile:125: main] Error 2

How to generate cams_meta.npy?

If I want to train on my own dataset with known camera intrinsics and extrinsics, how can I generate the corresponding cams_meta.npy?
Can you describe the meaning of each parameter in cams_meta.npy?
Thanks!!

Render to any resolution

When training, I used this config:

dataset:
  factor: 2

so the dataset is sampled for training at (original resolution) / 2.

But when testing, if I use the config:

dataset:
  factor: 1

it goes wrong.

How should I solve it?

Error running training

When following the instructions, I get:

$ python scripts/run.py --config-name=wanjinyou dataset_name=example case_name=ngp_fox mode=train +work_dir=$(pwd)
Working directory is ...
register Hash3DAnchoredInfo
register TCNNWPInfo
register volume render info
Aoligei!
[Dataset::Dataset] begin
Load render poses
[LoadImages] begin
[LoadImages] end in 2.3389 seconds
Number of train/test/val images: 43/7/0
[Dataset::Dataset] end in 6.70828 seconds
[PersSampler::PersSampler] begin
[PersOctree::PersOctree] begin
[PersOctree::ConstructEdgePool] begin
edge_pool_.size() is value 1131
[PersOctree::ConstructEdgePool] end in 0.00132046 seconds
[PersOctree::PersOctree] end in 12.9086 seconds
[PersSampler::PersSampler] end in 12.934 seconds
[Hash3DAnchored::Hash3DAnchored] begin
Warning: FullyFusedMLP is not supported for the selected architecture 61. Falling back to CutlassMLP. For maximum performance, raise the target GPU architecture to 75+.
Warning: FullyFusedMLP is not supported for the selected architecture 61. Falling back to CutlassMLP. For maximum performance, raise the target GPU architecture to 75+.
[Hash3DAnchored::Hash3DAnchored] end in 1.74168 seconds
Warning: FullyFusedMLP is not supported for the selected architecture 61. Falling back to CutlassMLP. For maximum performance, raise the target GPU architecture to 75+.
Warning: FullyFusedMLP is not supported for the selected architecture 61. Falling back to CutlassMLP. For maximum performance, raise the target GPU architecture to 75+.
[F TCNNWP.cpp:130] Check failed: (params.scalar_type()) == (torch_type(tcnnwp->module_->param_precision())) 
Aborted (core dumped)

(on Linux, with an NVIDIA 1080Ti)

CMake build error

f2-nerf/src/Field/Hash3DAnchored.cu(151): error: no instance of overloaded function "atomicAdd" matches the argument list
argument types are: (__half2 *, __half2)

Failed to reconstruct aerial photography scene

Hi,
I have some photos of a building taken by a drone, and they can be reconstructed successfully by instant-ngp:

[attached images: gym-1, gym-2]

I tried to reconstruct it with F2-NeRF, but found the result full of artifacts (with both free.yaml and wanjinyou.yaml):

[attached renderings: RGB / depth / oct-depth]

Are there any hyperparameters that need to be changed for scenes like this?
Thanks.

Bug `--config-name=wanjiyou`

python scripts/run.py --config-name=wanjiyou dataset_name=example case_name=ngp_fox mode=render_path is_continue=true

it should be --config-name=wanjinyou

Python bindings?

Are you planning to create Python bindings for the Anchored Hash Grid and Pers Sampler?

Thanks.

Failed to create sparse model when using custom data

$ bash scripts/local_colmap_and_resize.sh ./data/mydata/loop

Loading database
==============================================================================

Loading cameras... 1 in 0.000s
Loading matches... 355 in 0.016s
Loading images... 76 in 0.042s (connected 74)
Building correspondence graph... in 0.012s (ignored 0)

Elapsed time: 0.001 [minutes]


==============================================================================
Finding good initial image pair
==============================================================================

  => No good initial image pair found.
  => Relaxing the initialization constraints.

==============================================================================
Finding good initial image pair
==============================================================================

  => No good initial image pair found.
  => Relaxing the initialization constraints.

==============================================================================
Finding good initial image pair
==============================================================================

  => No good initial image pair found.
  => Relaxing the initialization constraints.

==============================================================================
Finding good initial image pair
==============================================================================


==============================================================================
Initializing with image pair #3 and #12
==============================================================================

...
...
==============================================================================
Retriangulation
==============================================================================

  => Completed observations: 0
  => Merged observations: 0
  => Retriangulated observations: 0

==============================================================================
Global bundle adjustment
==============================================================================

iter      cost      cost_change  |gradient|   |step|    tr_ratio  tr_radius  ls_iter  iter_time  total_time
   0  7.545661e+01    0.00e+00    6.63e-02   0.00e+00   0.00e+00  1.00e+04        0    2.36e-04    4.16e-04


Bundle adjustment report
------------------------
    Residuals : 338
   Parameters : 221
   Iterations : 1
         Time : 0.000432679 [s]
 Initial cost : 0.472487 [px]
   Final cost : 0.472487 [px]
  Termination : Convergence

  => Completed observations: 0
  => Merged observations: 0
  => Filtered observations: 0
  => Changed observations: 0.000000
  => Filtered images: 0

==============================================================================
Registering image #44 (4)
==============================================================================

  => Image sees 52 / 292 points
  => Could not register, trying another image.

==============================================================================
Registering image #46 (4)
==============================================================================

  => Image sees 39 / 164 points
  => Could not register, trying another image.

==============================================================================
Registering image #50 (4)
==============================================================================

  => Image sees 34 / 313 points
  => Could not register, trying another image.

==============================================================================
Finding good initial image pair
==============================================================================


==============================================================================
Initializing with image pair #23 and #22
==============================================================================


==============================================================================
Global bundle adjustment

...

Bundle adjustment report
------------------------
    Residuals : 108
   Parameters : 92
   Iterations : 101
         Time : 0.0117295 [s]
 Initial cost : 2.95252 [px]
   Final cost : 0.329608 [px]
  Termination : No convergence

  => Filtered observations: 0
  => Filtered images: 0

==============================================================================
Finding good initial image pair
==============================================================================


==============================================================================
Initializing with image pair #71 and #73
==============================================================================


==============================================================================
Global bundle adjustment
==============================================================================

...


Bundle adjustment report
------------------------
    Residuals : 108
   Parameters : 92
   Iterations : 101
         Time : 0.0101432 [s]
 Initial cost : 38.3953 [px]
   Final cost : 2.33497 [px]
  Termination : No convergence

  => Filtered observations: 16
  => Filtered images: 0

==============================================================================
Finding good initial image pair
==============================================================================

  => No good initial image pair found.

Elapsed time: 0.148 [minutes]
ERROR: failed to create sparse model

==============================================================================
Reading reconstruction
==============================================================================

F0517 02:01:50.685370 86435 reconstruction.cc:743] cameras, images, points3D files do not exist at ./data/b1f13/loop/sparse/0
*** Check failure stack trace: ***
    @     0x7f8d506571c3  google::LogMessage::Fail()
    @     0x7f8d5065c25b  google::LogMessage::SendToLog()
    @     0x7f8d50656ebf  google::LogMessage::Flush()
    @     0x7f8d506576ef  google::LogMessageFatal::~LogMessageFatal()
    @     0x5563a4812727  colmap::Reconstruction::Read()
    @     0x5563a474824e  colmap::RunImageUndistorter()
    @     0x5563a4728102  main
    @     0x7f8d4ee14083  __libc_start_main
    @     0x5563a472c43e  _start
scripts/local_colmap_and_resize.sh: line 63: 86435 Aborted                 (core dumped) colmap image_undistorter --image_path "$DATASET_PATH"/images --input_path "$DATASET_PATH"/sparse/0 --output_path "$DATASET_PATH"/dense --output_type COLMAP
~/f2-nerf/data/mydata/loop/images_2 ~/f2-nerf
xargs: mogrify: No such file or directory
~/f2-nerf
~/f2-nerf/data/mydata/loop/images_4 ~/f2-nerf
xargs: mogrify: No such file or directory
~/f2-nerf
~/f2-nerf/data/mydata/loop/images_8 ~/f2-nerf
xargs: mogrify: No such file or directory
~/f2-nerf

Cmake Error

Dear author,
I got the following error when executing the cmake . -B build command. Do you have any solution? My CUDA version is 11.6.

[screenshot of the error]

why update octree after first density calculation in render?

In the render function of the renderer module, after the first density calculation (the tcnn forward pass), pts_sampler_->UpdateOctNodes is run. I don't understand the purpose of UpdateOctNodes, especially why alpha and weight are passed as input. Is it to accelerate training, i.e. if nothing is in a sub-grid, we don't need to compute it? And why are MarkVistNode and node invalidation inside it? Thank you very much!

[screenshots of the relevant code]

Unable to reconstruct dynamic objects?

When I train on the KITTI dataset, the algorithm is unable to reconstruct dynamic objects, such as cars. What is the reason for this?
For instance: cars in motion are automatically removed.

Compilation Error

Hi, thank you for your great work. I got a compilation error on Ubuntu 22.04; it works well on Ubuntu 20.04.
I got the following error when running cmake --build build --target main --config RelWithDebInfo -j:

Eigen fatal error: Core: No such file or directory

What's the demo dataset?

Hello,

I would like to express my sincere appreciation for your amazing work.
I am curious as to which dataset these images have been sourced from.
If it happens to be a collection that you have personally compiled, might there be any plans for making these datasets publicly available in the future?
Thank you.
[two screenshots attached]

Running into NaN when training on ScanNet images

I followed all the required data preprocessing and then ran f2-nerf on a short snippet (i.e. 50 frames at a 10-frame interval) of a ScanNet sequence, and ran into NaNs. Could you look into this, please? The images are attached for your reference. The processed data is unfortunately too large to upload.
images.zip

Script/Explanation for figure 4

Do you have any script that can visualize something like figure 4, or any detailed explanations?

I wrote something that follows your formula, but get totally different results.
I use a toy example where there are two 1D cameras, and I use F(x) = (C1(x), C2(x)) as warping function. The warping looks roughly correct for theta=30
截圖 2023-05-08 下午3 44 17
but totally different for theta=180
截圖 2023-05-08 下午3 41 01
Should I increase the number of cameras and really do PCA? I feel like it should work with only two cameras.
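
For anyone reproducing this, a self-contained toy of the two-1D-camera warp described above could look like the following. This is my own sketch of the setup in this issue, not code from the paper or this repo; the camera placement and angles are arbitrary choices.

import numpy as np

def make_1d_camera(center, angle):
    # A 1D pinhole camera in a 2D world (focal length 1).
    # `angle` is the viewing direction measured from the +y axis.
    direction = np.array([np.sin(angle), np.cos(angle)])   # optical axis
    right = np.array([np.cos(angle), -np.sin(angle)])      # image axis
    center = np.asarray(center, dtype=float)
    def project(x):
        rel = np.asarray(x, dtype=float) - center
        return (rel @ right) / (rel @ direction)           # perspective division
    return project

# Two cameras whose optical axes form an angle theta, both seeing the region y > 0.
theta = np.deg2rad(30.0)
cam1 = make_1d_camera([-1.0, 0.0], +theta / 2.0)
cam2 = make_1d_camera([+1.0, 0.0], -theta / 2.0)

# The warp F(x) = (C1(x), C2(x)): each 2D point maps to its pair of image coordinates.
warp = lambda x: np.array([cam1(x), cam2(x)])

xs, ys = np.meshgrid(np.linspace(-1.0, 1.0, 9), np.linspace(1.0, 5.0, 9))
pts = np.stack([xs, ys], axis=-1).reshape(-1, 2)
warped = np.array([warp(p) for p in pts])
print(warped.shape)   # (81, 2)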

Is it possible to generate a ply file?

Hi, thank you for your great work. I saw that the demo generates a sequence of images by interpolating camera poses; I wonder if it is possible to generate the 3D object (maybe a .ply file) for the entire scene instead, so we can open it using software like Blender.

Training time

The code works! May I ask how many iterations the training time mentioned in the paper (12 minutes) corresponds to? For the fox dataset, my training seems to be slow. My environment is as follows:
RTX 6000
NVIDIA 510.108.03
CUDA 11.3
CUDNN 8.7.0
gcc 9.4.0

Paper: amazing!!! Engineering, but ...

Free cameras, faster training and rendering, dynamic objects/scenes, memory reduction, ...
Of all these NeRF optimization points, F2-NeRF has delivered the fast and camera-free ones. (CVPR best paper, it must be!!!)

But it is implemented in LibTorch and CUDA and needs half precision, ...
Too many compilation errors, too many runtime errors, ...

Hi, great authors, please help me: I want to fix these issues and submit them, so the project spreads more widely. ...

Train F2-NeRF on your custom data

Generate cameras file:

When I run the command python scripts/colmap2poses.py --data_dir ./data//
the issue is as follows:
Traceback (most recent call last):
File "/home/xxx/anaconda3/envs/F2-NERF/lib/python3.9/site-packages/click/core.py", line 1055, in main
rv = self.invoke(ctx)
File "/home/xxx/anaconda3/envs/F2-NERF/lib/python3.9/site-packages/click/core.py", line 1404, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/home/xxx/anaconda3/envs/F2-NERF/lib/python3.9/site-packages/click/core.py", line 760, in invoke
return __callback(*args, **kwargs)
File "/home/xxx/f2-nerf/scripts/colmap2poses.py", line 218, in main
dataset = Dataset(data_dir)
File "/home/xxx/f2-nerf/scripts/colmap2poses.py", line 137, in init
image_id_to_image_idx[name_to_ids[image_name]] = sorted_image_names.index(image_name)
IndexError: index 372 is out of bounds for axis 0 with size 29
python-BaseException

Error when training

Hi, and thanks for your excellent work.
When I train the model using python scripts/run.py --config-name=wanjinyou dataset_name=example case_name=ngp_fox mode=train +work_dir=$(pwd),

The error occurs:
Traceback (most recent call last): File "/home/xuxumiao/new_disk/MA/f2-nerf/scripts/run.py", line 38, in <module> @hydra.main(version_base=None, config_path='../confs', config_name='default') TypeError: main() got an unexpected keyword argument 'version_base'

Looking forward to your answer ^ ^

python scripts/run.py --config-name=wanjinyou dataset_name=example case_name=ngp_fox mode=train
Error executing job with overrides: ['dataset_name=example', 'case_name=ngp_fox', 'mode=train']
Traceback (most recent call last):
File "scripts/run.py", line 59, in main
make_image_list(data_path, conf['dataset']['factor'])
File "scripts/run.py", line 35, in make_image_list
f = open(os.path.join(data_path, 'image_list.txt'), 'w')
FileNotFoundError: [Errno 2] No such file or directory: '/cephfs/yuanweizhong/project/f2-nerf/outputs/2023-04-23/17-50-45/data/example/ngp_fox/image_list.txt'

free datasets

Are the free-dataset images taken with different camera intrinsics?

Floaters and blur in rendered images

Really appreciate your important work and the sharing of this project! I used data captured by a DJI action camera to reconstruct the scene, and the results are high quality!

[rendered image: color_20000_208]

But recently I moved a flying camera around an unbounded scene, and the rendered images have floaters and blur!

[rendered image: color_20000_008]

I checked the camera poses (as below). Looks like no problem.

[screenshot of camera poses]

All the data, images and camera poses, is available at:
https://drive.google.com/file/d/1Zf36hdqAk1NI5u9ucJ7rQs2HBc3i1sIA/view?usp=sharing
Looking forward to your attention to this issue.

CMAKE error

Running cmake --build build --target main --config RelWithDebInfo -j, I get some errors:

xxx/f2-nerf/src/Field/TCNNWP.h(11): warning: overloaded virtual function "Field::Query" is only partially overridden in class "TCNNWP"
xxx/f2-nerf/src/Field/TCNNWP.h(40): error: qualified name is not allowed
xxx/f2-nerf/src/Field/TCNNWP.h(42): error: Function is not a template
xxx/f2-nerf/src/Field/TCNNWP.h(42): error: not a class or struct name
xxx/f2-nerf/src/Field/TCNNWP.h(44): error: identifier "variable_list" is undefined
xxx/f2-nerf/src/Field/TCNNWP.h(44): error: identifier "AutogradContext" is undefined

How can I solve this?

AssertionError: Can not find executable file

python scripts/run.py --config-name=wanjinyou dataset_name=example case_name=ngp_fox mode=render_path is_continue=true
Error executing job with overrides: ['dataset_name=example', 'case_name=ngp_fox', 'mode=render_path', 'is_continue=true']
Traceback (most recent call last):
File "scripts/run.py", line 74, in main
assert False, 'Can not find executable file'
AssertionError: Can not find executable file

Set the environment variable HYDRA_FULL_ERROR=1 for a complete stack trace.

Several paper typos

Fixing these would help in understanding the formulas:

  1. In Definition 1, in the last sentence, the summation over i should go up to n_c, not n.
  2. In Problem 1, in the last sentence, the summation over j should go up to n_p, not K.
  3. What is Figure 11? It is not described in the text.

Thank you.

A CMake error was encountered during compilation

Thanks to the author for the great work, but I encountered an error during compilation. I hope you can give us some information about modifying the CMakeLists.

-- The C compiler identification is GNU 9.4.0
-- The CXX compiler identification is GNU 9.4.0
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- The CUDA compiler identification is NVIDIA 11.6.124
-- Detecting CUDA compiler ABI info
-- Detecting CUDA compiler ABI info - done
-- Check for working CUDA compiler: /usr/local/cuda/bin/nvcc - skipped
-- Detecting CUDA compile features
-- Detecting CUDA compile features - done
-- No release type specified. Setting to 'Release'.
-- Obtained CUDA architectures automatically from installed GPUs
-- Targeting CUDA architectures: 70
CMake Warning at External/tiny-cuda-nn/CMakeLists.txt:204 (message):
Fully fused MLPs do not support GPU architectures of 70 or less. Falling
back to CUTLASS MLPs. Remove GPU architectures 70 and lower to allow
maximum performance

-- Module support is disabled.
-- Version: 9.1.1
-- Build type: Release
-- Found ZLIB: /usr/lib/x86_64-linux-gnu/libz.so (found version "1.2.11")
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed
-- Looking for pthread_create in pthreads
-- Looking for pthread_create in pthreads - not found
-- Looking for pthread_create in pthread
-- Looking for pthread_create in pthread - found
-- Found Threads: TRUE
-- Found CUDA: /usr/local/cuda (found version "11.6")
-- Caffe2: CUDA detected: 11.6
-- Caffe2: CUDA nvcc is: /usr/local/cuda/bin/nvcc
-- Caffe2: CUDA toolkit directory: /usr/local/cuda
-- Caffe2: Header version is: 11.6
-- Found CUDNN: /usr/local/cuda/lib64/libcudnn.so
-- Found cuDNN: v8.6.0 (include: /usr/local/cuda/include, library: /usr/local/cuda/lib64/libcudnn.so)
-- /usr/local/cuda/lib64/libnvrtc.so shorthash is 4dd39364
-- Autodetected CUDA architecture(s): 7.0 7.0 7.0 7.0 7.0 7.0 7.0 7.0
-- Added CUDA NVCC flags for: -gencode;arch=compute_70,code=sm_70
-- Found Torch: /workspace/nerf/f2-nerf/External/libtorch/lib/libtorch.so
-- Configuring done
CMake Error: Error required internal CMake variable not set, cmake may not be built correctly.
Missing variable is:
_CMAKE_CUDA_WHOLE_FLAG
CMake Error: Error required internal CMake variable not set, cmake may not be built correctly.
Missing variable is:
CMAKE_CUDA_COMPILE_OBJECT
CMake Error: Error required internal CMake variable not set, cmake may not be built correctly.
Missing variable is:
_CMAKE_CUDA_WHOLE_FLAG
CMake Error: Error required internal CMake variable not set, cmake may not be built correctly.
Missing variable is:
CMAKE_CUDA_COMPILE_OBJECT
CMake Error: Error required internal CMake variable not set, cmake may not be built correctly.
Missing variable is:
_CMAKE_CUDA_WHOLE_FLAG
CMake Error: Error required internal CMake variable not set, cmake may not be built correctly.
Missing variable is:
CMAKE_CUDA_COMPILE_OBJECT
CMake Error: Error required internal CMake variable not set, cmake may not be built correctly.
Missing variable is:
_CMAKE_CUDA_WHOLE_FLAG
CMake Error: Error required internal CMake variable not set, cmake may not be built correctly.
Missing variable is:
CMAKE_CUDA_COMPILE_OBJECT
CMake Error: Error required internal CMake variable not set, cmake may not be built correctly.
Missing variable is:
_CMAKE_CUDA_WHOLE_FLAG
CMake Error: Error required internal CMake variable not set, cmake may not be built correctly.
Missing variable is:
CMAKE_CUDA_COMPILE_OBJECT
CMake Error: Error required internal CMake variable not set, cmake may not be built correctly.
Missing variable is:
_CMAKE_CUDA_WHOLE_FLAG
CMake Error: Error required internal CMake variable not set, cmake may not be built correctly.
Missing variable is:
CMAKE_CUDA_COMPILE_OBJECT
CMake Error: Error required internal CMake variable not set, cmake may not be built correctly.
Missing variable is:
_CMAKE_CUDA_WHOLE_FLAG
CMake Error: Error required internal CMake variable not set, cmake may not be built correctly.
Missing variable is:
CMAKE_CUDA_COMPILE_OBJECT
CMake Error: Error required internal CMake variable not set, cmake may not be built correctly.
Missing variable is:
_CMAKE_CUDA_WHOLE_FLAG
CMake Error: Error required internal CMake variable not set, cmake may not be built correctly.
Missing variable is:
CMAKE_CUDA_COMPILE_OBJECT
-- Generating done
CMake Generate step failed. Build files cannot be regenerated correctly.

What does "Run F2-NeRF" mean?

Hey,

I am just a bit confused.

Do you mean: python scripts/run.py --config-name=wanjinyou dataset_name=example case_name=ngp_fox mode=test is_continue=true ?

Something like this?

Can you give more details about using custom data :) ?

Thanks

Got this error during training: terminate called after throwing an instance of 'c10::Error'

I encountered an issue while running the command

python scripts/run.py --config-name=wanjinyou \
dataset_name=example case_name=ngp_fox mode=train \
+work_dir=$(pwd)
terminate called after throwing an instance of 'c10::Error'
  what():  Trying to create tensor with negative dimension -2036932816: [-2036932816, 10249059, 32766]
Exception raised from check_size_nonnegative at ../aten/src/ATen/EmptyTensor.h:15 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x6b (0x7f52f04e83cb in /home/ubuntu/f2-nerf/External/libtorch/lib/libc10.so)
frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0xce (0x7f52f04e3d9e in /home/ubuntu/f2-nerf/External/libtorch/lib/libc10.so)
frame #2: at::detail::empty_generic(c10::ArrayRef<long>, c10::Allocator*, c10::DispatchKeySet, c10::ScalarType, c10::optional<c10::MemoryFormat>) + 0x8ec (0x7f52c03c3dcc in /home/ubuntu/f2-nerf/External/libtorch/lib/libtorch_cpu.so)
frame #3: at::detail::empty_cpu(c10::ArrayRef<long>, c10::ScalarType, bool, c10::optional<c10::MemoryFormat>) + 0x55 (0x7f52c03c4e75 in /home/ubuntu/f2-nerf/External/libtorch/lib/libtorch_cpu.so)
frame #4: at::detail::empty_cpu(c10::ArrayRef<long>, c10::optional<c10::ScalarType>, c10::optional<c10::Layout>, c10::optional<c10::Device>, c10::optional<bool>, c10::optional<c10::MemoryFormat>) + 0x49 (0x7f52c03c4ef9 in /home/ubuntu/f2-nerf/External/libtorch/lib/libtorch_cpu.so)
frame #5: at::native::empty_cpu(c10::ArrayRef<long>, c10::optional<c10::ScalarType>, c10::optional<c10::Layout>, c10::optional<c10::Device>, c10::optional<bool>, c10::optional<c10::MemoryFormat>) + 0x33 (0x7f52c0ab1f63 in /home/ubuntu/f2-nerf/External/libtorch/lib/libtorch_cpu.so)
frame #6: <unknown function> + 0x249be31 (0x7f52c152ee31 in /home/ubuntu/f2-nerf/External/libtorch/lib/libtorch_cpu.so)
frame #7: at::_ops::empty_memory_format::redispatch(c10::DispatchKeySet, c10::ArrayRef<c10::SymInt>, c10::optional<c10::ScalarType>, c10::optional<c10::Layout>, c10::optional<c10::Device>, c10::optional<bool>, c10::optional<c10::MemoryFormat>) + 0x1b7 (0x7f52c11ec107 in /home/ubuntu/f2-nerf/External/libtorch/lib/libtorch_cpu.so)
frame #8: <unknown function> + 0x2473f98 (0x7f52c1506f98 in /home/ubuntu/f2-nerf/External/libtorch/lib/libtorch_cpu.so)
frame #9: at::_ops::empty_memory_format::call(c10::ArrayRef<c10::SymInt>, c10::optional<c10::ScalarType>, c10::optional<c10::Layout>, c10::optional<c10::Device>, c10::optional<bool>, c10::optional<c10::MemoryFormat>) + 0x15f (0x7f52c12274cf in /home/ubuntu/f2-nerf/External/libtorch/lib/libtorch_cpu.so)
frame #10: <unknown function> + 0xff531 (0x55f69fb79531 in /home/ubuntu/f2-nerf/build/main)
frame #11: <unknown function> + 0xfef81 (0x55f69fb78f81 in /home/ubuntu/f2-nerf/build/main)
frame #12: <unknown function> + 0x97b8f (0x55f69fb11b8f in /home/ubuntu/f2-nerf/build/main)
frame #13: <unknown function> + 0x1357f1 (0x55f69fbaf7f1 in /home/ubuntu/f2-nerf/build/main)
frame #14: <unknown function> + 0x51c01 (0x55f69facbc01 in /home/ubuntu/f2-nerf/build/main)
frame #15: __libc_start_main + 0xf3 (0x7f529182f083 in /lib/x86_64-linux-gnu/libc.so.6)
frame #16: <unknown function> + 0x5f5be (0x55f69fad95be in /home/ubuntu/f2-nerf/build/main)

Aborted (core dumped)

It looks like an incorrect CUDA setup or something...

My environment:
Ubuntu 20.04
A100 GPU
native CUDA 11.6 (the one nvcc links to; does this matter? I don't quite understand whether libtorch uses its own built-in CUDA)
CUDNN 8.0
torch.__version__ '1.13.1+cu117'

on fopen: , file path: /data01/pot/exp/f2-nerf/exp/ngp_fox/test/checkpoints/latest/scalars.pt

I compiled successfully, but got an error when running python scripts/run.py --config-name=wanjinyou dataset_name=example case_name=ngp_fox mode=test is_continue=true +work_dir=$(pwd).
Here is the output up to the failure:

Working directory is /data01/pot/exp/f2-nerf
register Hash3DAnchoredInfo
register TCNNWPInfo
register volume render info
Aoligei!
[Dataset::Dataset] begin
Load render poses
[LoadImages] begin
[LoadImages] end in 0.398194 seconds
Number of train/test/val images: 43/7/0
[Dataset::Dataset] end in 2.81254 seconds
[PersSampler::PersSampler] begin
[PersOctree::PersOctree] begin
[PersOctree::ConstructEdgePool] begin
edge_pool_.size() is value 1131
[PersOctree::ConstructEdgePool] end in 0.00137949 seconds
[PersOctree::PersOctree] end in 3.6452 seconds
[PersSampler::PersSampler] end in 3.66801 seconds
[Hash3DAnchored::Hash3DAnchored] begin
[Hash3DAnchored::Hash3DAnchored] end in 1.33869 seconds
terminate called after throwing an instance of 'c10::Error'
  what():  open file failed because of errno 2 on fopen: , file path: /data01/pot/exp/f2-nerf/exp/ngp_fox/test/checkpoints/latest/scalars.pt
Exception raised from RAIIFile at ../caffe2/serialize/file_adapter.cc:27 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x6b (0x151f312613cb in /data01/pot/exp/f2-nerf/External/libtorch/lib/libc10.so)
frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0xce (0x151f3125cd9e in /data01/pot/exp/f2-nerf/External/libtorch/lib/libc10.so)
frame #2: caffe2::serialize::FileAdapter::RAIIFile::RAIIFile(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0x134 (0x151f034e5e94 in /data01/pot/exp/f2-nerf/External/libtorch/lib/libtorch_cpu.so)
frame #3: caffe2::serialize::FileAdapter::FileAdapter(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0x41 (0x151f034e6511 in /data01/pot/exp/f2-nerf/External/libtorch/lib/libtorch_cpu.so)
frame #4: caffe2::serialize::PyTorchStreamReader::PyTorchStreamReader(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0x7f (0x151f034e396f in /data01/pot/exp/f2-nerf/External/libtorch/lib/libtorch_cpu.so)
frame #5: torch::jit::import_ir_module(std::shared_ptr<torch::jit::CompilationUnit>, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, c10::optional<c10::Device>, std::unordered_map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::hash<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::equal_to<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > >&) + 0x294 (0x151f04634bd4 in /data01/pot/exp/f2-nerf/External/libtorch/lib/libtorch_cpu.so)
frame #6: torch::jit::import_ir_module(std::shared_ptr<torch::jit::CompilationUnit>, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, c10::optional<c10::Device>) + 0x94 (0x151f046354a4 in /data01/pot/exp/f2-nerf/External/libtorch/lib/libtorch_cpu.so)
frame #7: torch::jit::load(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, c10::optional<c10::Device>) + 0x76 (0x151f04635576 in /data01/pot/exp/f2-nerf/External/libtorch/lib/libtorch_cpu.so)
frame #8: torch::serialize::InputArchive::load_from(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, c10::optional<c10::Device>) + 0x2a (0x151f04d522fa in /data01/pot/exp/f2-nerf/External/libtorch/lib/libtorch_cpu.so)
frame #9: <unknown function> + 0x13c466 (0x55e405470466 in /data01/pot/exp/f2-nerf/build/main)
frame #10: <unknown function> + 0x13196d (0x55e40546596d in /data01/pot/exp/f2-nerf/build/main)
frame #11: <unknown function> + 0x135b21 (0x55e405469b21 in /data01/pot/exp/f2-nerf/build/main)
frame #12: <unknown function> + 0x51c01 (0x55e405385c01 in /data01/pot/exp/f2-nerf/build/main)
frame #13: __libc_start_main + 0xf3 (0x151ed1eb7083 in /lib/x86_64-linux-gnu/libc.so.6)
frame #14: <unknown function> + 0x5f5be (0x55e4053935be in /data01/pot/exp/f2-nerf/build/main)

A CMake error during compilation

!cmake . -B build
!cmake --build build --target main --config RelWithDebInfo -j


Killed
make[3]: *** [CMakeFiles/main.dir/build.make:234: CMakeFiles/main.dir/src/Renderer/Renderer.cu.o] Error 137

Killed
make[3]: *** [CMakeFiles/main.dir/build.make:433: CMakeFiles/main.dir/src/Utils/CustomOps/FlexOps.cu.o] Error 137
Killed
make[3]: *** [CMakeFiles/main.dir/build.make:404: CMakeFiles/main.dir/src/Utils/CustomOps/CustomOps.cu.o] Error 137
Killed
make[3]: *** [CMakeFiles/main.dir/build.make:119: CMakeFiles/main.dir/src/Field/Hash3DAnchored.cu.o] Error 137
/content/f2-nerf/src/PtsSampler/PtsSamplerFactory.cpp: In function ‘std::unique_ptr<PtsSampler> ConstructPtsSampler(GlobalDataPool*)’:
/content/f2-nerf/src/PtsSampler/PtsSamplerFactory.cpp:8:80: warning: control reaches end of non-void function [-Wreturn-type]
    8 |   auto type = global_data_pool->config_["pts_sampler"]["type"].as<std::string>();
      |                                                                                ^
/content/f2-nerf/src/Shader/../Field/TCNNWP.h(11): warning #611-D: overloaded virtual function "Field::Query" is only partially overridden in class "TCNNWP"

make[2]: *** [CMakeFiles/Makefile2:187: CMakeFiles/main.dir/all] Error 2
make[1]: *** [CMakeFiles/Makefile2:194: CMakeFiles/main.dir/rule] Error 2
make: *** [Makefile:125: main] Error 2

test psnr is low

Hi, thanks for your amazing work. When I trained a model using my custom dataset, the PSNR on the training set is 37, but the PSNR on the test/validation set is 10. I have checked and not found any problems. Can you help me analyze the problem?

Cmake error

Running cmake --build build --target main --config RelWithDebInfo -j, I got the following error:

/usr/bin/ld: CMakeFiles/main.dir/src/Field/TCNNWP.cpp.o: in function `TCNNWP::TCNNWP(GlobalDataPool*, int, int, int, int)':
TCNNWP.cpp:(.text+0x37c3): undefined reference to `tcnn::cpp::create_network(unsigned int, unsigned int, nlohmann::basic_json<std::map, std::vector, std::string, bool, long, unsigned long, double, std::allocator, nlohmann::adl_serializer, std::vector<unsigned char, std::allocator<unsigned char> > > const&)'
collect2: error: ld returned 1 exit status
make[3]: *** [CMakeFiles/main.dir/build.make:571: main] Error 1
make[2]: *** [CMakeFiles/Makefile2:188: CMakeFiles/main.dir/all] Error 2
make[1]: *** [CMakeFiles/Makefile2:195: CMakeFiles/main.dir/rule] Error 2
make: *** [Makefile:125: main] Error 2

Any idea?

Want tips for tuning parameters for large scenes & anti-aliasing

Hello, I am wondering, if I use f2-NeRF on a large-scale dataset, which parameters should be tuned and how (e.g., increased or reduced)?

Also, in my results, thin structures are often missing, like in the image below. Is this caused by aliasing?

[rendered image]

In your future work section, I see you plan to add anti-aliasing. How do you plan to add it? Like Zip-NeRF?

Export mesh?

Hi, and thank you for sharing this code.
Is there any functionality to export a textured mesh after training?

Thanks!

Getting NaN!

Hi @Totoro97,

Thank you for your great work.
I tried to train your code with my own dataset.

python scripts/run.py --config-name=wanjinyou
dataset_name= case_name= mode=train
+work_dir=$(pwd)

But I got NaN!
Can you help me? How do I modify the configuration in wanjinyou to overcome the NaN?

Thank you

blocky blurred area in demo

In the demo video you shared, I found that there are blocky blurred areas in some frames. The problem was even more pronounced when testing on my own dataset collected outdoors. What could be the reason for this? Thank you so much for your work!

On my datasets:

edc67185816db979af77755295cb6241.mp4
