
monosdf's Introduction

MonoSDF: Exploring Monocular Geometric Cues for Neural Implicit Surface Reconstruction

Zehao Yu · Songyou Peng · Michael Niemeyer · Torsten Sattler · Andreas Geiger

NeurIPS 2022


We demonstrate that state-of-the-art depth and normal cues extracted from monocular images are complementary to reconstruction cues and hence significantly improve the performance of implicit surface reconstruction methods.


Update

MonoSDF is integrated into SDFStudio, where monocular depth and normal cues can also be applied to UniSurf and NeuS. Please check it out.

Setup

Installation

Clone the repository and create an anaconda environment called monosdf using

git clone git@github.com:autonomousvision/monosdf.git
cd monosdf

conda create -y -n monosdf python=3.8
conda activate monosdf

conda install pytorch torchvision cudatoolkit=11.3 -c pytorch
conda install cudatoolkit-dev=11.3 -c conda-forge

pip install -r requirements.txt

The hash encoder will be compiled on the fly when running the code.
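If the on-the-fly build fails, the cause is usually the CUDA toolchain rather than the repository. A quick sanity check (hypothetical, not part of the repo) that PyTorch can see both a GPU and a CUDA toolkit for JIT builds:

# Hypothetical sanity check: verify the pieces that on-the-fly CUDA
# extension compilation depends on.
import torch
from torch.utils.cpp_extension import CUDA_HOME

assert torch.cuda.is_available(), "no CUDA device visible to PyTorch"
print("torch", torch.__version__, "built against CUDA", torch.version.cuda)
print("CUDA_HOME used for JIT builds:", CUDA_HOME)  # should point to a toolkit containing nvcc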

Dataset

To download the preprocessed data, run the following script. The data for DTU, Replica, and Tanks and Temples is adapted from VolSDF, Nice-SLAM, and Vis-MVSNet, respectively.

bash scripts/download_dataset.sh

Training

Run the following command to train monosdf:

cd ./code
CUDA_VISIBLE_DEVICES=0 python -m torch.distributed.launch --nproc_per_node 1 --nnodes=1 --node_rank=0 training/exp_runner.py --conf CONFIG  --scan_id SCAN_ID

where CONFIG is a config file in code/confs and SCAN_ID is the ID of the scene to reconstruct.

We provide example commands for training on the DTU, ScanNet, and Replica datasets:

# DTU scan65
CUDA_VISIBLE_DEVICES=0 python -m torch.distributed.launch --nproc_per_node 1 --nnodes=1 --node_rank=0 training/exp_runner.py --conf confs/dtu_mlp_3views.conf  --scan_id 65

# ScanNet scan 1 (scene_0050_00)
CUDA_VISIBLE_DEVICES=0 python -m torch.distributed.launch --nproc_per_node 1 --nnodes=1 --node_rank=0 training/exp_runner.py --conf confs/scannet_mlp.conf  --scan_id 1

# Replica scan 1 (room0)
CUDA_VISIBLE_DEVICES=0 python -m torch.distributed.launch --nproc_per_node 1 --nnodes=1 --node_rank=0 training/exp_runner.py --conf confs/replica_mlp.conf  --scan_id 1

We provide individual config files for the Tanks and Temples dataset, so you don't need to set scan_id. Run training on the courtroom scene as:

CUDA_VISIBLE_DEVICES=0 python -m torch.distributed.launch --nproc_per_node 1 --nnodes=1 --node_rank=0 training/exp_runner.py --conf confs/tnt_mlp_1.conf

We also generated high-resolution monocular cues for the courtroom scene; it's better to train with them on multiple GPUs. First, download the dataset:

bash scripts/download_highres_TNT.sh

Then run training with 8 GPUs:

CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python -m torch.distributed.launch --nproc_per_node 8 --nnodes=1 --node_rank=0 training/exp_runner.py --conf confs/tnt_highres_grids_courtroom.conf

Of course, you can also train all other scenes with multiple GPUs.
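For reference, a minimal sketch (names hypothetical, not the repo's actual code) of the setup torch.distributed.launch expects inside the training script: one process per GPU, each pinned to its local rank.

import os
import torch
import torch.distributed as dist

# torch.distributed.launch starts nproc_per_node processes and provides
# the local rank to each of them (via LOCAL_RANK or --local_rank).
local_rank = int(os.environ.get("LOCAL_RANK", 0))
torch.cuda.set_device(local_rank)          # pin this process to one GPU
dist.init_process_group(backend="nccl")    # NCCL backend for multi-GPU training
print(f"rank {dist.get_rank()} of {dist.get_world_size()}")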

Evaluations

DTU

First, download the ground truth DTU point clouds:

bash scripts/download_dtu_ground_truth.sh

Then you can evaluate the quality of the extracted meshes (take scan 65 as an example):

python evaluate_single_scene.py --input_mesh scan65_mesh.ply --scan_id 65 --output_dir dtu_scan65

We also provide a script for evaluating all DTU scenes:

python evaluate.py

Evaluation results are saved to evaluation/DTU.csv by default; please check the script for more details.
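For intuition, the core quantity reported by DTU-style evaluation is a Chamfer distance between predicted and ground-truth point clouds. A bare-bones sketch, omitting the visibility masks, downsampling, and outlier filtering the real script applies:

import numpy as np
from scipy.spatial import cKDTree

def chamfer_distance(pred_pts, gt_pts):
    # mean of accuracy (pred -> gt) and completeness (gt -> pred)
    acc, _ = cKDTree(gt_pts).query(pred_pts)   # distance of each predicted point to GT
    comp, _ = cKDTree(pred_pts).query(gt_pts)  # distance of each GT point to prediction
    return 0.5 * (acc.mean() + comp.mean())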

Replica

Evaluate one scene (take scan 1, room0, as an example):

cd replica_eval
python evaluate_single_scene.py --input_mesh replica_scan1_mesh.ply --scan_id 1 --output_dir replica_scan1

We also provide a script for evaluating all Replica scenes:

cd replica_eval
python evaluate.py

Please check the script for more details.

ScanNet

cd scannet_eval
python evaluate.py

Please check the script for more details.

Tanks and Temples

You need to submit the reconstruction results to the official evaluation server; please follow their guidance. We also provide an example of our submission here for reference.

Custom dataset

We provide an example of how to train monosdf on custom data (the Apartment scene from Nice-SLAM). First, download the dataset and run the script to subsample training images, normalize camera poses, etc.:

bash scripts/download_apartment.sh 
cd preprocess
python nice_slam_apartment_to_monosdf.py
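Pose normalization matters because MonoSDF assumes the scene lies in a bounded region around the origin. A minimal sketch of the idea, assuming (N, 4, 4) camera-to-world matrices (the actual preprocessing script may differ in details):

import numpy as np

def normalize_poses(c2w, radius=3.0):
    # center the camera trajectory at the origin and scale it to fit
    # inside a sphere of the given radius
    centers = c2w[:, :3, 3]
    offset = centers.mean(axis=0)
    scale = radius / np.linalg.norm(centers - offset, axis=1).max()
    out = c2w.copy()
    out[:, :3, 3] = (centers - offset) * scale
    return out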

Then extract monocular depth and normal cues (please install the Omnidata model before running these commands):

python extract_monocular_cues.py --task depth --img_path ../data/Apartment/scan1/image --output_path ../data/Apartment/scan1 --omnidata_path YOUR_OMNIDATA_PATH --pretrained_models PRETRAINED_MODELS
python extract_monocular_cues.py --task normal --img_path ../data/Apartment/scan1/image --output_path ../data/Apartment/scan1 --omnidata_path YOUR_OMNIDATA_PATH --pretrained_models PRETRAINED_MODELS
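A quick way to sanity-check the extracted cues (the file names below are an assumption; adapt them to whatever extract_monocular_cues.py writes in your setup):

import numpy as np

# hypothetical file names; Omnidata depth is relative, not metric
depth = np.load("../data/Apartment/scan1/000000_depth.npy")
normal = np.load("../data/Apartment/scan1/000000_normal.npy")
print("depth:", depth.shape, float(depth.min()), float(depth.max()))
print("normal:", normal.shape)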

Finally, train monosdf with:

CUDA_VISIBLE_DEVICES=0 python -m torch.distributed.launch --nproc_per_node 1 --nnodes=1 --node_rank=0 training/exp_runner.py --conf confs/nice_slam_grids.conf

Pretrained Models

First download the pretrained models with

bash scripts/download_pretrained.sh

Then you can run inference (taking DTU as an example) with:

cd code
python evaluation/eval.py --conf confs/dtu_mlp_3views.conf --checkpoint ../pretrained_models/dtu_3views_mlp/scan65.pth --scan_id 65 --resolution 512 --eval_rendering --evals_folder ../pretrained_results
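For context, --resolution controls the grid on which the SDF is sampled before the mesh is extracted with marching cubes. A simplified sketch of that idea (sdf_fn stands in for the trained network and is hypothetical; the real code evaluates the grid in chunks to save memory):

import numpy as np
import trimesh
from skimage import measure

def extract_mesh(sdf_fn, res=512, bound=1.0):
    xs = np.linspace(-bound, bound, res)
    grid = np.stack(np.meshgrid(xs, xs, xs, indexing="ij"), axis=-1)
    sdf = sdf_fn(grid.reshape(-1, 3)).reshape(res, res, res)
    verts, faces, _, _ = measure.marching_cubes(sdf, level=0.0)
    verts = verts / (res - 1) * 2 * bound - bound  # voxel indices -> world coordinates
    return trimesh.Trimesh(verts, faces)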

You can also run the following script to extract all the meshes:

python scripts/extract_all_meshes_from_pretrained_models.py

High-resolution Cues

Here we provide the scripts to generate high-resolution cues and to train with them. Please refer to our supplementary material for more details.

First, download the Tanks and Temples dataset from here and unzip it to data/tanksandtemples. Then run the script to create overlapping patches:

cd preprocess
python generate_high_res_map.py --mode create_patches
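The idea behind create_patches, in a simplified sketch (patch size and stride are illustrative; Omnidata consumes fixed 384x384 inputs):

import numpy as np

def create_patches(img, size=384, stride=192):
    # img: (H, W, 3); returns overlapping crops plus their top-left coordinates
    H, W = img.shape[:2]
    patches, coords = [], []
    for y in range(0, H - size + 1, stride):
        for x in range(0, W - size + 1, stride):
            patches.append(img[y:y + size, x:x + size])
            coords.append((y, x))
    return np.stack(patches), coords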

Next, run the Omnidata model to predict monocular cues for each patch:

python extract_monocular_cues.py --task depth --img_path ./highres_tmp/scan1/image/ --output_path ./highres_tmp/scan1 --omnidata_path YOUR_OMNIDATA_PATH --pretrained_models PRETRAINED_MODELS
python extract_monocular_cues.py --task normal --img_path ./highres_tmp/scan1/image/ --output_path ./highres_tmp/scan1 --omnidata_path YOUR_OMNIDATA_PATH --pretrained_models PRETRAINED_MODELS

This step takes a long time (~2 hours) since there are many patches and the model only uses a batch size of 1.

Then run the script again to merge the Omnidata outputs:

python generate_high_res_map.py --mode merge_patches
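The merging step, in a simplified sketch: average the per-patch predictions in their overlap regions. The actual script additionally has to keep the patches consistent with each other (monocular predictions are only defined up to scale and shift); see the supplementary material for the exact procedure.

import numpy as np

def merge_patches(patches, coords, full_shape, size=384):
    acc = np.zeros(full_shape, dtype=np.float64)
    weight = np.zeros(full_shape, dtype=np.float64)
    for patch, (y, x) in zip(patches, coords):
        acc[y:y + size, x:x + size] += patch
        weight[y:y + size, x:x + size] += 1.0
    return acc / np.maximum(weight, 1e-8)  # average where patches overlap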

Now you can train the model with

CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python -m torch.distributed.launch --nproc_per_node 8 --nnodes=1 --node_rank=0 training/exp_runner.py --conf confs/tnt_highres_grids_courtroom.conf

Please note that the script for generating high-resolution cues only works for the Tanks and Temples dataset; you need to adapt it if you want to apply it to other datasets.

Acknowledgements

This project is built upon VolSDF. We use the pretrained Omnidata model for monocular depth and normal extraction. The CUDA implementation of multi-resolution hash encoding is based on torch-ngp. The evaluation scripts for DTU, Replica, and ScanNet are taken from DTUeval-python, Nice-SLAM, and manhattan-sdf, respectively. We thank all the authors for their great work and repos.

Citation

If you find our code or paper useful, please cite

@article{Yu2022MonoSDF,
  author    = {Yu, Zehao and Peng, Songyou and Niemeyer, Michael and Sattler, Torsten and Geiger, Andreas},
  title     = {MonoSDF: Exploring Monocular Geometric Cues for Neural Implicit Surface Reconstruction},
  journal   = {Advances in Neural Information Processing Systems (NeurIPS)},
  year      = {2022},
}

monosdf's Issues

Question about paper

Hi, I have some doubts about Equation (8) in the paper. I can't find a similar calculation method in reference [75]. Could you please give me some tips? Thank you.

Has the training script been tried on Windows?

There are a lot of blockers with the distributed training when it comes to a windows machine.

  • backend needs to be set to gloo
  • Ninja needs to be installed separately
  • local_rank is missing from the sample training command

As of now, I have not managed to progress further with the training, as there seems to be another error.

Please excuse my ignorance if I have missed something obvious. Your guidance will be invaluable to me. Thank you.

Edit: It might be a problem with my torch installation, so I'll repeat the steps on Linux as well. Sorry for the inconvenience.

Single-res grids model and config files

Thank you so much for sharing such interesting work!
I found some experiments on Single-res grids in the paper, and I would like to reproduce these results. Can you share the model and configuration files?

Replica dataset camera path

Hi, thanks for a great work!
The Replica dataset is used in the experiments, so I was wondering whether you will publish the camera path for each scene?

Code release

Hi,
Thanks for sharing your great work! When will you release the code?

Question about the normal computed in Eq. (10) in the paper

Hi,

Before all, thank you for your inspiring work!

I would like to ask some questions about the \hat{N}(r) computed in equation (10).
\hat{N}(r) = \sum_{i=1}^{M} T_r^i \alpha_r^i \hat{n}_r^i    (10)

If I understand it correctly, the directions of the normal vectors \hat{n}_r^i calculated at the sampling points on a ray might differ from each other. Why, then, can the volume rendering formula used to calculate \hat{N}(r) represent the normal of the surface intersected by the ray?

Thank you for your time!

RuntimeError: Error building extension '_hash_encoder' (Has referred to issue#19)

Hi, thanks for your wonderful work!

When running the command to train monosdf, the following error is reported. I'm running it on an Ubuntu 20.04 machine.

I have looked at issue #19 and commented out the line, but the error still remains (as shown below).

I have installed cudatoolkit-11.3 and cudatoolkit-dev-11.3; you can refer to the conda list below.

The error log:

(nf22) rayne@phil-OMEN-by-HP-45L-Gaming-Desktop-GT22-0xxx:~/code/monosdf/code$ CUDA_VISIBLE_DEVICES=0 python -m torch.distributed.launch --nproc_per_node 1 --nnodes=1 --node_rank=0 training/exp_runner.py --conf confs/dtu_mlp_3views.conf  --scan_id 65
/home/rayne/anaconda3/envs/nf22/lib/python3.8/site-packages/torch/distributed/launch.py:178: FutureWarning: The module torch.distributed.launch is deprecated
and will be removed in future. Use torchrun.
Note that --use_env is set by default in torchrun.
If your script expects `--local_rank` argument to be set, please
change it to read from `os.environ['LOCAL_RANK']` instead. See 
https://pytorch.org/docs/stable/distributed.html#launch-utility for 
further instructions

  warnings.warn(
RANK and WORLD_SIZE in environ: 0/1
0
shell command : training/exp_runner.py --local_rank=0 --conf confs/dtu_mlp_3views.conf --scan_id 65
Loading data ...
Finish loading data. Data-set size: 49
Detected CUDA files, patching ldflags
Emitting ninja build file ./tmp_build/build.ninja...
Building extension module _hash_encoder...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
[1/3] /home/rayne/anaconda3/envs/nf22/bin/nvcc  -DTORCH_EXTENSION_NAME=_hash_encoder -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -isystem /home/rayne/anaconda3/envs/nf22/lib/python3.8/site-packages/torch/include -isystem /home/rayne/anaconda3/envs/nf22/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -isystem /home/rayne/anaconda3/envs/nf22/lib/python3.8/site-packages/torch/include/TH -isystem /home/rayne/anaconda3/envs/nf22/lib/python3.8/site-packages/torch/include/THC -isystem /home/rayne/anaconda3/envs/nf22/include -isystem /home/rayne/anaconda3/envs/nf22/include/python3.8 -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_86,code=compute_86 -gencode=arch=compute_86,code=sm_86 --compiler-options '-fPIC' -O3 -std=c++14 -allow-unsupported-compiler -U__CUDA_NO_HALF_OPERATORS__ -U__CUDA_NO_HALF_CONVERSIONS__ -U__CUDA_NO_HALF2_OPERATORS__ -c /home/rayne/code/monosdf/code/hashencoder/src/hashencoder.cu -o hashencoder.cuda.o 
FAILED: hashencoder.cuda.o 
/home/rayne/anaconda3/envs/nf22/bin/nvcc  -DTORCH_EXTENSION_NAME=_hash_encoder -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -isystem /home/rayne/anaconda3/envs/nf22/lib/python3.8/site-packages/torch/include -isystem /home/rayne/anaconda3/envs/nf22/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -isystem /home/rayne/anaconda3/envs/nf22/lib/python3.8/site-packages/torch/include/TH -isystem /home/rayne/anaconda3/envs/nf22/lib/python3.8/site-packages/torch/include/THC -isystem /home/rayne/anaconda3/envs/nf22/include -isystem /home/rayne/anaconda3/envs/nf22/include/python3.8 -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_86,code=compute_86 -gencode=arch=compute_86,code=sm_86 --compiler-options '-fPIC' -O3 -std=c++14 -allow-unsupported-compiler -U__CUDA_NO_HALF_OPERATORS__ -U__CUDA_NO_HALF_CONVERSIONS__ -U__CUDA_NO_HALF2_OPERATORS__ -c /home/rayne/code/monosdf/code/hashencoder/src/hashencoder.cu -o hashencoder.cuda.o 
In file included from /home/rayne/anaconda3/envs/nf22/include/thrust/system/cuda/detail/execution_policy.h:33:0,
                 from /home/rayne/anaconda3/envs/nf22/include/thrust/iterator/detail/device_system_tag.h:23,
                 from /home/rayne/anaconda3/envs/nf22/include/thrust/iterator/iterator_traits.h:111,
                 from /home/rayne/anaconda3/envs/nf22/include/thrust/detail/type_traits/pointer_traits.h:23,
                 from /home/rayne/anaconda3/envs/nf22/include/thrust/type_traits/is_contiguous_iterator.h:27,
                 from /home/rayne/anaconda3/envs/nf22/include/thrust/type_traits/is_trivially_relocatable.h:19,
                 from /home/rayne/anaconda3/envs/nf22/include/thrust/detail/complex/complex.inl:20,
                 from /home/rayne/anaconda3/envs/nf22/include/thrust/complex.h:1031,
                 from /home/rayne/anaconda3/envs/nf22/lib/python3.8/site-packages/torch/include/c10/util/complex.h:8,
                 from /home/rayne/anaconda3/envs/nf22/lib/python3.8/site-packages/torch/include/c10/util/Half.h:15,
                 from /home/rayne/anaconda3/envs/nf22/lib/python3.8/site-packages/torch/include/c10/core/ScalarType.h:5,
                 from /home/rayne/anaconda3/envs/nf22/lib/python3.8/site-packages/torch/include/c10/core/StorageImpl.h:4,
                 from /home/rayne/anaconda3/envs/nf22/lib/python3.8/site-packages/torch/include/c10/core/Storage.h:3,
                 from /home/rayne/anaconda3/envs/nf22/lib/python3.8/site-packages/torch/include/c10/core/TensorImpl.h:8,
                 from /home/rayne/anaconda3/envs/nf22/lib/python3.8/site-packages/torch/include/c10/core/GeneratorImpl.h:12,
                 from /home/rayne/anaconda3/envs/nf22/lib/python3.8/site-packages/torch/include/ATen/core/Generator.h:22,
                 from /home/rayne/anaconda3/envs/nf22/lib/python3.8/site-packages/torch/include/ATen/Context.h:4,
                 from /home/rayne/anaconda3/envs/nf22/lib/python3.8/site-packages/torch/include/ATen/cuda/CUDAContext.h:14,
                 from /home/rayne/code/monosdf/code/hashencoder/src/hashencoder.cu:5:
/home/rayne/anaconda3/envs/nf22/include/thrust/system/cuda/config.h:78:2: error: #error The version of CUB in your include path is not compatible with this release of Thrust. CUB is now included in the CUDA Toolkit, so you no longer need to use your own checkout of CUB. Define THRUST_IGNORE_CUB_VERSION_CHECK to ignore this.
 #error The version of CUB in your include path is not compatible with this release of Thrust. CUB is now included in the CUDA Toolkit, so you no longer need to use your own checkout of CUB. Define THRUST_IGNORE_CUB_VERSION_CHECK to ignore this.
  ^~~~~
[2/3] c++ -MMD -MF bindings.o.d -DTORCH_EXTENSION_NAME=_hash_encoder -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -isystem /home/rayne/anaconda3/envs/nf22/lib/python3.8/site-packages/torch/include -isystem /home/rayne/anaconda3/envs/nf22/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -isystem /home/rayne/anaconda3/envs/nf22/lib/python3.8/site-packages/torch/include/TH -isystem /home/rayne/anaconda3/envs/nf22/lib/python3.8/site-packages/torch/include/THC -isystem /home/rayne/anaconda3/envs/nf22/include -isystem /home/rayne/anaconda3/envs/nf22/include/python3.8 -D_GLIBCXX_USE_CXX11_ABI=0 -fPIC -std=c++14 -O3 -std=c++14 -c /home/rayne/code/monosdf/code/hashencoder/src/bindings.cpp -o bindings.o 
ninja: build stopped: subcommand failed.
Traceback (most recent call last):
  File "/home/rayne/anaconda3/envs/nf22/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1740, in _run_ninja_build
    subprocess.run(
  File "/home/rayne/anaconda3/envs/nf22/lib/python3.8/subprocess.py", line 516, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "training/exp_runner.py", line 58, in <module>
    trainrunner = MonoSDFTrainRunner(conf=opt.conf,
  File "/home/rayne/code/monosdf/code/../code/training/monosdf_train.py", line 107, in __init__
    self.model = utils.get_class(self.conf.get_string('train.model_class'))(conf=conf_model)
  File "/home/rayne/code/monosdf/code/../code/utils/general.py", line 17, in get_class
    m = __import__(module)
  File "/home/rayne/code/monosdf/code/../code/model/network.py", line 140, in <module>
    from hashencoder.hashgrid import _hash_encode, HashEncoder
  File "/home/rayne/code/monosdf/code/../code/hashencoder/__init__.py", line 1, in <module>
    from .hashgrid import HashEncoder
  File "/home/rayne/code/monosdf/code/../code/hashencoder/hashgrid.py", line 12, in <module>
    from .backend import _backend
  File "/home/rayne/code/monosdf/code/../code/hashencoder/backend.py", line 10, in <module>
    _backend = load(name='_hash_encoder',
  File "/home/rayne/anaconda3/envs/nf22/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1144, in load
    return _jit_compile(
  File "/home/rayne/anaconda3/envs/nf22/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1357, in _jit_compile
    _write_ninja_file_and_build_library(
  File "/home/rayne/anaconda3/envs/nf22/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1469, in _write_ninja_file_and_build_library
    _run_ninja_build(
  File "/home/rayne/anaconda3/envs/nf22/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1756, in _run_ninja_build
    raise RuntimeError(message) from e
RuntimeError: Error building extension '_hash_encoder'
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 1326120) of binary: /home/rayne/anaconda3/envs/nf22/bin/python
Traceback (most recent call last):
  File "/home/rayne/anaconda3/envs/nf22/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/home/rayne/anaconda3/envs/nf22/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/home/rayne/anaconda3/envs/nf22/lib/python3.8/site-packages/torch/distributed/launch.py", line 193, in <module>
    main()
  File "/home/rayne/anaconda3/envs/nf22/lib/python3.8/site-packages/torch/distributed/launch.py", line 189, in main
    launch(args)
  File "/home/rayne/anaconda3/envs/nf22/lib/python3.8/site-packages/torch/distributed/launch.py", line 174, in launch
    run(args)
  File "/home/rayne/anaconda3/envs/nf22/lib/python3.8/site-packages/torch/distributed/run.py", line 715, in run
    elastic_launch(
  File "/home/rayne/anaconda3/envs/nf22/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 131, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/home/rayne/anaconda3/envs/nf22/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 245, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError: 
============================================================
training/exp_runner.py FAILED
------------------------------------------------------------
Failures:
  <NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
  time      : 2022-11-20_14:10:06
  host      : phil-OMEN-by-HP-45L-Gaming-Desktop-GT22-0xxx
  rank      : 0 (local_rank: 0)
  exitcode  : 1 (pid: 1326120)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================

The conda list:

#
# Name                    Version                   Build  Channel
_libgcc_mutex             0.1                        main  
_openmp_mutex             5.1                       1_gnu  
absl-py                   1.3.0                    pypi_0    pypi
addict                    2.4.0                    pypi_0    pypi
anyio                     3.6.2                    pypi_0    pypi
argon2-cffi               21.3.0                   pypi_0    pypi
argon2-cffi-bindings      21.2.0                   pypi_0    pypi
asttokens                 2.1.0                    pypi_0    pypi
attrs                     22.1.0                   pypi_0    pypi
backcall                  0.2.0                    pypi_0    pypi
beautifulsoup4            4.11.1                   pypi_0    pypi
blas                      1.0                         mkl  
bleach                    5.0.1                    pypi_0    pypi
brotlipy                  0.7.0           py38h27cfd23_1003  
bzip2                     1.0.8                h7b6447c_0  
ca-certificates           2022.10.11           h06a4308_0  
cachetools                5.2.0                    pypi_0    pypi
certifi                   2022.9.24        py38h06a4308_0  
cffi                      1.15.1           py38h74dc2b5_0  
charset-normalizer        2.0.4              pyhd3eb1b0_0  
click                     8.1.3                    pypi_0    pypi
configargparse            1.5.3                    pypi_0    pypi
cryptography              38.0.1           py38h9ce1e76_0  
cudatoolkit               11.3.1               h2bc3f7f_2  
cudatoolkit-dev           11.3.1           py38h497a2fe_0    conda-forge
cycler                    0.11.0                   pypi_0    pypi
dash                      2.7.0                    pypi_0    pypi
dash-core-components      2.0.0                    pypi_0    pypi
dash-html-components      2.0.0                    pypi_0    pypi
dash-table                5.0.0                    pypi_0    pypi
debugpy                   1.6.3                    pypi_0    pypi
decorator                 5.1.1                    pypi_0    pypi
defusedxml                0.7.1                    pypi_0    pypi
entrypoints               0.4                      pypi_0    pypi
executing                 1.2.0                    pypi_0    pypi
fastjsonschema            2.16.2                   pypi_0    pypi
ffmpeg                    4.3                  hf484d3e_0    pytorch
flask                     2.2.2                    pypi_0    pypi
fonttools                 4.38.0                   pypi_0    pypi
freetype                  2.12.1               h4a9f257_0  
fvcore                    0.1.5.post20210915            py38    fvcore
giflib                    5.2.1                h7b6447c_0  
gmp                       6.2.1                h295c915_3  
gnutls                    3.6.15               he1e5248_0  
google-auth               2.14.1                   pypi_0    pypi
google-auth-oauthlib      0.4.6                    pypi_0    pypi
grpcio                    1.50.0                   pypi_0    pypi
idna                      3.4              py38h06a4308_0  
imageio                   2.16.1                   pypi_0    pypi
importlib-metadata        5.0.0                    pypi_0    pypi
importlib-resources       5.10.0                   pypi_0    pypi
intel-openmp              2021.4.0          h06a4308_3561  
iopath                    0.1.9                      py38    iopath
ipykernel                 6.17.1                   pypi_0    pypi
ipython                   8.6.0                    pypi_0    pypi
ipython-genutils          0.2.0                    pypi_0    pypi
ipywidgets                8.0.2                    pypi_0    pypi
itsdangerous              2.1.2                    pypi_0    pypi
jedi                      0.18.1                   pypi_0    pypi
jinja2                    3.1.2                    pypi_0    pypi
joblib                    1.1.0                    pypi_0    pypi
jpeg                      9e                   h7f8727e_0  
jsonschema                4.17.0                   pypi_0    pypi
jupyter                   1.0.0                    pypi_0    pypi
jupyter-client            7.4.7                    pypi_0    pypi
jupyter-console           6.4.4                    pypi_0    pypi
jupyter-core              5.0.0                    pypi_0    pypi
jupyter-server            1.23.2                   pypi_0    pypi
jupyterlab-pygments       0.2.2                    pypi_0    pypi
jupyterlab-widgets        3.0.3                    pypi_0    pypi
kiwisolver                1.4.4                    pypi_0    pypi
kornia                    0.6.4                    pypi_0    pypi
lame                      3.100                h7b6447c_0  
lcms2                     2.12                 h3be6417_0  
ld_impl_linux-64          2.38                 h1181459_1  
lerc                      3.0                  h295c915_0  
libdeflate                1.8                  h7f8727e_5  
libffi                    3.4.2                h295c915_4  
libgcc-ng                 11.2.0               h1234567_1  
libgomp                   11.2.0               h1234567_1  
libiconv                  1.16                 h7f8727e_2  
libidn2                   2.3.2                h7f8727e_0  
libpng                    1.6.37               hbc83047_0  
libstdcxx-ng              11.2.0               h1234567_1  
libtasn1                  4.16.0               h27cfd23_0  
libtiff                   4.4.0                hecacb30_2  
libunistring              0.9.10               h27cfd23_0  
libuv                     1.40.0               h7b6447c_0  
libwebp                   1.2.4                h11a3e52_0  
libwebp-base              1.2.4                h5eee18b_0  
lpips                     0.1.4                    pypi_0    pypi
lz4-c                     1.9.3                h295c915_1  
markdown                  3.4.1                    pypi_0    pypi
markupsafe                2.1.1                    pypi_0    pypi
matplotlib                3.5.1                    pypi_0    pypi
matplotlib-inline         0.1.6                    pypi_0    pypi
mistune                   2.0.4                    pypi_0    pypi
mkl                       2021.4.0           h06a4308_640  
mkl-service               2.4.0            py38h7f8727e_0  
mkl_fft                   1.3.1            py38hd3c417c_0  
mkl_random                1.2.2            py38h51133e4_0  
msgpack                   1.0.4                    pypi_0    pypi
msgpack-numpy             0.4.8                    pypi_0    pypi
nbclassic                 0.4.8                    pypi_0    pypi
nbclient                  0.7.0                    pypi_0    pypi
nbconvert                 7.2.5                    pypi_0    pypi
nbformat                  5.5.0                    pypi_0    pypi
ncurses                   6.3                  h5eee18b_3  
nest-asyncio              1.5.6                    pypi_0    pypi
nettle                    3.7.3                hbbd107a_1  
networkx                  2.8.8                    pypi_0    pypi
ninja                     1.11.1                   pypi_0    pypi
notebook                  6.5.2                    pypi_0    pypi
notebook-shim             0.2.2                    pypi_0    pypi
numpy                     1.21.5                   pypi_0    pypi
nvidiacub                 1.10.0                        0    bottler
oauthlib                  3.2.2                    pypi_0    pypi
open3d                    0.16.0                   pypi_0    pypi
opencv-python             4.5.5.64                 pypi_0    pypi
openh264                  2.1.1                h4ff587b_0  
openssl                   1.1.1s               h7f8727e_0  
packaging                 21.3                     pypi_0    pypi
pandas                    1.3.5                    pypi_0    pypi
pandocfilters             1.5.0                    pypi_0    pypi
parso                     0.8.3                    pypi_0    pypi
partio                    1.0.0                    pypi_0    pypi
pexpect                   4.8.0                    pypi_0    pypi
pickleshare               0.7.5                    pypi_0    pypi
pillow                    9.1.0                    pypi_0    pypi
pip                       22.2.2           py38h06a4308_0  
pkgutil-resolve-name      1.3.10                   pypi_0    pypi
platformdirs              2.5.4                    pypi_0    pypi
plotly                    5.11.0                   pypi_0    pypi
plyfile                   0.7.4                    pypi_0    pypi
portalocker               2.3.0            py38h06a4308_0  
prometheus-client         0.15.0                   pypi_0    pypi
prompt-toolkit            3.0.32                   pypi_0    pypi
protobuf                  3.19.6                   pypi_0    pypi
psutil                    5.9.4                    pypi_0    pypi
ptyprocess                0.7.0                    pypi_0    pypi
pure-eval                 0.2.2                    pypi_0    pypi
pyasn1                    0.4.8                    pypi_0    pypi
pyasn1-modules            0.2.8                    pypi_0    pypi
pycparser                 2.21               pyhd3eb1b0_0  
pygments                  2.13.0                   pypi_0    pypi
pyhocon                   0.3.59                   pypi_0    pypi
pymcubes                  0.1.2                    pypi_0    pypi
pyopenssl                 22.0.0             pyhd3eb1b0_0  
pyparsing                 2.4.7                    pypi_0    pypi
pyquaternion              0.9.9                    pypi_0    pypi
pyrsistent                0.19.2                   pypi_0    pypi
pysocks                   1.7.1            py38h06a4308_0  
python                    3.8.15               h3fd9d12_0  
python-dateutil           2.8.2                    pypi_0    pypi
python_abi                3.8                      2_cp38    conda-forge
pytorch                   1.11.0          py3.8_cuda11.3_cudnn8.2.0_0    pytorch
pytorch-mutex             1.0                        cuda    pytorch
pytorch3d                 0.7.1                     dev_0    <develop>
pytz                      2022.6                   pypi_0    pypi
pywavelets                1.4.1                    pypi_0    pypi
pyyaml                    6.0                      pypi_0    pypi
pyzmq                     24.0.1                   pypi_0    pypi
qtconsole                 5.4.0                    pypi_0    pypi
qtpy                      2.3.0                    pypi_0    pypi
readline                  8.2                  h5eee18b_0  
requests                  2.28.1           py38h06a4308_0  
requests-oauthlib         1.3.1                    pypi_0    pypi
rsa                       4.9                      pypi_0    pypi
scikit-image              0.19.2                   pypi_0    pypi
scikit-learn              1.0.2                    pypi_0    pypi
scipy                     1.7.3                    pypi_0    pypi
send2trash                1.8.0                    pypi_0    pypi
setuptools                65.5.0           py38h06a4308_0  
six                       1.16.0             pyhd3eb1b0_1  
sniffio                   1.3.0                    pypi_0    pypi
soupsieve                 2.3.2.post1              pypi_0    pypi
sqlite                    3.39.3               h5082296_0  
stack-data                0.6.1                    pypi_0    pypi
tabulate                  0.8.10           py38h06a4308_0  
tenacity                  8.1.0                    pypi_0    pypi
tensorboard               2.8.0                    pypi_0    pypi
tensorboard-data-server   0.6.1                    pypi_0    pypi
tensorboard-plugin-wit    1.8.1                    pypi_0    pypi
termcolor                 2.1.0            py38h06a4308_0  
terminado                 0.17.0                   pypi_0    pypi
threadpoolctl             3.1.0                    pypi_0    pypi
tifffile                  2022.10.10               pypi_0    pypi
tinycss2                  1.2.1                    pypi_0    pypi
tk                        8.6.12               h1ccaba5_0  
torchaudio                0.11.0               py38_cu113    pytorch
torchvision               0.12.0               py38_cu113    pytorch
tornado                   6.2                      pypi_0    pypi
tqdm                      4.64.1           py38h06a4308_0  
traitlets                 5.5.0                    pypi_0    pypi
trimesh                   3.10.8                   pypi_0    pypi
typing_extensions         4.3.0            py38h06a4308_0  
urllib3                   1.26.12          py38h06a4308_0  
wcwidth                   0.2.5                    pypi_0    pypi
webencodings              0.5.1                    pypi_0    pypi
websocket-client          1.4.2                    pypi_0    pypi
werkzeug                  2.2.2                    pypi_0    pypi
wheel                     0.37.1             pyhd3eb1b0_0  
widgetsnbextension        4.0.3                    pypi_0    pypi
xz                        5.2.6                h5eee18b_0  
yacs                      0.1.6              pyhd3eb1b0_1  
yaml                      0.2.5                h7b6447c_0  
zipp                      3.10.0                   pypi_0    pypi
zlib                      1.2.13               h5eee18b_0  
zstandard                 0.19.0                   pypi_0    pypi
zstd                      1.5.2                ha4553b6_0

Could you give me a hint about that? Thanks for your help!

About Figure 19 in the final version of the paper

Hi,

Thanks for sharing your code. I wonder what the different columns of Figure 19 mean. Are they both with depth and normal cues? The caption of Fig. 19 does not mention this.

I tried to reproduce the single-res grids with all views and without any cues, and found the surface is noisy, as shown below. Therefore, I wonder whether using single-res grids without any cues may cause this.

Regards

[screenshot of the noisy surface]

About the training time

Hi, thanks for your excellent work. When reproducing the ScanNet scene 1 results, I found that it takes more than one day to train an MLP model on a single NVIDIA RTX 3090 GPU, which is much longer than the time reported in your paper. Is this normal? I noticed that you always train for 1000 epochs, which leads to different training times for different scenes.

Code release?

Hi, congratulations on your acceptance to NeurIPS! Any idea when the code will be made available? Thanks!

Ways to reduce GPU memory

Hi,

Thanks for the great work and for sharing the code. I ran into a GPU OOM problem when training on the ScanNet / Replica datasets. I only have a 3060 with 12 GB of memory. I checked the batch size in the configuration, but it was already set to 1, so I could not reduce it further. Is there any simple way to reduce memory usage so that I can quickly run some experiments?

Higher Resolution inputs from Omnidata

Appreciate you all open-sourcing the code! I was wondering about your presentation here, which mentions using higher-resolution cues (depth/normals). Are there any plans to release the code that does the sliding window + merging to generate these higher-resolution cues?

Thanks again

Can monosdf work on a real captured scene (single object)?

Hi! Many thanks for your great work and the released code :)

I have a real captured dataset (around 200 photos) of a single object, from which I removed the background (i.e., made the background transparent).
I'm wondering whether monosdf can work on this kind of dataset too?

Could monosdf work with CUDA 11.6?

Hello! Thanks for the amazing work!

I'm wondering whether this project can work with cudatoolkit==11.6 and cudatoolkit-dev==11.6 instead of 11.3?

RuntimeError: CUDA error: CUBLAS_STATUS_INVALID_VALUE

Hi, I was trying to run some of the pretrained models, but I keep getting the following error:
[screenshot from 2022-10-11 11-52-14]
I followed all the steps in the README to create the environment and install the correct libraries. Any ideas on why this happens? Thank you in advance for the help.

About the TNT dataset download link

Thanks for your excellent work!
We ran into some trouble when downloading the TNT dataset:
[screenshot of the failing download link]

Would you please update the download link?

Good luck!

AttributeError: 'MonoSDFTrainRunner' object has no attribute 'checkpoints_path'

It seems that something goes wrong in the multi-GPU run. Training exits at epoch 660 when I run the ScanNet scan 4 dataset, and at epoch 300 when I run the ScanNet scan 1 dataset.

scannet_mlp_4_2022_10_21_21_02_11 [660] (302/303): loss = 0.08707078546285629, rgb_loss = 0.03575834631919861, eikonal_loss = 0.014714548364281654, psnr = 23.60033416748047, bete=0.0009753454942256212, alpha=1025.2777153535253
Traceback (most recent call last):
  File "training/exp_runner.py", line 71, in <module>
    trainrunner.run()
  File "/data/niuzy/Python/monosdf/code/../code/training/monosdf_train.py", line 288, in run
    self.save_checkpoints(epoch)
  File "/data/niuzy/Python/monosdf/code/../code/training/monosdf_train.py", line 170, in save_checkpoints
    os.path.join(self.checkpoints_path, self.model_params_subdir, str(epoch) + ".pth"))
AttributeError: 'MonoSDFTrainRunner' object has no attribute 'checkpoints_path'
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 1 (pid: 656559) of binary: /data/niuzy/anaconda3/envs/monosdf/bin/python
Traceback (most recent call last):
  File "/data/niuzy/anaconda3/envs/monosdf/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/data/niuzy/anaconda3/envs/monosdf/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/data/niuzy/anaconda3/envs/monosdf/lib/python3.8/site-packages/torch/distributed/launch.py", line 193, in <module>
    main()
  File "/data/niuzy/anaconda3/envs/monosdf/lib/python3.8/site-packages/torch/distributed/launch.py", line 189, in main
    launch(args)
  File "/data/niuzy/anaconda3/envs/monosdf/lib/python3.8/site-packages/torch/distributed/launch.py", line 174, in launch
    run(args)
  File "/data/niuzy/anaconda3/envs/monosdf/lib/python3.8/site-packages/torch/distributed/run.py", line 752, in run
    elastic_launch(
  File "/data/niuzy/anaconda3/envs/monosdf/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 131, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/data/niuzy/anaconda3/envs/monosdf/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 245, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
training/exp_runner.py FAILED
------------------------------------------------------------
Failures:
  <NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
  time      : 2022-10-22_11:35:19
  host      : amax
  rank      : 1 (local_rank: 1)
  exitcode  : 1 (pid: 656559)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html

High resolution real scene training

Hello, I ran into a problem when training a high-resolution NeRF on real-life images to get a better mesh: beta always became None after a few steps. My high-resolution data processing is the same as the one you provided, and my parameters are the same as in "tnt_highres_grids_courtroom.conf". Could you please give me some advice?

Question about positional encoding

Hi, I would like to know which encoding functions you implemented in your work. For the color network, is it necessary to map the position x and the view direction v to a higher-dimensional space with an encoding function?

Eikonal loss with Multi-Resolution Feature Grids

Thanks for your great work!
As far as I know, the eikonal loss requires automatic differentiation to calculate higher-order derivatives, as in VolSDF or NeuS: a second-order gradient is needed to optimize the eikonal loss, which is itself a first-order gradient. But how can the second-order gradient of the feature grid be implemented?
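For context, a minimal sketch of the standard autograd-based eikonal loss: create_graph=True keeps the computation graph, so the loss itself stays differentiable, and this is exactly where a second-order gradient must flow through the (interpolated) feature grid. The grid interpolation therefore needs a double-backward implementation.

import torch

def eikonal_loss(sdf_net, points):
    # points: (N, 3) samples in the scene bounding box
    points = points.requires_grad_(True)
    sdf = sdf_net(points)
    (grad,) = torch.autograd.grad(sdf.sum(), points, create_graph=True)
    return ((grad.norm(dim=-1) - 1.0) ** 2).mean()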

Eq. (8)

Hello,
Could you explain a bit why the density function needs to be scaled by 1/\beta in equation (8)?

\sigma_\beta(s) = \alpha \Psi_\beta(-s), with \alpha = 1/\beta, where \Psi_\beta is the CDF of the zero-mean Laplace distribution with scale \beta (the VolSDF formulation that MonoSDF adopts)

As \beta goes to zero, 1/\beta goes to infinity.

How does monosdf get the poses for the Tanks and Temples dataset?

Thank you for this great work; the results on the Tanks and Temples dataset look impressive!

Recently I tested VolSDF on this dataset using the poses provided by COLMAP, but basically could not reconstruct a reasonable shape. I noticed that the test scenes are also evaluated in your results. How does monosdf get the poses for this dataset?

Thanks for your help and looking forward to your reply

The depth and normal cues for TnT

Thanks for your amazing work!

I have a few questions about the depths and normals for the TnT dataset.

The original size of the images in TnT is [1080, 1920], while your provided images and estimated depth & normal maps are [1152, 2048]. Why do you rescale the images?

Also, I ran the script preprocess/extract_monocular_cues.py to estimate depth & normal maps, but it only works on [384, 384] images due to the limitation of Omnidata.

So, how did you obtain the depth & normal maps for the TnT dataset?

Good luck!

Multi-GPU training

Hi, thanks for your great work! I have some questions about multi-GPU training. I used the command from the README:
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python -m torch.distributed.launch --nproc_per_node 8 --nnodes=1 --node_rank=0 training/exp_runner.py --conf confs/tnt_highres_grids_courtroom.conf
but it still uses only one GPU.

looking forward to your reply, thanks! @niujinshuchong

The estimated depth results on ScanNet via the Omnidata model

Thanks for your amazing work!

As described in the paper, you leverage the Omnidata model to estimate both monocular depth and normals. We downloaded the official pretrained model, and it indeed performs well on details.
However, we found that the range of the estimated depth is very small. Here is the result for one image from scene 0616 in ScanNet:
[screenshot of the estimated depth map]

To our knowledge, the depth range in ScanNet should be roughly 0~3 m. How do you process such estimated depth?

Good luck :)
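For context, MonoSDF treats monocular depth as valid only up to an affine transform: a per-batch scale w and shift q are solved in closed form (least squares) before the rendered and monocular depths are compared, so the absolute range of the Omnidata output matters little. A minimal sketch of that alignment (names hypothetical):

import torch

def solve_scale_shift(src, tgt):
    # closed-form solution of min_{w,q} || w * src + q - tgt ||^2
    A = torch.stack([src, torch.ones_like(src)], dim=-1)     # (N, 2)
    sol = torch.linalg.lstsq(A, tgt.unsqueeze(-1)).solution  # (2, 1)
    w, q = sol[0, 0], sol[1, 0]
    return w * src + q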

Some questions about the code

Thanks for your work. I'm a beginner, and I do not understand the get_sphere_intersections function in code/utils/rend_util.py. Could you please give me some hints? How does it compute the intersections?

Problem about the Grid model and MLP model.

Hi, thanks for your excellent work!
I am reading your code and find that the MLP model defined by the config file is a little different from the one described in your paper: you set Grid_MLP = True for the MLP model, which means you use feature grids and it's not a pure MLP model, right?

depth loss

Hi, the monocular depth output is small and it's in the range of 0.01 to 0.04, and the loss will be very small so we simply scale them to some large value.

I have the same question about depth scaling. Since a scale and shift are computed in advance when the depth loss is calculated, what is the purpose of the following lines in the code?

# we should use unnormalized ray direction for depth
ray_dirs_tmp, _ = rend_util.get_camera_params(uv, torch.eye(4).to(pose.device)[None], intrinsics)
depth_scale = ray_dirs_tmp[0, :, 2:]
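# depth_scale above is presumably the z-component of the unnormalized camera-space
# ray direction; multiplying by it converts between distance along the ray and
# z-depth, making the rendered depth comparable with the monocular depth maps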
depth_values = depth_scale * depth_values

Originally posted by @LiXinghui-666 in #18 (comment)

Speeding up training time

Hi, thanks for open sourcing this amazing work.

I was wondering why it takes so long to train with the multi-resolution grids from Instant-NGP. Intuitively, I would expect it to be sort of "instant". One of the authors of Instant-NGP recently gave a talk at ECCV (https://vimeo.com/764661490) attributing the speed-up to 1) the hash encoder, 2) fully-fused neural networks (i.e., from https://github.com/NVlabs/tiny-cuda-nn), and 3) efficient rendering. You clearly have 1), but may be missing 2) and 3).

Is this the case, or am I missing something else? Do you think it's possible to achieve "instant" training of MonoSDF somehow? Thank you in advance for your answer!

evaluation result on tanks and temples dataset

Hi, thank you for sharing this great work; the meshes created by monosdf look very impressive! Have you evaluated the results on the Tanks and Temples dataset? I'd like to know the metric accuracy of monosdf.

Question about how the video clips were made

Hi, thanks for open-sourcing this amazing work. I have tested the code and the results are fine.
I have a non-algorithmic question: you showed awesome mesh reconstruction results on the project page. Which tools did you use to render those videos? The input was a mesh and the output was a video clip. @niujinshuchong

some questions about depth_loss

Thank you for your excellent work. I would like to know why this operation is performed on depth_gt when the depth loss is calculated: (depth_gt * 50 + 0.5). It corresponds to this line of code:

def get_depth_loss(self, depth_pred, depth_gt, mask):
    # TODO remove hard-coded scaling for depth
    return self.depth_loss(depth_pred.reshape(1, 32, 32), (depth_gt * 50 + 0.5).reshape(1, 32, 32), mask.reshape(1, 32, 32))

Missing data and config files for DTU

Hi, thanks for sharing such wonderful work!

It seems that some data and config files for DTU are missing, including:

  • config files for the MLP experiments (only the MLPGrid version is provided)
  • DTU_padded_highres dataset, referenced in dtu_mlp_fullres_allviews.conf

Could you share those files, too?
Thanks in advance!

RuntimeError: shape '[1, 147456, 3]' is invalid for input of size 5760000

Trying to train scan24 with the multi-res grid and all views. Training seems to get through the first epoch, then:

Traceback (most recent call last):
  File "training/exp_runner.py", line 71, in <module>
    trainrunner.run()
  File "monosdf/code/../code/training/monosdf_train.py", line 225, in run
    plot_data = self.get_plot_data(model_input, model_outputs, model_input['pose'], ground_truth['rgb'], ground_truth['normal'], ground_truth['depth'])
  File "monosdf/code/../code/training/monosdf_train.py", line 295, in get_plot_data
    rgb_eval = model_outputs['rgb_values'].reshape(batch_size, num_samples, 3)
RuntimeError: shape '[1, 147456, 3]' is invalid for input of size 5760000

Question about coordinate system

Hi,

Thanks for the great work. I have a question about the coordinate-system convention you're using: is it the same as the DTU dataset, i.e., RDF (x right, y down, z front/in)? I'm wondering what the right transformation for the Blender dataset is.

Thanks!

About scene boundary

Hi,
The depth and normal information for the floor of the scene is lost when I train on high-resolution real scene images. Could you please give me some advice?
[screenshot of the predicted normal map]

batch_size and training time

hi, thanks for your great work!

  1. Why do you use batch_size = 1 in your code? Did you try to increase the batch size? I got some errors when I increased it from 1 to 2.
  2. Can this code run on multiple GPUs to reduce the training time?
  3. Which parameters can I tune to improve the quality of the SDF and the PSNR?

looking forward to your reply! thanks! @niujinshuchong

Results of MLP and grids

Thanks for sharing the code. We tested the released pretrained weights of both the MLP and the grids model on scan 1.

However, we found that the grids model, which uses Instant-NGP, performs worse than the simple MLP:
scannet_grids.conf:
[screenshot]
scannet_mlp.conf:
[screenshot]

Is this an expected result of the paper? Can you give me some insight into these results? Thanks in advance.
