
splatam's People

Contributors

jaykarhade, nik-v9


splatam's Issues

Q about pose estimation

It appears that your diff-gaussian-rasterization-w-depth fork does not return gradients with respect to camera poses during backpropagation. Did I overlook something, or is it intentional that you are not optimizing camera poses through rasterizer gradients?
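For reference: even without pose gradients inside the rasterizer, pose optimization is possible by transforming the Gaussian centers into the camera frame with a differentiable pose before rasterization; the cam_unnorm_rots / cam_trans learning rates in the config printed elsewhere on this page suggest this is the intent. Below is a minimal sketch of that pattern (not the repo's code; all names besides those config keys are illustrative):

# Sketch: pose gradients without a pose-aware rasterizer. The camera pose
# (quaternion + translation) is a torch parameter; Gaussian centers are
# transformed into the camera frame *before* rasterization, so autograd
# reaches the pose through this transform rather than the CUDA kernel.
import torch
import torch.nn.functional as F

def quat_to_rotmat(q):
    """Convert a normalized quaternion (w, x, y, z) to a 3x3 rotation matrix."""
    w, x, y, z = q
    return torch.stack([
        torch.stack([1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)]),
        torch.stack([2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)]),
        torch.stack([2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)]),
    ])

cam_unnorm_rot = torch.nn.Parameter(torch.tensor([1., 0., 0., 0.]))  # quaternion
cam_tran = torch.nn.Parameter(torch.zeros(3))                        # translation

means3D = torch.randn(1000, 3)                      # world-frame Gaussian centers
R = quat_to_rotmat(F.normalize(cam_unnorm_rot, dim=0))
pts_cam = means3D @ R.T + cam_tran                  # differentiable w.r.t. pose

# pts_cam feeds the rasterizer; a loss on the render then backpropagates into
# cam_unnorm_rot / cam_tran even if the kernel itself has no pose gradients.
loss = pts_cam.square().mean()                      # stand-in for a render loss
loss.backward()
print(cam_unnorm_rot.grad, cam_tran.grad)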

Visualisation of SLAM on benchmarking datasets

Hi Authors, is there a way to visualize the reconstruction while SLAM runs on a benchmarking dataset? For the Replica dataset I only see the command for running splatam.py (python scripts/splatam.py configs/replica/splatam.py). By visualization I mean the GIF of camera tracking and reconstruction shown in the README.
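A partial pointer, hedged: the viz_scripts used elsewhere on this page (e.g. python viz_scripts/final_recon.py configs/replica/splatam.py) render the saved reconstruction after a run completes; whether a live in-run visualizer is also available is not confirmed here.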

Fix hardcoded absolute paths

Thank you for publishing the code! The performance looks great in the benchmarks. However, to improve reproducibility, please consider fixing the following:

  1. Hardcoded paths to your own home / data folders:

  2. The offline script mentioned in the README (bash bash_scripts/nerfcapture2dataset.bash) seems to contain parameters not supported by the Python script: https://github.com/spla-tam/SplaTAM/blob/main/bash_scripts/nerfcapture2dataset.bash. In addition, the depth data recorded by the current version of the NeRFCapture app appears to be broken due to jc211/NeRFCapture#10 (comment): it produces large upscaled RGB PNGs instead of, e.g., 16-bit grayscale depth (a quick way to check a depth PNG is sketched below).
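A minimal sketch of such a check, assuming imageio (which appears in the install logs further down this page); the path is illustrative:

# Sanity check (sketch): a usable depth frame should load as a single-channel
# 16-bit array, not an RGB image.
import imageio.v2 as imageio
import numpy as np

depth = np.asarray(imageio.imread("depth/0.png"))  # illustrative path
print(depth.dtype, depth.shape)
if depth.ndim == 3 or depth.dtype != np.uint16:
    print("Broken capture: RGB and/or 8-bit data instead of 16-bit grayscale depth.")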

Supporting No Depth Input - RGB SLAM

Hi there, first of all thanks for your great work. I installed the environment, and here is my situation:

I cannot connect my iPhone to the server because the two devices are not on the same WiFi, only on the same local network.
So I decided to capture the dataset offline with NeRFCapture, which gives me a directory of images and a transforms.json.

I then tried to adapt the dataset to fit
python scripts/splatam.py configs/iphone/splatam.py
For instance, I renamed images/0 to rgb/0.png and kept everything else the same.

But when I directly run python scripts/splatam.py configs/iphone/splatam.py, I get the following error:

  File "/root/anaconda3/envs/3dgs/lib/python3.9/site-packages/imageio/core/imopen.py", line 113, in imopen
    request = Request(uri, io_mode, format_hint=format_hint, extension=extension)
  File "/root/anaconda3/envs/3dgs/lib/python3.9/site-packages/imageio/core/request.py", line 247, in __init__
    self._parse_uri(uri)
  File "/root/anaconda3/envs/3dgs/lib/python3.9/site-packages/imageio/core/request.py", line 407, in _parse_uri
    raise FileNotFoundError("No such file: '%s'" % fn)
FileNotFoundError: No such file: '/home/xingchenzhou/code/git/SplaTAM/experiments/iPhone_Captures/offline_demo/depth/0.png'

This should not happen, because from the dataset conversion Python script it looks like depth is optional, not required.

I also tried running other code and got:

FileNotFoundError: [Errno 2] No such file or directory: '././experiments/iPhone_Captures/offline_demo/SplaTAM_iPhone/params.npz'

So, any ideas?
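One thing worth ruling out, based only on the tracebacks quoted in these issues: the loader looks for transforms.json plus per-frame rgb/N.png and depth/N.png files. A small sketch to verify the layout (paths taken from the errors above; the frame count is illustrative):

# Sketch: check the offline capture for the files the loader appears to expect.
from pathlib import Path

root = Path("experiments/iPhone_Captures/offline_demo")
print("transforms.json present:", (root / "transforms.json").exists())
for i in range(10):  # illustrative frame count
    for sub in ("rgb", "depth"):
        p = root / sub / f"{i}.png"
        if not p.exists():
            print("missing:", p)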

Questions about learning rate

Why do you set the learning rate to 0 in the mapping stage? Also, since the learning rates have already been defined in the parameter groups, I don't see the point of setting lr and eps again.
[Screenshot: 2024-01-10 151113]
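For context, a minimal sketch of the pattern the config on this page suggests: each parameter gets its own optimizer group, and a stage "freezes" a group by giving it lr 0 rather than excluding it from the optimizer. The eps value below is an assumption, not confirmed by this page.

# Sketch: per-group learning rates with Adam; lr=0 makes the update exactly
# zero for that group, so the map stays fixed while the pose is optimized.
import torch

params = {
    'means3D':         torch.nn.Parameter(torch.randn(100, 3)),
    'cam_unnorm_rots': torch.nn.Parameter(torch.randn(4)),
    'cam_trans':       torch.nn.Parameter(torch.zeros(3)),
}
# Tracking-stage lrs mirror the config printed elsewhere on this page.
tracking_lrs = {'means3D': 0.0, 'cam_unnorm_rots': 0.001, 'cam_trans': 0.004}

param_groups = [{'params': [v], 'lr': tracking_lrs[k], 'name': k}
                for k, v in params.items()]
optimizer = torch.optim.Adam(param_groups, lr=0.0, eps=1e-15)  # eps assumed

# Switching stages can then just rewrite group lrs instead of rebuilding:
for group in optimizer.param_groups:
    group['lr'] = 0.0 if group['name'].startswith('cam_') else 0.0001  # mapping-style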

Silhouette Rendering

Hi, great work! However, one question keeps bothering me: how is the silhouette obtained? I have read the paper and checked the code, but I still can't understand the principle behind it.

In get_depth_and_silhouette(pts_3D, w2c), I see that depth_silhouette stores [depth_z, 1, depth_z_sq] for each Gaussian.

import torch
import torch.nn.functional as F  # imports needed by the snippets below

def transformed_params2depthplussilhouette(params, w2c, transformed_pts):
    rendervar = {
        'means3D': transformed_pts,
        'colors_precomp': get_depth_and_silhouette(transformed_pts, w2c),
        'rotations': F.normalize(params['unnorm_rotations']),
        'opacities': torch.sigmoid(params['logit_opacities']),
        'scales': torch.exp(torch.tile(params['log_scales'], (1, 3))),
        'means2D': torch.zeros_like(params['means3D'], requires_grad=True, device="cuda") + 0
    }
    return rendervar


def get_depth_and_silhouette(pts_3D, w2c):
    """
    Function to compute depth and silhouette for each gaussian.
    These are evaluated at gaussian center.
    """
    # Depth of each gaussian center in camera frame
    pts4 = torch.cat((pts_3D, torch.ones_like(pts_3D[:, :1])), dim=-1)
    pts_in_cam = (w2c @ pts4.transpose(0, 1)).transpose(0, 1)
    depth_z = pts_in_cam[:, 2].unsqueeze(-1) # [num_gaussians, 1]
    depth_z_sq = torch.square(depth_z) # [num_gaussians, 1]

    # Depth and Silhouette
    depth_silhouette = torch.zeros((pts_3D.shape[0], 3)).cuda().float()
    depth_silhouette[:, 0] = depth_z.squeeze(-1)
    depth_silhouette[:, 1] = 1.0
    depth_silhouette[:, 2] = depth_z_sq.squeeze(-1)
    
    return depth_silhouette

# Initialize Render Variables
depth_sil_rendervar = transformed_params2depthplussilhouette(params, curr_data['w2c'], transformed_pts)

Then depth_sil is the first output of the Gaussian rasterization:

# Depth & Silhouette Rendering
depth_sil, _, _, = Renderer(raster_settings=curr_data['cam'])(**depth_sil_rendervar)

Finally, the silhouette is channel 1 of depth_sil. Since every Gaussian contributes a constant 1 in that channel, alpha-compositing accumulates the per-pixel opacity coverage, which is exactly the silhouette:

silhouette = depth_sil[1, :, :]
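A short sketch of how such a silhouette is typically used for masking, matching the use_sil_for_loss / 'sil_thres': 0.99 entries in the tracking config quoted elsewhere on this page (shapes are illustrative):

# Sketch: gate the tracking loss to pixels the current map already explains.
import torch

depth_sil = torch.rand(3, 480, 640)   # stand-in for the rasterizer output
silhouette = depth_sil[1, :, :]       # channel 1: accumulated constant 1s
presence_mask = silhouette > 0.99     # sil_thres from the tracking config

rendered_depth = depth_sil[0, :, :]
gt_depth = torch.rand(480, 640)       # stand-in ground truth
depth_l1 = torch.abs(rendered_depth - gt_depth)[presence_mask].mean()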

Visualization Issues (Open3D Version Issue)

Hi, I am very excited about your work and want to use your code in our scene. I downloaded the Replica dataset and wanted to test your code first. Following your instructions, I ran the command "python3 viz_scripts/final_recon.py configs/replica/splatam.py". The picture below shows my final visualization of room0. The reconstruction result does not seem correct: it is not the whole room, but only part of it. What could be wrong?

Online & Offline Demo Debugging

Hi, the program runs well when I try the online demo and dataset collection.
However, when I try offline mode, I run into problems.

The detailed steps I follow are:

  1. Open the NeRFCapture app on my iPhone (which has LiDAR)
  2. Switch to offline mode
  3. Tap the "Start" button
  4. Tap the "Save Frame" button about 10 times
  5. Tap the "End" button
  6. Run bash bash_scripts/nerfcapture.bash configs/iphone/nerfcapture.py

Then I get this error:

save_path experiments/iPhone_Captures/offline_demo already exists
System Paths:
/home/syzhao15/Desktop/CODE/SplaTAM
/home/syzhao15/Desktop/CODE/SplaTAM/scripts
/opt/ros/humble/lib/python3.10/site-packages
/opt/ros/humble/local/lib/python3.10/dist-packages
/home/syzhao15/miniforge3/envs/splatam/lib/python310.zip
/home/syzhao15/miniforge3/envs/splatam/lib/python3.10
/home/syzhao15/miniforge3/envs/splatam/lib/python3.10/lib-dynload
/home/syzhao15/miniforge3/envs/splatam/lib/python3.10/site-packages
Seed set to: 0 (type: <class 'int'>)
Loaded Config:
{'workdir': '././experiments/iPhone_Captures/offline_demo', 'run_name': 'SplaTAM_iPhone', 'overwrite': False, 'depth_scale': 10.0, 'num_frames': 10, 'seed': 0, 'primary_device': 'cuda:0', 'map_every': 1, 'keyframe_every': 2, 'mapping_window_size': 32, 'report_global_progress_every': 100, 'eval_every': 1, 'scene_radius_depth_ratio': 3, 'mean_sq_dist_method': 'projective', 'report_iter_progress': False, 'load_checkpoint': False, 'checkpoint_time_idx': 130, 'save_checkpoints': False, 'checkpoint_interval': 5, 'use_wandb': False, 'data': {'dataset_name': 'nerfcapture', 'basedir': './experiments/iPhone_Captures', 'sequence': 'offline_demo', 'desired_image_height': 720, 'desired_image_width': 960, 'densification_image_height': 360, 'densification_image_width': 480, 'start': 0, 'end': -1, 'stride': 1, 'num_frames': 10}, 'tracking': {'use_gt_poses': False, 'forward_prop': True, 'visualize_tracking_loss': False, 'num_iters': 60, 'use_sil_for_loss': True, 'sil_thres': 0.99, 'use_l1': True, 'use_depth_loss_thres': True, 'depth_loss_thres': 20000, 'ignore_outlier_depth_loss': False, 'use_uncertainty_for_loss_mask': False, 'use_uncertainty_for_loss': False, 'use_chamfer': False, 'loss_weights': {'im': 0.5, 'depth': 1.0}, 'lrs': {'means3D': 0.0, 'rgb_colors': 0.0, 'unnorm_rotations': 0.0, 'logit_opacities': 0.0, 'log_scales': 0.0, 'cam_unnorm_rots': 0.001, 'cam_trans': 0.004}}, 'mapping': {'num_iters': 60, 'add_new_gaussians': True, 'sil_thres': 0.5, 'use_l1': True, 'ignore_outlier_depth_loss': False, 'use_sil_for_loss': False, 'use_uncertainty_for_loss_mask': False, 'use_uncertainty_for_loss': False, 'use_chamfer': False, 'loss_weights': {'im': 0.5, 'depth': 1.0}, 'lrs': {'means3D': 0.0001, 'rgb_colors': 0.0025, 'unnorm_rotations': 0.001, 'logit_opacities': 0.05, 'log_scales': 0.001, 'cam_unnorm_rots': 0.0, 'cam_trans': 0.0}, 'prune_gaussians': True, 'pruning_dict': {'start_after': 0, 'remove_big_after': 0, 'stop_after': 20, 'prune_every': 20, 'removal_opacity_threshold': 0.005, 'final_removal_opacity_threshold': 0.005, 'reset_opacities': False, 'reset_opacities_every': 500}, 'use_gaussian_splatting_densification': False, 'densify_dict': {'start_after': 500, 'remove_big_after': 3000, 'stop_after': 5000, 'densify_every': 100, 'grad_thresh': 0.0002, 'num_to_split_into': 2, 'removal_opacity_threshold': 0.005, 'final_removal_opacity_threshold': 0.005, 'reset_opacities_every': 3000}}, 'viz': {'render_mode': 'color', 'offset_first_viz_cam': True, 'show_sil': False, 'visualize_cams': True, 'viz_w': 600, 'viz_h': 340, 'viz_near': 0.01, 'viz_far': 100.0, 'view_scale': 2, 'viz_fps': 5, 'enter_interactive_post_online': False}}
Loading Dataset ...
Traceback (most recent call last):
  File "/home/syzhao15/Desktop/CODE/SplaTAM/scripts/splatam.py", line 1007, in <module>
    rgbd_slam(experiment.config)
  File "/home/syzhao15/Desktop/CODE/SplaTAM/scripts/splatam.py", line 514, in rgbd_slam
    dataset = get_dataset(
  File "/home/syzhao15/Desktop/CODE/SplaTAM/scripts/splatam.py", line 73, in get_dataset
    return NeRFCaptureDataset(basedir, sequence, **kwargs)
  File "/home/syzhao15/Desktop/CODE/SplaTAM/datasets/gradslam_datasets/nerfcapture.py", line 39, in __init__
    self.cams_metadata = self.load_cams_metadata()
  File "/home/syzhao15/Desktop/CODE/SplaTAM/datasets/gradslam_datasets/nerfcapture.py", line 72, in load_cams_metadata
    cams_metadata = json.load(open(cams_metadata_path, "r"))
FileNotFoundError: [Errno 2] No such file or directory: './experiments/iPhone_Captures/offline_demo/transforms.json'
Seed set to: 0 (type: <class 'int'>)
Traceback (most recent call last):
  File "/home/syzhao15/Desktop/CODE/SplaTAM/viz_scripts/final_recon.py", line 297, in <module>
    visualize(scene_path, viz_cfg)
  File "/home/syzhao15/Desktop/CODE/SplaTAM/viz_scripts/final_recon.py", line 170, in visualize
    w2c, k = load_camera(cfg, scene_path)
  File "/home/syzhao15/Desktop/CODE/SplaTAM/viz_scripts/final_recon.py", line 26, in load_camera
    all_params = dict(np.load(scene_path, allow_pickle=True))
  File "/home/syzhao15/miniforge3/envs/splatam/lib/python3.10/site-packages/numpy/lib/npyio.py", line 427, in load
    fid = stack.enter_context(open(os_fspath(file), "rb"))
FileNotFoundError: [Errno 2] No such file or directory: '././experiments/iPhone_Captures/offline_demo/SplaTAM_iPhone/params.npz'

Can anyone tell me how I can fix this? Is there any mistake in the steps mentioned above?
Thanks for the help!

Installation error during `pip install -r requirements.txt`

🤗 Thanks for making the code publicly available.

I used the following instructions to set up the environment on my Windows 10 laptop (see the full installation log below).

conda create -n splatam python=3.10
conda activate splatam
conda install -c "nvidia/label/cuda-11.6.0" cuda-toolkit
conda install pytorch==1.12.1 torchvision==0.13.1 torchaudio==0.12.1 cudatoolkit=11.6 -c pytorch -c conda-forge
pip install -r requirements.txt

But at the final step of the installation, pip shows the following error.

ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
tea-console 0.0.6 requires rich~=9.11.0, but you have rich 13.7.0 which is incompatible.
typer 0.3.2 requires click<7.2.0,>=7.1.1, but you have click 8.1.7 which is incompatible.
Successfully installed Click-8.1.7 Flask-3.0.0 GitPython-3.1.40 Jinja2-3.1.2 MarkupSafe-2.1.3 Werkzeug-3.0.1 ansi2html-1.8.0 appdirs-1.4.4 asttokens-2.4.1 attrs-23.1.0 blinker-1.7.0 colorama-0.4.6 comm-0.2.0 configargparse-1.7 contourpy-1.2.0 cycler-0.12.1 cyclonedds-0.10.2 dash-2.14.2 dash-core-components-2.0.0 dash-html-components-2.0.0 dash-table-5.0.0 decorator-5.1.1 diff-gaussian-rasterization-0.0.0 docker-pycreds-0.4.0 exceptiongroup-1.2.0 executing-2.0.1 fastjsonschema-2.19.0 fonttools-4.46.0 gitdb-4.0.11 imageio-2.33.0 importlib-metadata-7.0.0 ipython-8.18.1 ipywidgets-8.1.1 itsdangerous-2.1.2 jedi-0.19.1 jsonschema-4.20.0 jsonschema-specifications-2023.11.2 jupyter_core-5.5.0 jupyterlab-widgets-3.0.9 kiwisolver-1.4.5 kornia-0.7.0 lightning-utilities-0.10.0 lpips-0.1.4 markdown-it-py-3.0.0 matplotlib-3.8.2 matplotlib-inline-0.1.6 mdurl-0.1.2 natsort-8.4.0 nbformat-5.5.0 nest-asyncio-1.5.8 open3d-0.16.0 opencv-python-4.8.1.78 packaging-23.2 parso-0.8.3 platformdirs-4.1.0 plotly-5.18.0 prompt-toolkit-3.0.41 protobuf-4.25.1 pure-eval-0.2.2 pygments-2.17.2 pyparsing-3.1.1 python-dateutil-2.8.2 pytorch-msssim-1.0.0 pywin32-306 pyyaml-6.0.1 referencing-0.32.0 retrying-1.3.4 rich-13.7.0 rich-click-1.7.2 rpds-py-0.13.2 scipy-1.11.4 sentry-sdk-1.38.0 setproctitle-1.3.3 six-1.16.0 smmap-5.0.1 stack-data-0.6.3 tenacity-8.2.3 torchmetrics-1.2.1 tqdm-4.65.0 traitlets-5.14.0 typing-extensions-4.8.0 wandb-0.16.1 wcwidth-0.2.12 widgetsnbextension-4.0.9 zipp-3.17.0

I'm not sure if the installation was successful, because on running:
(splatam) C:\Users\Nirmal\Documents\SplaTAM>bash bash_scripts/online_demo.bash configs/iphone/online_demo.py

I get the following error:

<3>WSL (8) ERROR: CreateProcessParseCommon:754: getpwuid(0) failed 2
Processing fstab with mount -a failed.

<3>WSL (8) ERROR: CreateProcessEntryCommon:331: getpwuid(0) failed 2
<3>WSL (8) ERROR: CreateProcessEntryCommon:502: execvpe /bin/bash failed 2
<3>WSL (8) ERROR: CreateProcessEntryCommon:505: Create process not expected to return

Full installation log:
(base) C:\WINDOWS\system32>conda env list
# conda environments:
#
base                  *  C:\Anaconda3
voldor                   C:\Users\Nirmal\.conda\envs\voldor
work                     C:\Users\Nirmal\.conda\envs\work
(base) C:\WINDOWS\system32>conda create -n splatam python=3.10
WARNING: A directory already exists at the target location 'C:\Anaconda3\envs\splatam'
but it is not a conda environment.
Continue creating environment (y/[n])? y

Channels:
 - defaults
Platform: win-64
Collecting package metadata (repodata.json): done
Solving environment: done

## Package Plan ##

  environment location: C:\Anaconda3\envs\splatam

  added / updated specs:
    - python=3.10

The following NEW packages will be INSTALLED:

  bzip2              pkgs/main/win-64::bzip2-1.0.8-he774522_0
  ca-certificates    pkgs/main/win-64::ca-certificates-2023.08.22-haa95532_0
  libffi             pkgs/main/win-64::libffi-3.4.4-hd77b12b_0
  openssl            pkgs/main/win-64::openssl-3.0.12-h2bbff1b_0
  pip                pkgs/main/win-64::pip-23.3.1-py310haa95532_0
  python             pkgs/main/win-64::python-3.10.13-he1021f5_0
  setuptools         pkgs/main/win-64::setuptools-68.0.0-py310haa95532_0
  sqlite             pkgs/main/win-64::sqlite-3.41.2-h2bbff1b_0
  tk                 pkgs/main/win-64::tk-8.6.12-h2bbff1b_0
  tzdata             pkgs/main/noarch::tzdata-2023c-h04d1e81_0
  vc                 pkgs/main/win-64::vc-14.2-h21ff451_1
  vs2015_runtime     pkgs/main/win-64::vs2015_runtime-14.27.29016-h5e58377_2
  wheel              pkgs/main/win-64::wheel-0.41.2-py310haa95532_0
  xz                 pkgs/main/win-64::xz-5.4.5-h8cc25b3_0
  zlib               pkgs/main/win-64::zlib-1.2.13-h8cc25b3_0

Proceed ([y]/n)?

Downloading and Extracting Packages

Preparing transaction: done
Verifying transaction: done
Executing transaction: done
#
# To activate this environment, use
#
#     $ conda activate splatam
#
# To deactivate an active environment, use
#
#     $ conda deactivate
(base) C:\WINDOWS\system32>conda activate splatam
(splatam) C:\WINDOWS\system32>conda install -c "nvidia/label/cuda-11.6.0" cuda-toolkit
Channels:
 - nvidia/label/cuda-11.6.0
 - defaults
Platform: win-64
Collecting package metadata (repodata.json): done
Solving environment: done

## Package Plan ##

  environment location: C:\Anaconda3\envs\splatam

  added / updated specs:
    - cuda-toolkit

The following NEW packages will be INSTALLED:

  cuda-cccl          nvidia/label/cuda-11.6.0/win-64::cuda-cccl-11.6.55-0
  cuda-command-line~ nvidia/label/cuda-11.6.0/win-64::cuda-command-line-tools-11.6.0-0
  cuda-compiler      nvidia/label/cuda-11.6.0/win-64::cuda-compiler-11.6.0-0
  cuda-cudart        nvidia/label/cuda-11.6.0/win-64::cuda-cudart-11.6.55-0
  cuda-cudart-dev    nvidia/label/cuda-11.6.0/win-64::cuda-cudart-dev-11.6.55-0
  cuda-cuobjdump     nvidia/label/cuda-11.6.0/win-64::cuda-cuobjdump-11.6.55-0
  cuda-cupti         nvidia/label/cuda-11.6.0/win-64::cuda-cupti-11.6.55-0
  cuda-cuxxfilt      nvidia/label/cuda-11.6.0/win-64::cuda-cuxxfilt-11.6.55-0
  cuda-libraries     nvidia/label/cuda-11.6.0/win-64::cuda-libraries-11.6.0-0
  cuda-libraries-dev nvidia/label/cuda-11.6.0/win-64::cuda-libraries-dev-11.6.0-0
  cuda-memcheck      nvidia/label/cuda-11.6.0/win-64::cuda-memcheck-11.6.55-0
  cuda-nsight-compu~ nvidia/label/cuda-11.6.0/win-64::cuda-nsight-compute-11.6.0-0
  cuda-nvcc          nvidia/label/cuda-11.6.0/win-64::cuda-nvcc-11.6.55-0
  cuda-nvdisasm      nvidia/label/cuda-11.6.0/win-64::cuda-nvdisasm-11.6.55-0
  cuda-nvml-dev      nvidia/label/cuda-11.6.0/win-64::cuda-nvml-dev-11.6.55-0
  cuda-nvprof        nvidia/label/cuda-11.6.0/win-64::cuda-nvprof-11.6.55-0
  cuda-nvprune       nvidia/label/cuda-11.6.0/win-64::cuda-nvprune-11.6.55-0
  cuda-nvrtc         nvidia/label/cuda-11.6.0/win-64::cuda-nvrtc-11.6.55-0
  cuda-nvrtc-dev     nvidia/label/cuda-11.6.0/win-64::cuda-nvrtc-dev-11.6.55-0
  cuda-nvtx          nvidia/label/cuda-11.6.0/win-64::cuda-nvtx-11.6.55-0
  cuda-nvvp          nvidia/label/cuda-11.6.0/win-64::cuda-nvvp-11.6.58-0
  cuda-sanitizer-api nvidia/label/cuda-11.6.0/win-64::cuda-sanitizer-api-11.6.55-0
  cuda-toolkit       nvidia/label/cuda-11.6.0/win-64::cuda-toolkit-11.6.0-0
  cuda-tools         nvidia/label/cuda-11.6.0/win-64::cuda-tools-11.6.0-0
  cuda-visual-tools  nvidia/label/cuda-11.6.0/win-64::cuda-visual-tools-11.6.0-0
  libcublas          nvidia/label/cuda-11.6.0/win-64::libcublas-11.8.1.74-0
  libcublas-dev      nvidia/label/cuda-11.6.0/win-64::libcublas-dev-11.8.1.74-0
  libcufft           nvidia/label/cuda-11.6.0/win-64::libcufft-10.7.0.55-0
  libcufft-dev       nvidia/label/cuda-11.6.0/win-64::libcufft-dev-10.7.0.55-0
  libcurand          nvidia/label/cuda-11.6.0/win-64::libcurand-10.2.9.55-0
  libcurand-dev      nvidia/label/cuda-11.6.0/win-64::libcurand-dev-10.2.9.55-0
  libcusolver        nvidia/label/cuda-11.6.0/win-64::libcusolver-11.3.2.55-0
  libcusolver-dev    nvidia/label/cuda-11.6.0/win-64::libcusolver-dev-11.3.2.55-0
  libcusparse        nvidia/label/cuda-11.6.0/win-64::libcusparse-11.7.1.55-0
  libcusparse-dev    nvidia/label/cuda-11.6.0/win-64::libcusparse-dev-11.7.1.55-0
  libnpp             nvidia/label/cuda-11.6.0/win-64::libnpp-11.6.0.55-0
  libnpp-dev         nvidia/label/cuda-11.6.0/win-64::libnpp-dev-11.6.0.55-0
  libnvjpeg          nvidia/label/cuda-11.6.0/win-64::libnvjpeg-11.6.0.55-0
  libnvjpeg-dev      nvidia/label/cuda-11.6.0/win-64::libnvjpeg-dev-11.6.0.55-0
  nsight-compute     nvidia/label/cuda-11.6.0/win-64::nsight-compute-2022.1.0.12-0

Proceed ([y]/n)? y

Downloading and Extracting Packages

Preparing transaction: done
Verifying transaction: done
Executing transaction: done
(splatam) C:\WINDOWS\system32>conda install pytorch==1.12.1 torchvision==0.13.1 torchaudio==0.12.1 cudatoolkit=11.6 -c pytorch -c conda-forge
Channels:
 - pytorch
 - conda-forge
 - defaults
 - nvidia/label/cuda-11.6.0
Platform: win-64
Collecting package metadata (repodata.json): done
Solving environment: done

## Package Plan ##

  environment location: C:\Anaconda3\envs\splatam

  added / updated specs:
    - cudatoolkit=11.6
    - pytorch==1.12.1
    - torchaudio==0.12.1
    - torchvision==0.13.1

The following packages will be downloaded:

    package                    |            build
    ---------------------------|-----------------
    brotli-python-1.0.9        |  py310h8a704f9_7         335 KB  conda-forge
    cudatoolkit-11.6.0         |      hc0ea762_10       965.1 MB  conda-forge
    freetype-2.10.4            |       h546665d_1         489 KB  conda-forge
    giflib-5.2.1               |       h8d14728_2          85 KB  conda-forge
    jbig-2.1                   |    h8d14728_2003          45 KB  conda-forge
    lerc-2.2.1                 |       h0e60522_0         133 KB  conda-forge
    libdeflate-1.7             |       h8ffe710_5          61 KB  conda-forge
    libtiff-4.3.0              |       h0c97f57_1         1.1 MB  conda-forge
    libuv-1.44.2               |       h8ffe710_0         362 KB  conda-forge
    libwebp-1.3.2              |       hbc33d0d_0          73 KB
    libwebp-base-1.3.2         |       h2bbff1b_0         306 KB
    numpy-1.23.2               |  py310h8a5b91a_0         6.4 MB  conda-forge
    openjpeg-2.4.0             |       hb211442_1         238 KB  conda-forge
    pillow-10.0.1              |  py310h045eedc_0         788 KB
    python_abi-3.10            |          2_cp310           4 KB  conda-forge
    torchvision-0.13.1         |      py310_cu116         7.4 MB  pytorch
    zstd-1.5.0                 |       h6255e5f_0        1004 KB  conda-forge
    ------------------------------------------------------------
                                           Total:       983.8 MB

The following NEW packages will be INSTALLED:

  blas               conda-forge/win-64::blas-2.120-mkl
  blas-devel         conda-forge/win-64::blas-devel-3.9.0-20_win64_mkl
  brotli-python      conda-forge/win-64::brotli-python-1.0.9-py310h8a704f9_7
  certifi            conda-forge/noarch::certifi-2023.11.17-pyhd8ed1ab_0
  charset-normalizer conda-forge/noarch::charset-normalizer-3.3.2-pyhd8ed1ab_0
  cudatoolkit        conda-forge/win-64::cudatoolkit-11.6.0-hc0ea762_10
  freetype           conda-forge/win-64::freetype-2.10.4-h546665d_1
  giflib             conda-forge/win-64::giflib-5.2.1-h8d14728_2
  idna               conda-forge/noarch::idna-3.6-pyhd8ed1ab_0
  intel-openmp       conda-forge/win-64::intel-openmp-2023.2.0-h57928b3_50497
  jbig               conda-forge/win-64::jbig-2.1-h8d14728_2003
  jpeg               conda-forge/win-64::jpeg-9e-h8ffe710_2
  lerc               conda-forge/win-64::lerc-2.2.1-h0e60522_0
  libblas            conda-forge/win-64::libblas-3.9.0-20_win64_mkl
  libcblas           conda-forge/win-64::libcblas-3.9.0-20_win64_mkl
  libdeflate         conda-forge/win-64::libdeflate-1.7-h8ffe710_5
  liblapack          conda-forge/win-64::liblapack-3.9.0-20_win64_mkl
  liblapacke         conda-forge/win-64::liblapacke-3.9.0-20_win64_mkl
  libpng             pkgs/main/win-64::libpng-1.6.39-h8cc25b3_0
  libtiff            conda-forge/win-64::libtiff-4.3.0-h0c97f57_1
  libuv              conda-forge/win-64::libuv-1.44.2-h8ffe710_0
  libwebp            pkgs/main/win-64::libwebp-1.3.2-hbc33d0d_0
  libwebp-base       pkgs/main/win-64::libwebp-base-1.3.2-h2bbff1b_0
  lz4-c              conda-forge/win-64::lz4-c-1.9.3-h8ffe710_1
  m2w64-gcc-libgfor~ conda-forge/win-64::m2w64-gcc-libgfortran-5.3.0-6
  m2w64-gcc-libs     conda-forge/win-64::m2w64-gcc-libs-5.3.0-7
  m2w64-gcc-libs-co~ conda-forge/win-64::m2w64-gcc-libs-core-5.3.0-7
  m2w64-gmp          conda-forge/win-64::m2w64-gmp-6.1.0-2
  m2w64-libwinpthre~ conda-forge/win-64::m2w64-libwinpthread-git-5.0.0.4634.697f757-2
  mkl                conda-forge/win-64::mkl-2023.2.0-h6a75c08_50497
  mkl-devel          conda-forge/win-64::mkl-devel-2023.2.0-h57928b3_50497
  mkl-include        conda-forge/win-64::mkl-include-2023.2.0-h6a75c08_50497
  msys2-conda-epoch  conda-forge/win-64::msys2-conda-epoch-20160418-1
  numpy              conda-forge/win-64::numpy-1.23.2-py310h8a5b91a_0
  openjpeg           conda-forge/win-64::openjpeg-2.4.0-hb211442_1
  pillow             pkgs/main/win-64::pillow-10.0.1-py310h045eedc_0
  pysocks            conda-forge/noarch::pysocks-1.7.1-pyh0701188_6
  python_abi         conda-forge/win-64::python_abi-3.10-2_cp310
  pytorch            pytorch/win-64::pytorch-1.12.1-py3.10_cuda11.6_cudnn8_0
  pytorch-mutex      pytorch/noarch::pytorch-mutex-1.0-cuda
  requests           conda-forge/noarch::requests-2.31.0-pyhd8ed1ab_0
  tbb                conda-forge/win-64::tbb-2021.5.0-h2d74725_1
  torchaudio         pytorch/win-64::torchaudio-0.12.1-py310_cu116
  torchvision        pytorch/win-64::torchvision-0.13.1-py310_cu116
  typing_extensions  conda-forge/noarch::typing_extensions-4.8.0-pyha770c72_0
  urllib3            conda-forge/noarch::urllib3-2.1.0-pyhd8ed1ab_0
  win_inet_pton      conda-forge/noarch::win_inet_pton-1.1.0-pyhd8ed1ab_6
  zstd               conda-forge/win-64::zstd-1.5.0-h6255e5f_0

The following packages will be UPDATED:

  ca-certificates    pkgs/main::ca-certificates-2023.08.22~ --> conda-forge::ca-certificates-2023.11.17-h56e8100_0

Proceed ([y]/n)? y

Downloading and Extracting Packages

Preparing transaction: done
Verifying transaction: done
Executing transaction: / "By downloading and using the CUDA Toolkit conda packages, you accept the terms and conditions of the CUDA End User License Agreement (EULA): https://docs.nvidia.com/cuda/eula/index.html"

done
(splatam) C:\WINDOWS\system32>pip install -r requirements.txt
ERROR: Could not open requirements file: [Errno 2] No such file or directory: 'requirements.txt'

(splatam) C:\WINDOWS\system32>
(splatam) C:\WINDOWS\system32>cd "C:\Users\Nirmal\Documents\SplaTAM"

(splatam) C:\Users\Nirmal\Documents\SplaTAM>
(splatam) C:\Users\Nirmal\Documents\SplaTAM>pip install -r requirements.txt
Collecting git+https://github.com/JonathonLuiten/diff-gaussian-rasterization-w-depth.git@cb65e4b86bc3bd8ed42174b72a62e8d3a3a71110 (from -r requirements.txt (line 15))
  Cloning https://github.com/JonathonLuiten/diff-gaussian-rasterization-w-depth.git (to revision cb65e4b86bc3bd8ed42174b72a62e8d3a3a71110) to c:\users\nirmal\appdata\local\temp\pip-req-build-_4q0dnct
  Running command git clone --filter=blob:none --quiet https://github.com/JonathonLuiten/diff-gaussian-rasterization-w-depth.git 'C:\Users\Nirmal\AppData\Local\Temp\pip-req-build-_4q0dnct'
  Running command git rev-parse -q --verify 'sha^cb65e4b86bc3bd8ed42174b72a62e8d3a3a71110'
  Running command git fetch -q https://github.com/JonathonLuiten/diff-gaussian-rasterization-w-depth.git cb65e4b86bc3bd8ed42174b72a62e8d3a3a71110
  Resolved https://github.com/JonathonLuiten/diff-gaussian-rasterization-w-depth.git to commit cb65e4b86bc3bd8ed42174b72a62e8d3a3a71110
  Running command git submodule update --init --recursive -q
  Preparing metadata (setup.py) ... done
Collecting tqdm==4.65.0 (from -r requirements.txt (line 1))
  Downloading tqdm-4.65.0-py3-none-any.whl (77 kB)
     ---------------------------------------- 77.1/77.1 kB 474.1 kB/s eta 0:00:00
Requirement already satisfied: Pillow in c:\anaconda3\envs\splatam\lib\site-packages (from -r requirements.txt (line 2)) (10.0.1)
Collecting opencv-python (from -r requirements.txt (line 3))
  Downloading opencv_python-4.8.1.78-cp37-abi3-win_amd64.whl.metadata (20 kB)
Collecting imageio (from -r requirements.txt (line 4))
  Downloading imageio-2.33.0-py3-none-any.whl.metadata (4.9 kB)
Collecting matplotlib (from -r requirements.txt (line 5))
  Downloading matplotlib-3.8.2-cp310-cp310-win_amd64.whl.metadata (5.9 kB)
Collecting kornia (from -r requirements.txt (line 6))
  Downloading kornia-0.7.0-py2.py3-none-any.whl.metadata (12 kB)
Collecting natsort (from -r requirements.txt (line 7))
  Downloading natsort-8.4.0-py3-none-any.whl.metadata (21 kB)
Collecting pyyaml (from -r requirements.txt (line 8))
  Downloading PyYAML-6.0.1-cp310-cp310-win_amd64.whl.metadata (2.1 kB)
Collecting wandb (from -r requirements.txt (line 9))
  Downloading wandb-0.16.1-py3-none-any.whl.metadata (9.8 kB)
Collecting lpips (from -r requirements.txt (line 10))
  Downloading lpips-0.1.4-py3-none-any.whl (53 kB)
     ---------------------------------------- 53.8/53.8 kB 2.7 MB/s eta 0:00:00
Collecting open3d==0.16.0 (from -r requirements.txt (line 11))
  Downloading open3d-0.16.0-cp310-cp310-win_amd64.whl (62.3 MB)
     ---------------------------------------- 62.3/62.3 MB 1.3 MB/s eta 0:00:00
Collecting torchmetrics (from -r requirements.txt (line 12))
  Downloading torchmetrics-1.2.1-py3-none-any.whl.metadata (20 kB)
Collecting cyclonedds (from -r requirements.txt (line 13))
  Downloading cyclonedds-0.10.2-cp310-cp310-win_amd64.whl (2.3 MB)
     ---------------------------------------- 2.3/2.3 MB 1.6 MB/s eta 0:00:00
Collecting pytorch-msssim (from -r requirements.txt (line 14))
  Downloading pytorch_msssim-1.0.0-py3-none-any.whl.metadata (8.0 kB)
Collecting colorama (from tqdm==4.65.0->-r requirements.txt (line 1))
  Using cached colorama-0.4.6-py2.py3-none-any.whl (25 kB)
Requirement already satisfied: numpy>=1.18.0 in c:\anaconda3\envs\splatam\lib\site-packages (from open3d==0.16.0->-r requirements.txt (line 11)) (1.23.2)
Collecting dash>=2.6.0 (from open3d==0.16.0->-r requirements.txt (line 11))
  Downloading dash-2.14.2-py3-none-any.whl.metadata (11 kB)
Collecting nbformat==5.5.0 (from open3d==0.16.0->-r requirements.txt (line 11))
  Downloading nbformat-5.5.0-py3-none-any.whl (75 kB)
     ---------------------------------------- 75.3/75.3 kB 1.4 MB/s eta 0:00:00
Collecting configargparse (from open3d==0.16.0->-r requirements.txt (line 11))
  Downloading ConfigArgParse-1.7-py3-none-any.whl.metadata (23 kB)
Collecting ipywidgets>=7.6.0 (from open3d==0.16.0->-r requirements.txt (line 11))
  Downloading ipywidgets-8.1.1-py3-none-any.whl.metadata (2.4 kB)
Collecting fastjsonschema (from nbformat==5.5.0->open3d==0.16.0->-r requirements.txt (line 11))
  Downloading fastjsonschema-2.19.0-py3-none-any.whl.metadata (2.0 kB)
Collecting jsonschema>=2.6 (from nbformat==5.5.0->open3d==0.16.0->-r requirements.txt (line 11))
  Downloading jsonschema-4.20.0-py3-none-any.whl.metadata (8.1 kB)
Collecting jupyter_core (from nbformat==5.5.0->open3d==0.16.0->-r requirements.txt (line 11))
  Downloading jupyter_core-5.5.0-py3-none-any.whl.metadata (3.4 kB)
Collecting traitlets>=5.1 (from nbformat==5.5.0->open3d==0.16.0->-r requirements.txt (line 11))
  Downloading traitlets-5.14.0-py3-none-any.whl.metadata (10 kB)
Collecting contourpy>=1.0.1 (from matplotlib->-r requirements.txt (line 5))
  Downloading contourpy-1.2.0-cp310-cp310-win_amd64.whl.metadata (5.8 kB)
Collecting cycler>=0.10 (from matplotlib->-r requirements.txt (line 5))
  Downloading cycler-0.12.1-py3-none-any.whl.metadata (3.8 kB)
Collecting fonttools>=4.22.0 (from matplotlib->-r requirements.txt (line 5))
  Downloading fonttools-4.46.0-cp310-cp310-win_amd64.whl.metadata (159 kB)
     ---------------------------------------- 159.4/159.4 kB 867.2 kB/s eta 0:00:00
Collecting kiwisolver>=1.3.1 (from matplotlib->-r requirements.txt (line 5))
  Downloading kiwisolver-1.4.5-cp310-cp310-win_amd64.whl.metadata (6.5 kB)
Collecting packaging>=20.0 (from matplotlib->-r requirements.txt (line 5))
  Downloading packaging-23.2-py3-none-any.whl.metadata (3.2 kB)
Collecting pyparsing>=2.3.1 (from matplotlib->-r requirements.txt (line 5))
  Downloading pyparsing-3.1.1-py3-none-any.whl.metadata (5.1 kB)
Collecting python-dateutil>=2.7 (from matplotlib->-r requirements.txt (line 5))
  Downloading python_dateutil-2.8.2-py2.py3-none-any.whl (247 kB)
     ---------------------------------------- 247.7/247.7 kB 1.7 MB/s eta 0:00:00
Requirement already satisfied: torch>=1.9.1 in c:\anaconda3\envs\splatam\lib\site-packages (from kornia->-r requirements.txt (line 6)) (1.12.1)
Requirement already satisfied: Click!=8.0.0,>=7.1 in c:\users\nirmal\appdata\roaming\python\python310\site-packages (from wandb->-r requirements.txt (line 9)) (7.1.2)
Collecting GitPython!=3.1.29,>=1.0.0 (from wandb->-r requirements.txt (line 9))
  Downloading GitPython-3.1.40-py3-none-any.whl.metadata (12 kB)
Requirement already satisfied: requests<3,>=2.0.0 in c:\anaconda3\envs\splatam\lib\site-packages (from wandb->-r requirements.txt (line 9)) (2.31.0)
Requirement already satisfied: psutil>=5.0.0 in c:\users\nirmal\appdata\roaming\python\python310\site-packages (from wandb->-r requirements.txt (line 9)) (5.8.0)
Collecting sentry-sdk>=1.0.0 (from wandb->-r requirements.txt (line 9))
  Downloading sentry_sdk-1.38.0-py2.py3-none-any.whl.metadata (9.7 kB)
Collecting docker-pycreds>=0.4.0 (from wandb->-r requirements.txt (line 9))
  Downloading docker_pycreds-0.4.0-py2.py3-none-any.whl (9.0 kB)
Collecting setproctitle (from wandb->-r requirements.txt (line 9))
  Downloading setproctitle-1.3.3-cp310-cp310-win_amd64.whl.metadata (10 kB)
Requirement already satisfied: setuptools in c:\anaconda3\envs\splatam\lib\site-packages (from wandb->-r requirements.txt (line 9)) (68.0.0)
Collecting appdirs>=1.4.3 (from wandb->-r requirements.txt (line 9))
  Downloading appdirs-1.4.4-py2.py3-none-any.whl (9.6 kB)
Collecting protobuf!=4.21.0,<5,>=3.19.0 (from wandb->-r requirements.txt (line 9))
  Downloading protobuf-4.25.1-cp310-abi3-win_amd64.whl.metadata (541 bytes)
Requirement already satisfied: torchvision>=0.2.1 in c:\anaconda3\envs\splatam\lib\site-packages (from lpips->-r requirements.txt (line 10)) (0.13.1)
Collecting scipy>=1.0.1 (from lpips->-r requirements.txt (line 10))
  Downloading scipy-1.11.4-cp310-cp310-win_amd64.whl.metadata (60 kB)
     ---------------------------------------- 60.4/60.4 kB 401.3 kB/s eta 0:00:00
Collecting lightning-utilities>=0.8.0 (from torchmetrics->-r requirements.txt (line 12))
  Downloading lightning_utilities-0.10.0-py3-none-any.whl.metadata (4.8 kB)
Collecting rich-click (from cyclonedds->-r requirements.txt (line 13))
  Downloading rich_click-1.7.2-py3-none-any.whl.metadata (22 kB)
Collecting Flask<3.1,>=1.0.4 (from dash>=2.6.0->open3d==0.16.0->-r requirements.txt (line 11))
  Downloading flask-3.0.0-py3-none-any.whl.metadata (3.6 kB)
Collecting Werkzeug<3.1 (from dash>=2.6.0->open3d==0.16.0->-r requirements.txt (line 11))
  Downloading werkzeug-3.0.1-py3-none-any.whl.metadata (4.1 kB)
Collecting plotly>=5.0.0 (from dash>=2.6.0->open3d==0.16.0->-r requirements.txt (line 11))
  Downloading plotly-5.18.0-py3-none-any.whl.metadata (7.0 kB)
Collecting dash-html-components==2.0.0 (from dash>=2.6.0->open3d==0.16.0->-r requirements.txt (line 11))
  Downloading dash_html_components-2.0.0-py3-none-any.whl (4.1 kB)
Collecting dash-core-components==2.0.0 (from dash>=2.6.0->open3d==0.16.0->-r requirements.txt (line 11))
  Downloading dash_core_components-2.0.0-py3-none-any.whl (3.8 kB)
Collecting dash-table==5.0.0 (from dash>=2.6.0->open3d==0.16.0->-r requirements.txt (line 11))
  Downloading dash_table-5.0.0-py3-none-any.whl (3.9 kB)
Collecting typing-extensions>=4.1.1 (from dash>=2.6.0->open3d==0.16.0->-r requirements.txt (line 11))
  Downloading typing_extensions-4.8.0-py3-none-any.whl.metadata (3.0 kB)
Collecting retrying (from dash>=2.6.0->open3d==0.16.0->-r requirements.txt (line 11))
  Downloading retrying-1.3.4-py3-none-any.whl (11 kB)
Collecting ansi2html (from dash>=2.6.0->open3d==0.16.0->-r requirements.txt (line 11))
  Downloading ansi2html-1.8.0-py3-none-any.whl (16 kB)
Collecting nest-asyncio (from dash>=2.6.0->open3d==0.16.0->-r requirements.txt (line 11))
  Downloading nest_asyncio-1.5.8-py3-none-any.whl.metadata (2.8 kB)
Collecting importlib-metadata (from dash>=2.6.0->open3d==0.16.0->-r requirements.txt (line 11))
  Downloading importlib_metadata-7.0.0-py3-none-any.whl.metadata (4.9 kB)
Collecting six>=1.4.0 (from docker-pycreds>=0.4.0->wandb->-r requirements.txt (line 9))
  Downloading six-1.16.0-py2.py3-none-any.whl (11 kB)
Collecting gitdb<5,>=4.0.1 (from GitPython!=3.1.29,>=1.0.0->wandb->-r requirements.txt (line 9))
  Downloading gitdb-4.0.11-py3-none-any.whl.metadata (1.2 kB)
Collecting comm>=0.1.3 (from ipywidgets>=7.6.0->open3d==0.16.0->-r requirements.txt (line 11))
  Downloading comm-0.2.0-py3-none-any.whl.metadata (3.7 kB)
Collecting ipython>=6.1.0 (from ipywidgets>=7.6.0->open3d==0.16.0->-r requirements.txt (line 11))
  Downloading ipython-8.18.1-py3-none-any.whl.metadata (6.0 kB)
Collecting widgetsnbextension~=4.0.9 (from ipywidgets>=7.6.0->open3d==0.16.0->-r requirements.txt (line 11))
  Downloading widgetsnbextension-4.0.9-py3-none-any.whl.metadata (1.6 kB)
Collecting jupyterlab-widgets~=3.0.9 (from ipywidgets>=7.6.0->open3d==0.16.0->-r requirements.txt (line 11))
  Downloading jupyterlab_widgets-3.0.9-py3-none-any.whl.metadata (4.1 kB)
Requirement already satisfied: charset-normalizer<4,>=2 in c:\anaconda3\envs\splatam\lib\site-packages (from requests<3,>=2.0.0->wandb->-r requirements.txt (line 9)) (3.3.2)
Requirement already satisfied: idna<4,>=2.5 in c:\anaconda3\envs\splatam\lib\site-packages (from requests<3,>=2.0.0->wandb->-r requirements.txt (line 9)) (3.6)
Requirement already satisfied: urllib3<3,>=1.21.1 in c:\anaconda3\envs\splatam\lib\site-packages (from requests<3,>=2.0.0->wandb->-r requirements.txt (line 9)) (2.1.0)
Requirement already satisfied: certifi>=2017.4.17 in c:\anaconda3\envs\splatam\lib\site-packages (from requests<3,>=2.0.0->wandb->-r requirements.txt (line 9)) (2023.11.17)
Collecting rich>=10.7.0 (from rich-click->cyclonedds->-r requirements.txt (line 13))
  Downloading rich-13.7.0-py3-none-any.whl.metadata (18 kB)
Collecting Jinja2>=3.1.2 (from Flask<3.1,>=1.0.4->dash>=2.6.0->open3d==0.16.0->-r requirements.txt (line 11))
  Using cached Jinja2-3.1.2-py3-none-any.whl (133 kB)
Collecting itsdangerous>=2.1.2 (from Flask<3.1,>=1.0.4->dash>=2.6.0->open3d==0.16.0->-r requirements.txt (line 11))
  Using cached itsdangerous-2.1.2-py3-none-any.whl (15 kB)
Collecting Click!=8.0.0,>=7.1 (from wandb->-r requirements.txt (line 9))
  Downloading click-8.1.7-py3-none-any.whl.metadata (3.0 kB)
Collecting blinker>=1.6.2 (from Flask<3.1,>=1.0.4->dash>=2.6.0->open3d==0.16.0->-r requirements.txt (line 11))
  Downloading blinker-1.7.0-py3-none-any.whl.metadata (1.9 kB)
Collecting smmap<6,>=3.0.1 (from gitdb<5,>=4.0.1->GitPython!=3.1.29,>=1.0.0->wandb->-r requirements.txt (line 9))
  Downloading smmap-5.0.1-py3-none-any.whl.metadata (4.3 kB)
Collecting decorator (from ipython>=6.1.0->ipywidgets>=7.6.0->open3d==0.16.0->-r requirements.txt (line 11))
  Downloading decorator-5.1.1-py3-none-any.whl (9.1 kB)
Collecting jedi>=0.16 (from ipython>=6.1.0->ipywidgets>=7.6.0->open3d==0.16.0->-r requirements.txt (line 11))
  Downloading jedi-0.19.1-py2.py3-none-any.whl.metadata (22 kB)
Collecting matplotlib-inline (from ipython>=6.1.0->ipywidgets>=7.6.0->open3d==0.16.0->-r requirements.txt (line 11))
  Downloading matplotlib_inline-0.1.6-py3-none-any.whl (9.4 kB)
Collecting prompt-toolkit<3.1.0,>=3.0.41 (from ipython>=6.1.0->ipywidgets>=7.6.0->open3d==0.16.0->-r requirements.txt (line 11))
  Downloading prompt_toolkit-3.0.41-py3-none-any.whl.metadata (6.5 kB)
Collecting pygments>=2.4.0 (from ipython>=6.1.0->ipywidgets>=7.6.0->open3d==0.16.0->-r requirements.txt (line 11))
  Downloading pygments-2.17.2-py3-none-any.whl.metadata (2.6 kB)
Collecting stack-data (from ipython>=6.1.0->ipywidgets>=7.6.0->open3d==0.16.0->-r requirements.txt (line 11))
  Downloading stack_data-0.6.3-py3-none-any.whl.metadata (18 kB)
Collecting exceptiongroup (from ipython>=6.1.0->ipywidgets>=7.6.0->open3d==0.16.0->-r requirements.txt (line 11))
  Downloading exceptiongroup-1.2.0-py3-none-any.whl.metadata (6.6 kB)
Collecting attrs>=22.2.0 (from jsonschema>=2.6->nbformat==5.5.0->open3d==0.16.0->-r requirements.txt (line 11))
  Downloading attrs-23.1.0-py3-none-any.whl (61 kB)
     ---------------------------------------- 61.2/61.2 kB 363.7 kB/s eta 0:00:00
Collecting jsonschema-specifications>=2023.03.6 (from jsonschema>=2.6->nbformat==5.5.0->open3d==0.16.0->-r requirements.txt (line 11))
  Downloading jsonschema_specifications-2023.11.2-py3-none-any.whl.metadata (3.0 kB)
Collecting referencing>=0.28.4 (from jsonschema>=2.6->nbformat==5.5.0->open3d==0.16.0->-r requirements.txt (line 11))
  Downloading referencing-0.32.0-py3-none-any.whl.metadata (2.7 kB)
Collecting rpds-py>=0.7.1 (from jsonschema>=2.6->nbformat==5.5.0->open3d==0.16.0->-r requirements.txt (line 11))
  Downloading rpds_py-0.13.2-cp310-none-win_amd64.whl.metadata (4.0 kB)
Collecting tenacity>=6.2.0 (from plotly>=5.0.0->dash>=2.6.0->open3d==0.16.0->-r requirements.txt (line 11))
  Downloading tenacity-8.2.3-py3-none-any.whl.metadata (1.0 kB)
Collecting markdown-it-py>=2.2.0 (from rich>=10.7.0->rich-click->cyclonedds->-r requirements.txt (line 13))
  Downloading markdown_it_py-3.0.0-py3-none-any.whl.metadata (6.9 kB)
Collecting MarkupSafe>=2.1.1 (from Werkzeug<3.1->dash>=2.6.0->open3d==0.16.0->-r requirements.txt (line 11))
  Downloading MarkupSafe-2.1.3-cp310-cp310-win_amd64.whl.metadata (3.1 kB)
Collecting zipp>=0.5 (from importlib-metadata->dash>=2.6.0->open3d==0.16.0->-r requirements.txt (line 11))
  Downloading zipp-3.17.0-py3-none-any.whl.metadata (3.7 kB)
Collecting platformdirs>=2.5 (from jupyter_core->nbformat==5.5.0->open3d==0.16.0->-r requirements.txt (line 11))
  Downloading platformdirs-4.1.0-py3-none-any.whl.metadata (11 kB)
Collecting pywin32>=300 (from jupyter_core->nbformat==5.5.0->open3d==0.16.0->-r requirements.txt (line 11))
  Downloading pywin32-306-cp310-cp310-win_amd64.whl (9.2 MB)
     ---------------------------------------- 9.2/9.2 MB 1.1 MB/s eta 0:00:00
Collecting parso<0.9.0,>=0.8.3 (from jedi>=0.16->ipython>=6.1.0->ipywidgets>=7.6.0->open3d==0.16.0->-r requirements.txt (line 11))
  Downloading parso-0.8.3-py2.py3-none-any.whl (100 kB)
     ---------------------------------------- 100.8/100.8 kB 580.4 kB/s eta 0:00:00
Collecting mdurl~=0.1 (from markdown-it-py>=2.2.0->rich>=10.7.0->rich-click->cyclonedds->-r requirements.txt (line 13))
  Downloading mdurl-0.1.2-py3-none-any.whl (10.0 kB)
Collecting wcwidth (from prompt-toolkit<3.1.0,>=3.0.41->ipython>=6.1.0->ipywidgets>=7.6.0->open3d==0.16.0->-r requirements.txt (line 11))
  Downloading wcwidth-0.2.12-py2.py3-none-any.whl.metadata (14 kB)
Collecting executing>=1.2.0 (from stack-data->ipython>=6.1.0->ipywidgets>=7.6.0->open3d==0.16.0->-r requirements.txt (line 11))
  Downloading executing-2.0.1-py2.py3-none-any.whl.metadata (9.0 kB)
Collecting asttokens>=2.1.0 (from stack-data->ipython>=6.1.0->ipywidgets>=7.6.0->open3d==0.16.0->-r requirements.txt (line 11))
  Downloading asttokens-2.4.1-py2.py3-none-any.whl.metadata (5.2 kB)
Collecting pure-eval (from stack-data->ipython>=6.1.0->ipywidgets>=7.6.0->open3d==0.16.0->-r requirements.txt (line 11))
  Downloading pure_eval-0.2.2-py3-none-any.whl (11 kB)
Downloading opencv_python-4.8.1.78-cp37-abi3-win_amd64.whl (38.1 MB)
   ---------------------------------------- 38.1/38.1 MB 1.2 MB/s eta 0:00:00
Downloading imageio-2.33.0-py3-none-any.whl (313 kB)
   ---------------------------------------- 313.3/313.3 kB 1.9 MB/s eta 0:00:00
Downloading matplotlib-3.8.2-cp310-cp310-win_amd64.whl (7.6 MB)
   ---------------------------------------- 7.6/7.6 MB 1.4 MB/s eta 0:00:00
Downloading kornia-0.7.0-py2.py3-none-any.whl (705 kB)
   ---------------------------------------- 705.7/705.7 kB 781.4 kB/s eta 0:00:00
Downloading natsort-8.4.0-py3-none-any.whl (38 kB)
Downloading PyYAML-6.0.1-cp310-cp310-win_amd64.whl (145 kB)
   ---------------------------------------- 145.3/145.3 kB 1.7 MB/s eta 0:00:00
Downloading wandb-0.16.1-py3-none-any.whl (2.1 MB)
   ---------------------------------------- 2.1/2.1 MB 1.5 MB/s eta 0:00:00
Downloading torchmetrics-1.2.1-py3-none-any.whl (806 kB)
   ---------------------------------------- 806.1/806.1 kB 2.0 MB/s eta 0:00:00
Downloading pytorch_msssim-1.0.0-py3-none-any.whl (7.7 kB)
Downloading contourpy-1.2.0-cp310-cp310-win_amd64.whl (186 kB)
   ---------------------------------------- 186.7/186.7 kB 2.3 MB/s eta 0:00:00
Downloading cycler-0.12.1-py3-none-any.whl (8.3 kB)
Downloading dash-2.14.2-py3-none-any.whl (10.2 MB)
   ---------------------------------------- 10.2/10.2 MB 1.3 MB/s eta 0:00:00
Downloading fonttools-4.46.0-cp310-cp310-win_amd64.whl (2.2 MB)
   ---------------------------------------- 2.2/2.2 MB 1.1 MB/s eta 0:00:00
Downloading GitPython-3.1.40-py3-none-any.whl (190 kB)
   ---------------------------------------- 190.6/190.6 kB 1.4 MB/s eta 0:00:00
Downloading ipywidgets-8.1.1-py3-none-any.whl (139 kB)
   ---------------------------------------- 139.4/139.4 kB 2.1 MB/s eta 0:00:00
Downloading kiwisolver-1.4.5-cp310-cp310-win_amd64.whl (56 kB)
   ---------------------------------------- 56.1/56.1 kB 1.4 MB/s eta 0:00:00
Downloading lightning_utilities-0.10.0-py3-none-any.whl (24 kB)
Downloading packaging-23.2-py3-none-any.whl (53 kB)
   ---------------------------------------- 53.0/53.0 kB 1.4 MB/s eta 0:00:00
Downloading protobuf-4.25.1-cp310-abi3-win_amd64.whl (413 kB)
   ---------------------------------------- 413.4/413.4 kB 1.8 MB/s eta 0:00:00
Downloading pyparsing-3.1.1-py3-none-any.whl (103 kB)
   ---------------------------------------- 103.1/103.1 kB 2.0 MB/s eta 0:00:00
Downloading scipy-1.11.4-cp310-cp310-win_amd64.whl (44.1 MB)
   ---------------------------------------- 44.1/44.1 MB 1.3 MB/s eta 0:00:00
Downloading sentry_sdk-1.38.0-py2.py3-none-any.whl (252 kB)
   ---------------------------------------- 252.8/252.8 kB 1.5 MB/s eta 0:00:00
Downloading ConfigArgParse-1.7-py3-none-any.whl (25 kB)
Downloading rich_click-1.7.2-py3-none-any.whl (32 kB)
Downloading setproctitle-1.3.3-cp310-cp310-win_amd64.whl (11 kB)
Downloading comm-0.2.0-py3-none-any.whl (7.0 kB)
Downloading flask-3.0.0-py3-none-any.whl (99 kB)
   ---------------------------------------- 99.7/99.7 kB 952.0 kB/s eta 0:00:00
Downloading click-8.1.7-py3-none-any.whl (97 kB)
   ---------------------------------------- 97.9/97.9 kB 1.9 MB/s eta 0:00:00
Downloading gitdb-4.0.11-py3-none-any.whl (62 kB)
   ---------------------------------------- 62.7/62.7 kB 3.5 MB/s eta 0:00:00
Downloading ipython-8.18.1-py3-none-any.whl (808 kB)
   ---------------------------------------- 808.2/808.2 kB 1.6 MB/s eta 0:00:00
Downloading jsonschema-4.20.0-py3-none-any.whl (84 kB)
   ---------------------------------------- 84.7/84.7 kB 1.6 MB/s eta 0:00:00
Downloading jupyterlab_widgets-3.0.9-py3-none-any.whl (214 kB)
   ---------------------------------------- 214.9/214.9 kB 1.3 MB/s eta 0:00:00
Downloading plotly-5.18.0-py3-none-any.whl (15.6 MB)
   ---------------------------------------- 15.6/15.6 MB 1.3 MB/s eta 0:00:00
Downloading rich-13.7.0-py3-none-any.whl (240 kB)
   ---------------------------------------- 240.6/240.6 kB 2.1 MB/s eta 0:00:00
Downloading traitlets-5.14.0-py3-none-any.whl (85 kB)
   ---------------------------------------- 85.2/85.2 kB 1.2 MB/s eta 0:00:00
Downloading typing_extensions-4.8.0-py3-none-any.whl (31 kB)
Downloading werkzeug-3.0.1-py3-none-any.whl (226 kB)
   ---------------------------------------- 226.7/226.7 kB 2.0 MB/s eta 0:00:00
Downloading widgetsnbextension-4.0.9-py3-none-any.whl (2.3 MB)
   ---------------------------------------- 2.3/2.3 MB 1.3 MB/s eta 0:00:00
Downloading fastjsonschema-2.19.0-py3-none-any.whl (23 kB)
Downloading importlib_metadata-7.0.0-py3-none-any.whl (23 kB)
Downloading jupyter_core-5.5.0-py3-none-any.whl (28 kB)
Downloading nest_asyncio-1.5.8-py3-none-any.whl (5.3 kB)
Downloading blinker-1.7.0-py3-none-any.whl (13 kB)
Downloading jedi-0.19.1-py2.py3-none-any.whl (1.6 MB)
   ---------------------------------------- 1.6/1.6 MB 1.6 MB/s eta 0:00:00
Downloading jsonschema_specifications-2023.11.2-py3-none-any.whl (17 kB)
Downloading markdown_it_py-3.0.0-py3-none-any.whl (87 kB)
   ---------------------------------------- 87.5/87.5 kB 2.5 MB/s eta 0:00:00
Downloading MarkupSafe-2.1.3-cp310-cp310-win_amd64.whl (17 kB)
Downloading platformdirs-4.1.0-py3-none-any.whl (17 kB)
Downloading prompt_toolkit-3.0.41-py3-none-any.whl (385 kB)
   ---------------------------------------- 385.5/385.5 kB 1.2 MB/s eta 0:00:00
Downloading pygments-2.17.2-py3-none-any.whl (1.2 MB)
   ---------------------------------------- 1.2/1.2 MB 859.9 kB/s eta 0:00:00
Downloading referencing-0.32.0-py3-none-any.whl (26 kB)
Downloading rpds_py-0.13.2-cp310-none-win_amd64.whl (188 kB)
   ---------------------------------------- 188.9/188.9 kB 1.4 MB/s eta 0:00:00
Downloading smmap-5.0.1-py3-none-any.whl (24 kB)
Downloading tenacity-8.2.3-py3-none-any.whl (24 kB)
Downloading zipp-3.17.0-py3-none-any.whl (7.4 kB)
Downloading exceptiongroup-1.2.0-py3-none-any.whl (16 kB)
Downloading stack_data-0.6.3-py3-none-any.whl (24 kB)
Downloading asttokens-2.4.1-py2.py3-none-any.whl (27 kB)
Downloading executing-2.0.1-py2.py3-none-any.whl (24 kB)
Downloading wcwidth-0.2.12-py2.py3-none-any.whl (34 kB)
Building wheels for collected packages: diff-gaussian-rasterization
  Building wheel for diff-gaussian-rasterization (setup.py) ... done
  Created wheel for diff-gaussian-rasterization: filename=diff_gaussian_rasterization-0.0.0-cp310-cp310-win_amd64.whl size=323784 sha256=c60080785f3de0f801684ae41ddb0359833d3c1eb37cb67b304e48b99497a16f
  Stored in directory: c:\users\nirmal\appdata\local\pip\cache\wheels\7f\06\a4\a99846e65c144ce0493fdc05cecd76e841ecadd5385d3db393
Successfully built diff-gaussian-rasterization
Installing collected packages: wcwidth, pywin32, pure-eval, fastjsonschema, diff-gaussian-rasterization, dash-table, dash-html-components, dash-core-components, appdirs, zipp, widgetsnbextension, typing-extensions, traitlets, tenacity, smmap, six, setproctitle, sentry-sdk, scipy, rpds-py, pyyaml, pyparsing, pygments, protobuf, prompt-toolkit, platformdirs, parso, packaging, opencv-python, nest-asyncio, natsort, mdurl, MarkupSafe, kiwisolver, jupyterlab-widgets, itsdangerous, imageio, fonttools, executing, exceptiongroup, decorator, cycler, contourpy, configargparse, colorama, blinker, attrs, ansi2html, Werkzeug, tqdm, retrying, referencing, python-dateutil, plotly, matplotlib-inline, markdown-it-py, lightning-utilities, jupyter_core, Jinja2, jedi, importlib-metadata, gitdb, docker-pycreds, comm, Click, asttokens, torchmetrics, stack-data, rich, pytorch-msssim, matplotlib, kornia, jsonschema-specifications, GitPython, Flask, wandb, rich-click, lpips, jsonschema, ipython, dash, nbformat, ipywidgets, cyclonedds, open3d
  Attempting uninstall: typing-extensions
    Found existing installation: typing-extensions 3.10.0.2
    Uninstalling typing-extensions-3.10.0.2:
      Successfully uninstalled typing-extensions-3.10.0.2
  Attempting uninstall: Click
    Found existing installation: click 7.1.2
    Uninstalling click-7.1.2:
      Successfully uninstalled click-7.1.2
  Attempting uninstall: rich
    Found existing installation: rich 9.11.1
    Uninstalling rich-9.11.1:
      Successfully uninstalled rich-9.11.1
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
tea-console 0.0.6 requires rich~=9.11.0, but you have rich 13.7.0 which is incompatible.
typer 0.3.2 requires click<7.2.0,>=7.1.1, but you have click 8.1.7 which is incompatible.
Successfully installed Click-8.1.7 Flask-3.0.0 GitPython-3.1.40 Jinja2-3.1.2 MarkupSafe-2.1.3 Werkzeug-3.0.1 ansi2html-1.8.0 appdirs-1.4.4 asttokens-2.4.1 attrs-23.1.0 blinker-1.7.0 colorama-0.4.6 comm-0.2.0 configargparse-1.7 contourpy-1.2.0 cycler-0.12.1 cyclonedds-0.10.2 dash-2.14.2 dash-core-components-2.0.0 dash-html-components-2.0.0 dash-table-5.0.0 decorator-5.1.1 diff-gaussian-rasterization-0.0.0 docker-pycreds-0.4.0 exceptiongroup-1.2.0 executing-2.0.1 fastjsonschema-2.19.0 fonttools-4.46.0 gitdb-4.0.11 imageio-2.33.0 importlib-metadata-7.0.0 ipython-8.18.1 ipywidgets-8.1.1 itsdangerous-2.1.2 jedi-0.19.1 jsonschema-4.20.0 jsonschema-specifications-2023.11.2 jupyter_core-5.5.0 jupyterlab-widgets-3.0.9 kiwisolver-1.4.5 kornia-0.7.0 lightning-utilities-0.10.0 lpips-0.1.4 markdown-it-py-3.0.0 matplotlib-3.8.2 matplotlib-inline-0.1.6 mdurl-0.1.2 natsort-8.4.0 nbformat-5.5.0 nest-asyncio-1.5.8 open3d-0.16.0 opencv-python-4.8.1.78 packaging-23.2 parso-0.8.3 platformdirs-4.1.0 plotly-5.18.0 prompt-toolkit-3.0.41 protobuf-4.25.1 pure-eval-0.2.2 pygments-2.17.2 pyparsing-3.1.1 python-dateutil-2.8.2 pytorch-msssim-1.0.0 pywin32-306 pyyaml-6.0.1 referencing-0.32.0 retrying-1.3.4 rich-13.7.0 rich-click-1.7.2 rpds-py-0.13.2 scipy-1.11.4 sentry-sdk-1.38.0 setproctitle-1.3.3 six-1.16.0 smmap-5.0.1 stack-data-0.6.3 tenacity-8.2.3 torchmetrics-1.2.1 tqdm-4.65.0 traitlets-5.14.0 typing-extensions-4.8.0 wandb-0.16.1 wcwidth-0.2.12 widgetsnbextension-4.0.9 zipp-3.17.0

RealSense Demo

Hey, can you please provide a demo of how to use it with an Intel RealSense D435i camera?
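Not an official demo, but a hedged sketch of the capture side: pyrealsense2 (Intel's SDK wrapper, not a SplaTAM dependency) can save aligned RGB-D frames into the rgb/N.png + depth/N.png layout seen elsewhere on this page. Camera intrinsics and poses (transforms.json) would still be needed for the dataset loader.

# Sketch: grab aligned RGB-D frames from a RealSense D435i and save as PNGs.
import os
import numpy as np
import cv2
import pyrealsense2 as rs

os.makedirs("rgb", exist_ok=True)
os.makedirs("depth", exist_ok=True)

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
pipeline.start(config)
align = rs.align(rs.stream.color)  # align depth to the color camera

try:
    for i in range(10):  # illustrative frame count
        frames = align.process(pipeline.wait_for_frames())
        color = np.asanyarray(frames.get_color_frame().get_data())
        depth = np.asanyarray(frames.get_depth_frame().get_data())  # uint16 depth units
        cv2.imwrite(f"rgb/{i}.png", color)
        cv2.imwrite(f"depth/{i}.png", depth)
finally:
    pipeline.stop()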

Tips for taking frames with iPhone

Hi, thank you for your support.

Are there any tips for taking frames with an iPhone (overlap, angles, etc.)?
I ran your work in online mode with an iPhone 13 Pro, but the result is not good enough.

Thank you

Regarding depth backpropagation

Hi,
First of all, thanks a lot for providing this repository open source!

My question should be fairly easy to answer: in the paper (and code) you use depth supervision during mapping and tracking, and I have confirmed that this works as intended. However, the submodule library https://github.com/JonathonLuiten/diff-gaussian-rasterization-w-depth/tree/cb65e4b86bc3bd8ed42174b72a62e8d3a3a71110 explicitly says "Note that the backward pass for the depth has not been implemented, so it won't work for training with depth ground-truth". This makes me confused as to why the code still seems to implement depth supervision.

Would be very grateful for insights on this!

Cheers,
Erik
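A hedged reading based only on the code quoted earlier on this page: get_depth_and_silhouette packs depth into channel 0 of colors_precomp, so depth is rendered through the ordinary color path and its gradient flows through the color backward pass. The submodule's warning would then apply to the rasterizer's separate depth output, not to this depth-as-color route; this is an inference, not an author confirmation.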

Official 3DGS based Densification for SLAM

I really appreciate your wonderful work! I see that you have written a densify function (the densification method of vanilla 3D Gaussian Splatting) in utils/slam_external.py. Have you tested this function instead of the depth-based add_new_gaussians? How does it work? I would appreciate a reply.
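For readers of this thread, a simplified sketch of the vanilla-3DGS selection rule that such a densify function implements, using the densify_dict values from the config printed elsewhere on this page (grad_thresh=0.0002, num_to_split_into=2); percent_dense and all tensor shapes are illustrative, and this is not the repo's exact code:

# Sketch: gradient-thresholded densification selection (vanilla 3DGS style).
import torch

def densify_selection(log_scales, grad_norm, grad_thresh=0.0002,
                      scene_extent=1.0, percent_dense=0.01):
    big_grad = grad_norm >= grad_thresh                  # under-fit regions
    max_scale = torch.exp(log_scales).max(dim=1).values
    clone_mask = big_grad & (max_scale <= percent_dense * scene_extent)  # small: clone
    split_mask = big_grad & (max_scale >  percent_dense * scene_extent)  # large: split
    return clone_mask, split_mask

n = 1000
clone, split = densify_selection(torch.randn(n, 1).repeat(1, 3),
                                 torch.rand(n) * 1e-3)
# Clones are duplicated in place; split Gaussians are replaced by
# num_to_split_into=2 children sampled inside the parent with reduced scale.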

NeRFCapture Connection Issues

Regarding App Connection: I've found that some networks don't allow the two devices (phone & PC) to connect (for example, CMU's WiFi). However, using a router with CMU's internet worked. Also, I've found that it's good to clean out the app from the background when connecting it to the Python script.

I've seen unexplained connection issues with the NeRFCapture App. (jc211/NeRFCapture#11)

We will try to release a better app and script to replace this soon, for better support and to enable video streaming.

gradslam_dataconfig.yaml

Hi, I want to express my appreciation for the exceptional work you've done. It's truly commendable.

I have a query regarding the availability of a configuration file. Could you kindly provide the configuration file or guide me on how to access it?

Thank you for your time and assistance.

/home/airlab/nkeetha/4d/data/Replica/gradslam_dataconfig.yaml

Get sequences 8b5caf3398 and b20a261fdf

How did you obtain the two data sequences 8b5caf3398 and b20a261fdf? In ScanNet v2 I only see names like scene0000_00, and I haven't found sequences in the 8b5caf3398 or b20a261fdf format. Could you please tell me what I might have overlooked or gotten wrong? Thanks!

Could not build wheels for diff-gaussian-rasterization on Windows

I have been encountering the 'Could not build wheels for diff-gaussian-rasterization' issue. I tried:

  • Installing in the normal order.
  • Installing according to environment.yml.

The result was the same in both cases.

So, I attempted to install directly from https://github.com/JonathonLuiten/diff-gaussian-rasterization-w-depth using setup.py. Here is the error message I encountered:

E:\anaconda3\envs\splatam\lib\site-packages\torch\include\pybind11\cast.h(1429): error: too few arguments for template template parameter 'Tuple'
detected during the instantiation of class 'pybind11::detail::tuple_caster<Tuple, Ts...> [with Tuple=std::pair, Ts=<T1, T2>]'
(1507): here

E:\anaconda3\envs\splatam\lib\site-packages\torch\include\pybind11\cast.h(1503): error: too few arguments for template template parameter 'Tuple'
detected during the instantiation of class 'pybind11::detail::tuple_caster<Tuple, Ts...> [with Tuple=std::pair, Ts=<T1, T2>]'
(1507): here

2 errors detected in the compilation of 'rasterize_points.cu'.
rasterize_points.cu
error: command 'E:\anaconda3\envs\splatam\bin\nvcc.exe' failed with exit code 1

How can I resolve this issue?

PLY Output Format & Isotropic Gaussians

Thanks for sharing the great project! I have run the code for the iPhone demo and it worked smoothly.

However, I have two questions regarding the output format:

  1. The current output of the project is a zipped numpy file of the combined 3DGS parameters and trajectory, params.npz, which is not directly compatible with the .ply files used in many Gaussian splatting works.

For this, I wrote the conversion script below, which produces a gsplat.ply file compatible with most gsplat applications. Could you consider adding this kind of code to your project?

import numpy as np
from plyfile import PlyElement, PlyData # Requires plyfile==0.8.1


def construct_list_of_attributes(f_dc, scale, rotation):
    l = ['x', 'y', 'z', 'nx', 'ny', 'nz']
    for i in range(f_dc.shape[1]):
        l.append('f_dc_{}'.format(i))
    l.append('opacity')
    for i in range(scale.shape[1]):
        l.append('scale_{}'.format(i))
    for i in range(rotation.shape[1]):
        l.append('rot_{}'.format(i))
    return l


def convert(src, dest):
    params = np.load(src)

    xyz = params['means3D']
    normals = np.zeros_like(xyz)  # placeholder normals expected by the 3DGS PLY layout
    f_dc = params['rgb_colors']
    # f_rest = np.zeros_like(f_dc)  # higher-order SH coefficients are not used by SplaTAM
    opacities = params['logit_opacities']
    scale = params['log_scales'].repeat(3, axis=-1)  # isotropic log-scale tiled to 3 axes
    rotation = params['unnorm_rotations']

    dtype_full = [(attribute, 'f4') for attribute in construct_list_of_attributes(f_dc, scale, rotation)]

    elements = np.empty(xyz.shape[0], dtype=dtype_full)
    attributes = np.concatenate((xyz, normals, f_dc, opacities, scale, rotation), axis=1)
    elements[:] = list(map(tuple, attributes))
    el = PlyElement.describe(elements, 'vertex')
    PlyData([el]).write(dest)


if __name__ == '__main__':
    src = 'params.npz'
    dest = 'gsplat.ply'
    convert(src, dest)
  2. When I opened params.npz, it has isotropic scaling for each Gaussian. Can I get anisotropic Gaussians instead? Does isotropic scaling lead to worse results?
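One extra caveat on the conversion script above (my assumption, not verified against every viewer): some 3DGS viewers interpret f_dc as spherical-harmonic DC coefficients rather than raw RGB, so colors may look washed out. If so, converting before writing might help:

# Hedged: map raw RGB in [0, 1] to the SH DC coefficients some viewers expect.
SH_C0 = 0.28209479177387814  # 1 / (2 * sqrt(pi))
f_dc = (params['rgb_colors'] - 0.5) / SH_C0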

Render RGB and depth in one shot

Hi, I noticed the current code renders RGB and depth separately using different colors_precomp. I am curious whether it is possible to render them in just one shot. If so, what modifications would be needed, e.g. to the forward and backward passes? Thanks for your help.

slow evaluation speed

Hi, I find that training is very fast, but it seems to get stuck during the final evaluation step. Could you please tell me why? Thanks!
Evaluating Final Parameters ...
7%|██▋ | 139/2000 [42:48<7:12:19, 13.94s/it]

Confusion about 'transformed_params2depthplussilhouette' function

Thank you for sharing the code. I am confused about the transformed_params2depthplussilhouette function here.

def transformed_params2depthplussilhouette(params, w2c, transformed_pts):
    rendervar = {
        'means3D': transformed_pts,
        'colors_precomp': get_depth_and_silhouette(transformed_pts, w2c),
        'rotations': F.normalize(params['unnorm_rotations']),
        'opacities': torch.sigmoid(params['logit_opacities']),
        'scales': torch.exp(torch.tile(params['log_scales'], (1, 3))),
        'means2D': torch.zeros_like(params['means3D'], requires_grad=True, device="cuda") + 0
    }
    return rendervar

Since in the transform_to_frame function the transformed_pts are points already transformed into the current camera frame, why do we need to transform them again in get_depth_and_silhouette(transformed_pts, w2c)? Shouldn't we just use the transformed_pts to compute the depth?
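For context, my reading of what get_depth_and_silhouette computes (a paraphrase, not the verbatim function):

# Hedged paraphrase: the function packs per-Gaussian depth, a constant 1
# (which alpha-blends into the silhouette), and depth^2 into the three
# "color" channels that the rasterizer accumulates.
import torch

def get_depth_and_silhouette_sketch(pts_3D, w2c):
    ones = torch.ones(pts_3D.shape[0], 1, device=pts_3D.device)
    pts4 = torch.cat([pts_3D, ones], dim=1)  # homogeneous coordinates
    depth_z = (pts4 @ w2c.T)[:, 2:3]         # z after the w2c transform
    return torch.cat([depth_z, ones, depth_z ** 2], dim=1)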

Post SplaTAM Optimization & Gaussian Splatting for iPhone Data

After reconstructing an iPhone demo using both the online and offline versions, I have not yet managed to get

python scripts/post_splatam_opt.py configs/iphone/post_splatam_opt.py

OR

python scripts/gaussian_splatting.py configs/iphone/gaussian_splatting.py

working.

For post_splatam_opt

I get the following error

Loading Params
dict_keys(['means3D', 'rgb_colors', 'unnorm_rotations', 'logit_opacities', 'log_scales', 'cam_unnorm_rots', 'cam_trans', 'timestep', 'keyframe_time_indices'])
/home/pablo/0Dev/repos/SplaTAM/scripts/post_splatam_opt.py:204: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
  variables['timestep'] = torch.tensor(params['timestep']).cuda().float()
Traceback (most recent call last):
  File "/home/pablo/0Dev/repos/SplaTAM/scripts/post_splatam_opt.py", line 596, in <module>
    rgbd_slam(experiment.config)
  File "/home/pablo/0Dev/repos/SplaTAM/scripts/post_splatam_opt.py", line 420, in rgbd_slam
    params, variables, optimizer, intrinsics, w2c, cam = initialize_first_timestep_from_ckpt(ckpt_path,mapping_dataset, num_frames,
  File "/home/pablo/0Dev/repos/SplaTAM/scripts/post_splatam_opt.py", line 206, in initialize_first_timestep_from_ckpt
    optimizer = initialize_optimizer(params, lrs_dict)
  File "/home/pablo/0Dev/repos/SplaTAM/scripts/post_splatam_opt.py", line 164, in initialize_optimizer
    param_groups = [{'params': [v], 'name': k, 'lr': lrs[k]} for k, v in params.items()]
  File "/home/pablo/0Dev/repos/SplaTAM/scripts/post_splatam_opt.py", line 164, in <listcomp>
    param_groups = [{'params': [v], 'name': k, 'lr': lrs[k]} for k, v in params.items()]
KeyError: 'keyframe_time_indices'

For Gaussian Splatting

I get

Loading Dataset ...
Traceback (most recent call last):
  File "/home/pablo/0Dev/repos/SplaTAM/scripts/gaussian_splatting.py", line 626, in <module>
    rgbd_slam(experiment.config)
  File "/home/pablo/0Dev/repos/SplaTAM/scripts/gaussian_splatting.py", line 373, in rgbd_slam
    desired_height=dataset_config["desired_image_height_init"],
KeyError: 'desired_image_height_init'

I have tried replacing desired_image_height_init with just desired_image_height, but there seem to be a number of incorrect keys inside the config for gaussian_splatting.py, and it's not clear that this version works.

Memory cost for training

Hello, I appreciate your outstanding work. I would like to ask about the GPU memory requirements for training/SLAM. Specifically, how much memory is needed to run the experiments on the Replica dataset? Thank you!

docker support

Hello,
Thank you for the nice work.
I was wondering if you have any docker support for this project.
conda env create takes toooooo much time...

Thank you in advance for your response :)

Query about Pose Forward Propagation

Hi, great work! A question about camera pose initialization: adding or subtracting quaternions does not yield the relative rotation between poses; wouldn't multiplying quaternions (or rotation matrices) be the proper way? (A multiplicative sketch follows the snippet below.)

In the initialize_camera_pose function, line 420 of splatam.py:

prev_rot1 = F.normalize(params['cam_unnorm_rots'][..., curr_time_idx-1].detach())
prev_rot2 = F.normalize(params['cam_unnorm_rots'][..., curr_time_idx-2].detach())
new_rot = F.normalize(prev_rot1 + (prev_rot1 - prev_rot2))
params['cam_unnorm_rots'][..., curr_time_idx] = new_rot.detach()
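What I have in mind as the multiplicative alternative, roughly (a sketch assuming (w, x, y, z) quaternion ordering; untested against the repo):

import torch
import torch.nn.functional as F

def quat_conj(q):
    # conjugate = inverse for unit quaternions, (w, x, y, z) convention
    return q * torch.tensor([1., -1., -1., -1.], device=q.device)

def quat_mul(q, r):
    # Hamilton product of two (w, x, y, z) quaternions
    w1, x1, y1, z1 = q.unbind(-1)
    w2, x2, y2, z2 = r.unbind(-1)
    return torch.stack([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ], dim=-1)

# prev_rot1, prev_rot2 as in the snippet above (normalized unit quaternions):
rel = quat_mul(prev_rot1, quat_conj(prev_rot2))    # rotation from t-2 to t-1
new_rot = F.normalize(quat_mul(rel, prev_rot1), dim=-1)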

wandb error

When I was running "python scripts/splatam.py configs/tum/splatam.py", I encountered the following error.
Loaded Config:
{'workdir': './experiments/TUM', 'run_name': 'freiburg1_desk_seed0', 'seed': 0, 'primary_device': 'cuda:0', 'map_every': 1, 'keyframe_every': 5, 'mapping_window_size': 20, 'report_global_progress_every': 500, 'eval_every': 500, 'scene_radius_depth_ratio': 2, 'mean_sq_dist_method': 'projective', 'report_iter_progress': False, 'load_checkpoint': False, 'checkpoint_time_idx': 0, 'save_checkpoints': False, 'checkpoint_interval': 100, 'use_wandb': True, 'wandb': {'entity': 'theairlab', 'project': 'SplaTAM', 'group': 'TUM', 'name': 'freiburg1_desk_seed0', 'save_qual': False, 'eval_save_qual': True}, 'data': {'basedir': './data/TUM_RGBD', 'gradslam_data_cfg': './configs/data/TUM/freiburg1_desk.yaml', 'sequence': 'rgbd_dataset_freiburg1_desk', 'desired_image_height': 480, 'desired_image_width': 640, 'start': 0, 'end': -1, 'stride': 1, 'num_frames': -1}, 'tracking': {'use_gt_poses': False, 'forward_prop': True, 'num_iters': 200, 'use_sil_for_loss': True, 'sil_thres': 0.99, 'use_l1': True, 'ignore_outlier_depth_loss': False, 'use_uncertainty_for_loss_mask': False, 'use_uncertainty_for_loss': False, 'use_chamfer': False, 'loss_weights': {'im': 0.5, 'depth': 1.0}, 'lrs': {'means3D': 0.0, 'rgb_colors': 0.0, 'unnorm_rotations': 0.0, 'logit_opacities': 0.0, 'log_scales': 0.0, 'cam_unnorm_rots': 0.002, 'cam_trans': 0.002}, 'use_depth_loss_thres': False, 'depth_loss_thres': 100000, 'visualize_tracking_loss': False}, 'mapping': {'num_iters': 30, 'add_new_gaussians': True, 'sil_thres': 0.5, 'use_l1': True, 'use_sil_for_loss': False, 'ignore_outlier_depth_loss': False, 'use_uncertainty_for_loss_mask': False, 'use_uncertainty_for_loss': False, 'use_chamfer': False, 'loss_weights': {'im': 0.5, 'depth': 1.0}, 'lrs': {'means3D': 0.0001, 'rgb_colors': 0.0025, 'unnorm_rotations': 0.001, 'logit_opacities': 0.05, 'log_scales': 0.001, 'cam_unnorm_rots': 0.0, 'cam_trans': 0.0}, 'prune_gaussians': True, 'pruning_dict': {'start_after': 0, 'remove_big_after': 0, 'stop_after': 20, 'prune_every': 20, 'removal_opacity_threshold': 0.005, 'final_removal_opacity_threshold': 0.005, 'reset_opacities': False, 'reset_opacities_every': 500}, 'use_gaussian_splatting_densification': False, 'densify_dict': {'start_after': 500, 'remove_big_after': 3000, 'stop_after': 5000, 'densify_every': 100, 'grad_thresh': 0.0002, 'num_to_split_into': 2, 'removal_opacity_threshold': 0.005, 'final_removal_opacity_threshold': 0.005, 'reset_opacities_every': 3000}}, 'viz': {'render_mode': 'color', 'offset_first_viz_cam': True, 'show_sil': False, 'visualize_cams': True, 'viz_w': 600, 'viz_h': 340, 'viz_near': 0.01, 'viz_far': 100.0, 'view_scale': 2, 'viz_fps': 5, 'enter_interactive_post_online': False}}
wandb: (1) Create a W&B account
wandb: (2) Use an existing W&B account
wandb: (3) Don't visualize my results
wandb: Enter your choice: 2
wandb: You chose 'Use an existing W&B account'
wandb: Logging into wandb.ai. (Learn how to deploy a W&B server locally: https://wandb.me/wandb-server)
wandb: You can find your API key in your browser here: https://wandb.ai/authorize
wandb: Paste an API key from your profile and hit enter, or press ctrl+c to quit:
wandb: Appending key for api.wandb.ai to your netrc file: /home/zrh/.netrc
wandb: ERROR Error while calling W&B API: project not found (<Response [404]>)
Problem at: /home/zrh/project/SplaTAM/scripts/splatam.py 472 rgbd_slam
wandb: ERROR It appears that you do not have permission to access the requested resource. Please reach out to the project owner to grant you access. If you have the correct permissions, verify that there are no issues with your networking setup.(Error 404: Not Found)
Traceback (most recent call last):
File "/home/zrh/project/SplaTAM/scripts/splatam.py", line 1007, in
rgbd_slam(experiment.config)
File "/home/zrh/project/SplaTAM/scripts/splatam.py", line 472, in rgbd_slam
wandb_run = wandb.init(project=config['wandb']['project'],
File "/home/zrh/source/anaconda3/envs/test/lib/python3.10/site-packages/wandb/sdk/wandb_init.py", line 1189, in init
raise e
File "/home/zrh/source/anaconda3/envs/test/lib/python3.10/site-packages/wandb/sdk/wandb_init.py", line 1170, in init
run = wi.init()
File "/home/zrh/source/anaconda3/envs/test/lib/python3.10/site-packages/wandb/sdk/wandb_init.py", line 781, in init
raise error
wandb.errors.CommError: It appears that you do not have permission to access the requested resource. Please reach out to the project owner to grant you access. If you have the correct permissions, verify that there are no issues with your networking setup.(Error 404: Not Found)
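For anyone hitting this: the shipped config logs to the authors' wandb entity ('theairlab' in the dump above), which other accounts cannot access. A hedged sketch of the relevant config keys (names taken from the dump above):

# In configs/tum/splatam.py:
use_wandb=False,  # simplest fix: disable wandb logging entirely
# or keep logging but point it at an entity/project you own:
# wandb=dict(entity='<your-wandb-username>', project='SplaTAM', group='TUM',
#            name='freiburg1_desk_seed0', save_qual=False, eval_save_qual=True),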

Queries regarding Paper Equations

I have some questions about your paper:

  1. In the "Method" section, µ represents the center position and has 3 parameters (presumably x, y, z). When projecting a Gaussian into 2D pixel space, E_t and µ are multiplied together. Does this mean E_t is a 3 x 3 matrix containing rotation and translation? Or is µ transformed to homogeneous coordinates? (See the sketch after this list.)

  2. Did you not use Spherical Harmonics (SH) for color representation? (Was optimization done using just the 3 RGB channels?)

  3. Is the Gaussian always isotropic because the radius r is represented by only one channel?
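For question 1, my current reading of the projection written out (an assumption on my part, not a quote from the paper):

\mu^{2D} = K \, \frac{E_t \, \tilde{\mu}}{d}, \qquad \tilde{\mu} = (\mu_x, \mu_y, \mu_z, 1)^{\top}, \qquad d = \big(E_t \, \tilde{\mu}\big)_z

i.e. E_t would be the 3 x 4 world-to-camera matrix [R | t], \mu is lifted to homogeneous coordinates \tilde{\mu}, and the perspective division uses the camera-frame depth d.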

Query regarding Ground Truth & 3D Convention

Thank you and your team for the excellent code.

  1. How is the ground truth obtained for your dataset?

  2. The generated point cloud is expressed in what coordinate frame?

Looking forward to your reply!

Custom Dataset (Duplicate)

I am a beginner. I can get results on the Replica and TUM-RGBD datasets, but when I created a new dataset with a D435, I couldn't run SplaTAM; I ran into many config errors. Replica and TUM-RGBD have different configs, so I didn't know which one to refer to.
Has anyone gotten a result using their own dataset? Please share a tutorial, thanks!!!

RealSense Support

Hi, I wonder how to use my own RealSense data in this project. I use your RealSense dataloader in datasets/gradslam_datasets/realsense.py. There are 1777 frames in total, and each frame's resolution is 1280*720. The config file is attached below. I initialize each camera pose (one per frame) with P = torch.tensor([[1, 0, 0, 0], [0, -1, 0, 0], [0, 0, -1, 0], [0, 0, 0, 1]]).float().

After I execute `python3 scripts/splatam.py configs/realsense/splatam.py`, it runs far slower than on the Replica dataset, and after a while it errors out with CUDA out of memory. However, I use a Tesla V100, which has 32 GB of memory. Is that not enough? I would also like it to run as fast as on the Replica dataset. What can I do? Thank you! Here is the configs/realsense/splatam.py file: [splatam.zip](https://github.com/spla-tam/SplaTAM/files/13611133/splatam.zip)
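A hedged guess at mitigations (config keys as they appear in the other dataset configs in this repo; exact values are illustrative): downscaling the 1280*720 frames and subsampling the sequence should reduce both memory use and runtime.

# Sketch for the data block of configs/realsense/splatam.py:
data = dict(
    desired_image_height=360,  # downscale from 720
    desired_image_width=640,   # downscale from 1280
    stride=2,                  # process every 2nd frame
)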

How to obtain Mesh or OBJ file?

Hi, I am very excited about your work. I have used your algorithm to reconstruct my scene and obtained the params.npz file. Now I want to use the reconstructed scene model in our robot simulation project, which requires the OBJ format for importing models. Could you please tell me how to convert the params.npz file to an OBJ file? It is just a point cloud; how do I convert the point cloud to a mesh? Thanks very much!
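One rough way to do this (a sketch, not part of SplaTAM): Poisson surface reconstruction with Open3D, which the environment already installs. Note that Gaussians are not a surface representation, so the mesh is only an approximation:

import numpy as np
import open3d as o3d

# Load the Gaussian centers and colors saved by SplaTAM.
params = np.load('params.npz')
xyz = np.asarray(params['means3D'], dtype=np.float64)
rgb = np.clip(np.asarray(params['rgb_colors'], dtype=np.float64), 0.0, 1.0)

pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(xyz)
pcd.colors = o3d.utility.Vector3dVector(rgb)

# Poisson reconstruction needs oriented normals.
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30))

mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)
o3d.io.write_triangle_mesh('scene.obj', mesh)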

release time for SplaTAM V2

Hi, thanks for your awesome work! I am wondering about the release time of SplaTAM V2 and the difference between SplaTAM V2 and SplaTAM V1. Thanks!

open3d 0.17.0 visualization problem

I found that when I use open3d==0.17.0, the visualization scripts seem to return wrong results; the point cloud is not complete.
I wonder why. Have you tried this before?

Setting up conda environment takes forever

First of all, thank you for this fantastic work! I would really like to test it with my own data, but it takes forever to set up the environment on my workbench.

This is the detailed setting of my workbench:
Platform: GCP
Hardware: 1 NVIDIA L4 GPU, 4 vCPUs, 16GB RAM
CUDA: 11.3 or 11.8

I noticed that environment.yml says the code requires 'cudatoolkit=11.6'. I wonder if this is why it gets stuck setting up the environment.

Any suggestion is appreciated!!

Windows Support

Do you know if I can run this on Windows, or do I need WSL? I tried to run the demo using Git Bash, but unfortunately it isn't working.

Thanks for your great work!

Has anyone come up with a result using a D435?

When I created a new dataset with a D435, I couldn't run SplaTAM; I ran into many config errors.
Has anyone gotten a result using their own dataset? Please share a tutorial, thanks!!!

Git Clone Issue on Windows

Hello

I think I found an issue related to the submodules in this repository.

Issue:

  • git clone with the --recursive option fails.

Possible fix:

  • I think the issue is caused by the submodule being configured over SSH. It is solved when I change the repository URL to HTTPS, by editing .gitmodules and replacing the line:
    url = git@github.com:JonathonLuiten/diff-gaussian-rasterization-w-depth.git
    with:
    url = https://github.com/JonathonLuiten/diff-gaussian-rasterization-w-depth.git

Then I reload the submodules using: git submodule update --init --recursive

But even so, I still have issues with the submodules during conda env install. 😑

KeyError: 'ignore_outlier_depth_loss'

When running with the TUM dataset, it shows me an error like this.

-Command

python scripts/splatam.py configs/tum/splatam.py

-Error
[Screenshot of the error]

Maybe some keywords are missing in configs/tum/splatam.py.

Sensor Compatibility & iPhone (ToF) Depth

Hi Authors,

Awesome paper and algorithm! I was just wondering whether the algorithm can run with an RGB camera plus a ToF sensor. The Readme says it is compatible with the new iPhones, which have a ToF sensor, but I don't have an iPhone and I want to run it on a drone with a ToF sensor. What changes would I need to make?

cyclonedds.core.DDSException

I am running the script bash bash_scripts/nerfcapture2dataset.bash configs/iphone/dataset.py, but got the following error. What is the reason? By the way, my iPhone is an iPhone 13; can I use it to run this script?
sysctl: unknown oid 'net.core.rmem_max'
sysctl: unknown oid 'net.core.wmem_max'
1703056843.335096 [0] 42048241: config: //CycloneDDS/Domain/Internal/MinimumSocketReceiveBufferSize: setting moved to //CycloneDDS/Domain/Internal/SocketReceiveBufferSize[@min] (CYCLONEDDS_URI+0 line 1)
1703056843.335359 [0] 42048241: config: Domain/General/MulticastRecvNetworkInterfaceAddresses/#text: preferred {}
1703056843.335367 [0] 42048241: config: Domain/General/ExternalNetworkAddress/#text: auto {}
1703056843.335369 [0] 42048241: config: Domain/General/ExternalNetworkMask/#text: 0.0.0.0 {}
1703056843.335371 [0] 42048241: config: Domain/General/AllowMulticast/#text: default {}
1703056843.335372 [0] 42048241: config: Domain/General/MulticastTimeToLive/#text: 32 {}
1703056843.335374 [0] 42048241: config: Domain/General/DontRoute/#text: false {}
1703056843.335376 [0] 42048241: config: Domain/General/Transport/#text: udp {}
1703056843.335377 [0] 42048241: config: Domain/General/EnableMulticastLoopback/#text: true {}
1703056843.335379 [0] 42048241: config: Domain/General/MaxMessageSize/#text: 14720 B {}
1703056843.335380 [0] 42048241: config: Domain/General/MaxRexmitMessageSize/#text: 1456 B {}
1703056843.335381 [0] 42048241: config: Domain/General/FragmentSize/#text: 1344 B {}
1703056843.335383 [0] 42048241: config: Domain/General/RedundantNetworking/#text: false {}
1703056843.335384 [0] 42048241: config: Domain/General/EntityAutoNaming/#text: empty {}
1703056843.335386 [0] 42048241: config: Domain/General/EntityAutoNaming[@seed]: 981323711 2552571552 3736831470 568168365 1551115897 2539600070 2530733332 1219068320 {}
1703056843.335389 [0] 42048241: config: Domain/Sizing/ReceiveBufferSize/#text: 1 MiB {}
1703056843.335390 [0] 42048241: config: Domain/Sizing/ReceiveBufferChunkSize/#text: 128 KiB {}
1703056843.335392 [0] 42048241: config: Domain/Compatibility/StandardsConformance/#text: lax {}
1703056843.335393 [0] 42048241: config: Domain/Compatibility/ExplicitlyPublishQosSetToDefault/#text: false {}
1703056843.335395 [0] 42048241: config: Domain/Compatibility/ManySocketsMode/#text: single {}
1703056843.335396 [0] 42048241: config: Domain/Compatibility/AssumeRtiHasPmdEndpoints/#text: false {}
1703056843.335398 [0] 42048241: config: Domain/Discovery/Tag/#text: {}
1703056843.335399 [0] 42048241: config: Domain/Discovery/ExternalDomainId/#text: 0 {}
1703056843.335400 [0] 42048241: config: Domain/Discovery/DSGracePeriod/#text: 30 s {}
1703056843.335402 [0] 42048241: config: Domain/Discovery/ParticipantIndex/#text: none {}
1703056843.335403 [0] 42048241: config: Domain/Discovery/MaxAutoParticipantIndex/#text: 9 {}
1703056843.335412 [0] 42048241: config: Domain/Discovery/SPDPMulticastAddress/#text: 239.255.0.1 {}
1703056843.335414 [0] 42048241: config: Domain/Discovery/SPDPInterval/#text: 30 s {}
1703056843.335415 [0] 42048241: config: Domain/Discovery/DefaultMulticastAddress/#text: auto {}
1703056843.335417 [0] 42048241: config: Domain/Discovery/Ports/Base/#text: 7400 {}
1703056843.335418 [0] 42048241: config: Domain/Discovery/Ports/DomainGain/#text: 250 {}
1703056843.335420 [0] 42048241: config: Domain/Discovery/Ports/ParticipantGain/#text: 2 {}
1703056843.335421 [0] 42048241: config: Domain/Discovery/Ports/MulticastMetaOffset/#text: 0 {}
1703056843.335423 [0] 42048241: config: Domain/Discovery/Ports/UnicastMetaOffset/#text: 10 {}
1703056843.335424 [0] 42048241: config: Domain/Discovery/Ports/MulticastDataOffset/#text: 1 {}
1703056843.335426 [0] 42048241: config: Domain/Discovery/Ports/UnicastDataOffset/#text: 11 {}
1703056843.335427 [0] 42048241: config: Domain/Discovery/EnableTopicDiscoveryEndpoints/#text: false {}
1703056843.335429 [0] 42048241: config: Domain/Discovery/LeaseDuration/#text: 10 s {}
1703056843.335432 [0] 42048241: config: Domain/Tracing/Category/#text: fatal,error,warning,info,config {1}
1703056843.335434 [0] 42048241: config: Domain/Tracing/OutputFile/#text: stdout {1}
1703056843.335435 [0] 42048241: config: Domain/Tracing/AppendToFile/#text: false {}
1703056843.335436 [0] 42048241: config: Domain/Tracing/PacketCaptureFile/#text: {}
1703056843.335438 [0] 42048241: config: Domain/Internal/DeliveryQueueMaxSamples/#text: 256 {}
1703056843.335439 [0] 42048241: config: Domain/Internal/PrimaryReorderMaxSamples/#text: 128 {}
1703056843.335441 [0] 42048241: config: Domain/Internal/SecondaryReorderMaxSamples/#text: 128 {}
1703056843.335442 [0] 42048241: config: Domain/Internal/DefragUnreliableMaxSamples/#text: 4 {}
1703056843.335443 [0] 42048241: config: Domain/Internal/DefragReliableMaxSamples/#text: 16 {}
1703056843.335460 [0] 42048241: config: Domain/Internal/BuiltinEndpointSet/#text: writers {}
1703056843.335461 [0] 42048241: config: Domain/Internal/MeasureHbToAckLatency/#text: false {}
1703056843.335463 [0] 42048241: config: Domain/Internal/UnicastResponseToSPDPMessages/#text: true {}
1703056843.335464 [0] 42048241: config: Domain/Internal/SynchronousDeliveryPriorityThreshold/#text: 0 {}
1703056843.335466 [0] 42048241: config: Domain/Internal/SynchronousDeliveryLatencyBound/#text: inf {}
1703056843.335467 [0] 42048241: config: Domain/Internal/MaxParticipants/#text: 0 {}
1703056843.335468 [0] 42048241: config: Domain/Internal/AccelerateRexmitBlockSize/#text: 0 {}
1703056843.335470 [0] 42048241: config: Domain/Internal/RetransmitMerging/#text: never {}
1703056843.335471 [0] 42048241: config: Domain/Internal/RetransmitMergingPeriod/#text: 5 ms {}
1703056843.335473 [0] 42048241: config: Domain/Internal/HeartbeatInterval/#text: 100 ms {}
1703056843.335474 [0] 42048241: config: Domain/Internal/HeartbeatInterval[@min]: 5 ms {}
1703056843.335487 [0] 42048241: config: Domain/Internal/HeartbeatInterval[@minsched]: 20 ms {}
1703056843.335489 [0] 42048241: config: Domain/Internal/HeartbeatInterval[@max]: 8 s {}
1703056843.335490 [0] 42048241: config: Domain/Internal/MaxQueuedRexmitBytes/#text: 512 KiB {}
1703056843.335492 [0] 42048241: config: Domain/Internal/MaxQueuedRexmitMessages/#text: 200 {}
1703056843.335493 [0] 42048241: config: Domain/Internal/WriterLingerDuration/#text: 1 s {}
1703056843.335494 [0] 42048241: config: Domain/Internal/SocketReceiveBufferSize[@min]: 10 MiB {1}
1703056843.335496 [0] 42048241: config: Domain/Internal/SocketReceiveBufferSize[@max]: default {}
1703056843.335497 [0] 42048241: config: Domain/Internal/SocketSendBufferSize[@min]: 64 KiB {}
1703056843.335499 [0] 42048241: config: Domain/Internal/SocketSendBufferSize[@max]: default {}
1703056843.335500 [0] 42048241: config: Domain/Internal/NackDelay/#text: 100 ms {}
1703056843.335515 [0] 42048241: config: Domain/Internal/AckDelay/#text: 10 ms {}
1703056843.335517 [0] 42048241: config: Domain/Internal/AutoReschedNackDelay/#text: 3 s {}
1703056843.335518 [0] 42048241: config: Domain/Internal/PreEmptiveAckDelay/#text: 10 ms {}
1703056843.335519 [0] 42048241: config: Domain/Internal/ScheduleTimeRounding/#text: 0 s {}
1703056843.335521 [0] 42048241: config: Domain/Internal/DDSI2DirectMaxThreads/#text: 1 {}
1703056843.335522 [0] 42048241: config: Domain/Internal/SquashParticipants/#text: false {}
1703056843.335523 [0] 42048241: config: Domain/Internal/SPDPResponseMaxDelay/#text: 0 s {}
1703056843.335525 [0] 42048241: config: Domain/Internal/LateAckMode/#text: false {}
1703056843.335526 [0] 42048241: config: Domain/Internal/RetryOnRejectBestEffort/#text: false {}
1703056843.335527 [0] 42048241: config: Domain/Internal/GenerateKeyhash/#text: false {}
1703056843.335529 [0] 42048241: config: Domain/Internal/MaxSampleSize/#text: 2147483647 B {}
1703056843.335548 [0] 42048241: config: Domain/Internal/WriteBatch/#text: false {}
1703056843.335549 [0] 42048241: config: Domain/Internal/LivelinessMonitoring/#text: false {}
1703056843.335550 [0] 42048241: config: Domain/Internal/LivelinessMonitoring[@stacktraces]: true {}
1703056843.335552 [0] 42048241: config: Domain/Internal/LivelinessMonitoring[@interval]: 1 s {}
1703056843.335553 [0] 42048241: config: Domain/Internal/MonitorPort/#text: -1 {}
1703056843.335554 [0] 42048241: config: Domain/Internal/PrioritizeRetransmit/#text: true {}
1703056843.335556 [0] 42048241: config: Domain/Internal/UseMulticastIfMreqn/#text: 0 {}
1703056843.335557 [0] 42048241: config: Domain/Internal/RediscoveryBlacklistDuration/#text: 0 s {}
1703056843.335559 [0] 42048241: config: Domain/Internal/RediscoveryBlacklistDuration[@Enforce]: false {}
1703056843.335560 [0] 42048241: config: Domain/Internal/MultipleReceiveThreads/#text: default {}
1703056843.335561 [0] 42048241: config: Domain/Internal/MultipleReceiveThreads[@maxretries]: 4294967295 {}
1703056843.335574 [0] 42048241: config: Domain/Internal/Test/XmitLossiness/#text: 0 {}
1703056843.335576 [0] 42048241: config: Domain/Internal/Watermarks/WhcLow/#text: 1 KiB {}
1703056843.335577 [0] 42048241: config: Domain/Internal/Watermarks/WhcHigh/#text: 500 KiB {}
1703056843.335579 [0] 42048241: config: Domain/Internal/Watermarks/WhcHighInit/#text: 30 KiB {}
1703056843.335580 [0] 42048241: config: Domain/Internal/Watermarks/WhcAdaptive/#text: true {}
1703056843.335582 [0] 42048241: config: Domain/Internal/BurstSize/MaxRexmit/#text: 1 MiB {}
1703056843.335583 [0] 42048241: config: Domain/Internal/BurstSize/MaxInitTransmit/#text: 4294967295 {}
1703056843.335585 [0] 42048241: config: Domain/Internal/EnableExpensiveChecks/#text: [ignored] {}
1703056843.335586 [0] 42048241: config: Domain/TCP/NoDelay/#text: true {}
1703056843.335587 [0] 42048241: config: Domain/TCP/Port/#text: -1 {}
1703056843.335589 [0] 42048241: config: Domain/TCP/ReadTimeout/#text: 2 s {}
1703056843.335604 [0] 42048241: config: Domain/TCP/WriteTimeout/#text: 2 s {}
1703056843.335605 [0] 42048241: config: Domain/TCP/AlwaysUsePeeraddrForUnicast/#text: false {}
1703056843.335606 [0] 42048241: config: Domain[@id]: 0 {1}
1703056843.335686 [0] 42048241: started at 1703056843.06335609 -- 2023-12-20 15:20:43+08:00
1703056843.335737 [0] 42048241: udp initialized
1703056843.335871 [0] 42048241: interfaces: lo0 udp/127.0.0.1(q1) en0 wireless udp/192.168.25.232(q9)
1703056843.335875 [0] 42048241: selected interfaces: en0 (index 12 priority 0)
1703056843.335876 [0] 42048241: presumed flaky multicast, use for SPDP only
1703056843.335960 [0] 42048241: ownip: udp/192.168.25.232
1703056843.335963 [0] 42048241: extmask: invalid/0 (not applicable)
1703056843.335964 [0] 42048241: SPDP MC: udp/239.255.0.1
1703056843.335965 [0] 42048241: default MC: udp/239.255.0.1
1703056843.335966 [0] 42048241: SSM support included
1703056843.336436 [0] 42048241: failed to increase socket receive buffer size to at least 10485760 bytes, current is 786896 bytes
1703056843.336436 [0] 42048241: failed to increase socket receive buffer size to at least 10485760 bytes, current is 786896 bytes
1703056843.336457 [0] 42048241: udp finalized
Traceback (most recent call last):
File "scripts/nerfcapture2dataset.py", line 182, in
domain = Domain(domain_id=0, config=dds_config)
File "/opt/miniconda3/lib/python3.8/site-packages/cyclonedds/domain.py", line 34, in init
super().init(self._create_domain(dds_c_t.domainid(domain_id), config.encode("ascii")))
File "/opt/miniconda3/lib/python3.8/site-packages/cyclonedds/core.py", line 181, in init
raise DDSException(
cyclonedds.core.DDSException: [DDS_RETCODE_ERROR] Non specific error. Occurred upon initialisation of a cyclonedds.domain.Domain

online demo issue

Hi! Thanks for your impressive work.

When I'm trying to use the online demo, I got the following error:

Waiting for frames...
1/10 frames received
1/10 frames received
Traceback (most recent call last):
  File "/home/run/Data/SplaTAM/scripts/iphone_demo.py", line 561, in <module>
    dataset_capture_loop(reader, Path(config['workdir']), config['overwrite'], 
  File "/home/run/Data/SplaTAM/scripts/iphone_demo.py", line 156, in dataset_capture_loop
    images_dir.mkdir()
  File "/home/run/miniconda3/envs/splatam/lib/python3.10/pathlib.py", line 1175, in mkdir
    self._accessor.mkdir(self, mode)
FileExistsError: [Errno 17] File exists: 'experiments/iPhone_Captures/splatam_demo/rgb'
Seed set to: 0 (type: <class 'int'>)
Traceback (most recent call last):
  File "/home/run/Data/SplaTAM/viz_scripts/final_recon.py", line 297, in <module>
    visualize(scene_path, viz_cfg)
  File "/home/run/Data/SplaTAM/viz_scripts/final_recon.py", line 170, in visualize
    w2c, k = load_camera(cfg, scene_path)
  File "/home/run/Data/SplaTAM/viz_scripts/final_recon.py", line 26, in load_camera
    all_params = dict(np.load(scene_path, allow_pickle=True))
  File "/home/run/miniconda3/envs/splatam/lib/python3.10/site-packages/numpy/lib/npyio.py", line 427, in load
    fid = stack.enter_context(open(os_fspath(file), "rb"))
FileNotFoundError: [Errno 2] No such file or directory: '././experiments/iPhone_Captures/splatam_demo/SplaTAM_iPhone/params.npz'

If I change the mkdir() to mkdir(exist_ok=True), the output becomes

Waiting for frames...
1/10 frames received
1/10 frames received
1/10 frames received
1/10 frames received

and keeps repeating. It seems my program fails to receive continuous sequential images.

Do you have any suggestions?

Debugging Failure on Custom 3DScanner App Data

Thank you so much to the authors for your wonderful work on SplaTAM, and to @ironjr for the script to extract the point cloud. I just ran it on our data and hit an issue.

I used data collected with the 3DScanner App on an iPhone 12 Pro, which includes the RGB images, depth images and transforms.json. The resulting point cloud looks like a spiral, and each frame is separated from the others.

[Screenshot of the spiral point cloud]

Error creating environment

Hi! I'm having some problems installing the chamferdist module:

 In file included from chamferdist/knn.cu:4:
      /home/david/miniconda3/envs/splatam/lib/python3.10/site-packages/torch/include/ATen/cuda/CUDAContext.h:6:10: fatal error: cusparse.h: No such file or directory
          6 | #include <cusparse.h>
            |          ^~~~~~~~~~~~
      compilation terminated.
      error: command '/home/david/miniconda3/envs/splatam/bin/nvcc' failed with exit code 1
      [end of output]
  
  note: This error originates from a subprocess, and is likely not a problem with pip.
  ERROR: Failed building wheel for chamferdist
  Running setup.py clean for chamferdist
Failed to build chamferdist
ERROR: Could not build wheels for chamferdist, which is required to install pyproject.toml-based projects

I've installed CUDA 11.6 via conda install -c "nvidia/label/cuda-11.6.0" cuda-toolkit, but the error persists.

Any suggestions? The error appears when trying to run the demo.

Environment Error on Windows

Sorry if I just made a stupid mistake; I'm not so familiar with anaconda. I tried to create the environment from your blueprint with conda env create -f environment.yml, and after a while this error shows up:

Installing pip dependencies: - Ran pip subprocess with arguments:
['C:\\Users\\Lukas\\anaconda3\\envs\\splatam\\python.exe', '-m', 'pip', 'install', '-U', '-r', 'C:\\Users\\Lukas\\Code\\external\\SplaTAM\\condaenv.h3hrhgos.requirements.txt', '--exists-action=b']
Pip subprocess output:
Collecting git+https://github.com/JonathonLuiten/diff-gaussian-rasterization-w-depth/tree/cb65e4b86bc3bd8ed42174b72a62e8d3a3a71110 (from -r C:\Users\Lukas\Code\external\SplaTAM\condaenv.h3hrhgos.requirements.txt (line 5))
  Cloning https://github.com/JonathonLuiten/diff-gaussian-rasterization-w-depth/tree/cb65e4b86bc3bd8ed42174b72a62e8d3a3a71110 to c:\users\lukas\appdata\local\temp\pip-req-build-t9ur40ad

Pip subprocess error:
  Running command git clone --filter=blob:none --quiet https://github.com/JonathonLuiten/diff-gaussian-rasterization-w-depth/tree/cb65e4b86bc3bd8ed42174b72a62e8d3a3a71110 'C:\Users\Lukas\AppData\Local\Temp\pip-req-build-t9ur40ad'
  fatal: repository 'https://github.com/JonathonLuiten/diff-gaussian-rasterization-w-depth/tree/cb65e4b86bc3bd8ed42174b72a62e8d3a3a71110/' not found
  error: subprocess-exited-with-error

  × git clone --filter=blob:none --quiet https://github.com/JonathonLuiten/diff-gaussian-rasterization-w-depth/tree/cb65e4b86bc3bd8ed42174b72a62e8d3a3a71110 'C:\Users\Lukas\AppData\Local\Temp\pip-req-build-t9ur40ad' did not run successfully.
  │ exit code: 128
  ╰─> See above for output.

Do you have any idea why the repo could not be found? I can see the repository if I copy the link into my browser.

If it matters, I'm testing this on Windows.
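From pip's VCS URL syntax, I suspect the /tree/<commit> form in environment.yml is the culprit: pip expects the commit pinned with an @ instead (my guess, not confirmed by the maintainers):

git+https://github.com/JonathonLuiten/diff-gaussian-rasterization-w-depth.git@cb65e4b86bc3bd8ed42174b72a62e8d3a3a71110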

3DGS Densification Config for SplaTAM

Hi, thanks for this brilliant work.
I came across a bug running the offline demo on the Replica dataset. I found that in the config file, Gaussian-splatting-based densification is set to false:

use_gaussian_splatting_densification=False, # Use Gaussian Splatting-based Densification during Mapping

When I set this config to True, a bug appeared during training:

File "/home/user/splatam/SplaTAM/utils/slam_external.py", line 101, in accumulate_mean2d_gradient    variables['means2D_gradient_accum'][variables['seen']] += torch.norm(IndexError: The shape of the mask [853079] at index 0 does not match the shape of the indexed tensor [853064] at index 0

This shape-mismatch error can also be reproduced in the other dataset demos.

ScanNet++ Undistorted Depth

Request through Email:

It is really nice to find your SplaTAM, and I'm now trying to run your code.
I see that the ScanNet++ dataloader expects the folders /undistorted_images and /undistorted_depths.
However, the ScanNet++ toolbox command

python -m dslr.undistort dslr/configs/undistort.yml

merely produces /undistorted_images.

Would it be convenient to tell me how to obtain /undistorted_depths?
