vortx's Issues

Cannot format ScanNet for VoRTX

After downloading the whole dataset from ScanNet and trying to run the following command to format the data:

python tools/preprocess_scannet.py --src path/to/scannet_src --dst path/to/new/scannet_dst

I'm getting some errors because I don't have the following folders inside each scan:

color/
depth/
intrinsic/
pose/

Here is a sample of what my scan folders look like:

[screenshot of scan folder contents]

Do you know why those folders are missing? I spent weeks downloading the whole dataset from ScanNet and I'm stuck at this point.

Can you help me?
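For reference, the raw ScanNet release stores each scan as a single .sens file, and the color/, depth/, intrinsic/, and pose/ folders are normally produced by exporting that file with ScanNet's SensReader script before running preprocess_scannet.py. A hedged example invocation (the script path and flag names are assumptions based on the upstream ScanNet repository and may differ between versions):

python ScanNet/SensReader/python/reader.py \
    --filename scans/scene0000_00/scene0000_00.sens \
    --output_path scans/scene0000_00 \
    --export_depth_images --export_color_images --export_poses --export_intrinsics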

Load model using pretrained checkpoints

Hi,

I downloaded the pretrained weights vortx_pretrained.ckpt, copied sample_config.yml, and renamed it config.yml.

When I called the load_model function and passed these two as parameters, I got the following error:
def __init__(self, config):
    super().__init__()

        config["attn_heads"], config["attn_layers"], config["use_proj_occ"]
    )
    self.config = config

TypeError: string indices must be integers

Can anyone help? Thanks!

Tao
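A common cause of "string indices must be integers" here is passing the path to config.yml (a string) where the constructor expects the parsed config dictionary. Below is a minimal sketch of the likely intended usage; the load_model(checkpoint_path, config) call is only illustrative, since the actual signature is not shown in the report:

    import yaml

    # Parse the YAML into a dict first; indexing the path string with
    # config["attn_heads"] is what raises "string indices must be integers".
    with open("config.yml", "r") as f:
        config = yaml.safe_load(f)

    model = load_model("vortx_pretrained.ckpt", config)  # hypothetical call for illustration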

Training does not detect GPUs

Hi, I've been stuck on this for hours, and every attempt is painful because each conda build and installation takes so long.

My main issue is that I followed all the steps to install the dependencies needed for this project, but when I try to start the training process with

python scripts/train.py --config config.yml

I just get this:

/home/darkayserleo/anaconda3/envs/vortx2/lib/python3.9/site-packages/torchvision/models/_utils.py:208: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and may be removed in the future, please use 'weights' instead.
  warnings.warn(
/home/darkayserleo/anaconda3/envs/vortx2/lib/python3.9/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing `weights=MNASNet1_0_Weights.IMAGENET1K_V1`. You can also use `weights=MNASNet1_0_Weights.DEFAULT` to get the most up-to-date weights.
  warnings.warn(msg)
Traceback (most recent call last):
  File "/media/darkayserleo/Data/vortx/scripts/train.py", line 60, in <module>
    trainer = pl.Trainer(
  File "/home/darkayserleo/anaconda3/envs/vortx2/lib/python3.9/site-packages/pytorch_lightning/trainer/connectors/env_vars_connector.py", line 38, in insert_env_defaults
    return fn(self, **kwargs)
  File "/home/darkayserleo/anaconda3/envs/vortx2/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 426, in __init__
    gpu_ids, tpu_cores = self._parse_devices(gpus, auto_select_gpus, tpu_cores)
  File "/home/darkayserleo/anaconda3/envs/vortx2/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 1525, in _parse_devices
    gpu_ids = device_parser.parse_gpu_ids(gpus)
  File "/home/darkayserleo/anaconda3/envs/vortx2/lib/python3.9/site-packages/pytorch_lightning/utilities/device_parser.py", line 89, in parse_gpu_ids
    return _sanitize_gpu_ids(gpus)
  File "/home/darkayserleo/anaconda3/envs/vortx2/lib/python3.9/site-packages/pytorch_lightning/utilities/device_parser.py", line 151, in _sanitize_gpu_ids
    raise MisconfigurationException(
pytorch_lightning.utilities.exceptions.MisconfigurationException: You requested GPUs: [0]
 But your machine only has: []
wandb: Waiting for W&B process to finish... (failed 1). Press Control-C to abort syncing.
wandb: 🚀 View run glorious-valley-7 at: https://wandb.ai/leonelos/vortx/runs/yd5za7au
wandb: Synced 6 W&B file(s), 0 media file(s), 0 artifact file(s) and 1 other file(s)
wandb: Find logs at: ./wandb/run-20230705_192250-yd5za7au/logs

I tried using PyTorch 1.4, but that throws a lot of incompatibility errors...

The instructions in the README end up installing a newer version of PyTorch (2.0); perhaps that's why pytorch_lightning is not working.

Can you help me, please? I had to stop working on this a few months ago because I didn't have the hardware to run the code. Now that I have a better hardware setup, I managed to run the ScanNet formatting for VoRTX and the TSDF build, but I get stuck as soon as I try to train.

Also, while trying to fix this issue I deleted my old vortx conda env (which I built months ago, with everything working), and now the earlier steps, formatting ScanNet for VoRTX and building the TSDF, have stopped working too; it's a complete mess.

Now my vortx env is broken and I can't do anything. I really need a hand with this.

My assumption is that some of the dependencies are installing their newest versions, while the ones you specify explicitly:

pytorch-lightning==1.5
scikit-image==0.18
pip install git+https://github.com/mit-han-lab/[email protected]

are staying at those versions, so perhaps there is some kind of incompatibility. Please help!

EDITED: I tried installing everything again in a new conda env called vortx2, then added the following lines to my .bashrc:

export CUDA_HOME=/usr/local/cuda
export LD_LIBRARY_PATH=$CUDA_HOME/lib64:$LD_LIBRARY_PATH
export PATH=$CUDA_HOME/bin:$PATH

and generate_gt is working as expected.

However,

python scripts/train.py --config config.yml

still does not work.
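A quick way to narrow this down (a minimal diagnostic, not from the original report) is to check whether PyTorch itself can see the GPU before involving pytorch_lightning; if this prints False or 0, the problem is the torch/CUDA installation (for example a CPU-only wheel or a driver mismatch) rather than the Lightning Trainer arguments:

    import torch

    # If CUDA is unavailable here, Lightning will also report an empty GPU list.
    print(torch.__version__, torch.version.cuda)
    print(torch.cuda.is_available())
    print(torch.cuda.device_count())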

Parameters for inference on ICL-NUIM and TUM RGB-D data

Hello!
Thank you for your work and code!
Could you please explain the details of inference on the ICL-NUIM and TUM RGB-D datasets described in your paper?
I tried running inference on the ICL-NUIM scenes using the same checkpoint and the same parameters as in the ScanNet test config, but the reconstructions turn out to be much worse and I do not get the metrics you report.

Thank you in advance

Failing to format ScanNet for VoRTX

Hi, I want to preprocess the entire ScanNet dataset as described in the README.

python tools/preprocess_scannet.py --src path/to/scannet_src --dst path/to/new/scannet_dst

But it fails.

According to https://github.com/noahstier/vortx/blob/main/tools/preprocess_scannet.py, the script expects the following folder structure:

scannet_src/
    scannetv2_train.txt
    scannetv2_val.txt
    scannetv2_test.txt
    scans/
        scene_****_**/
            scene_****_**_vh_clean_2.ply
            color/
            depth/
            intrinsic/
            pose/
    scans_test/
        scene_****_**/
            scene_****_**_vh_clean_2.ply
            color/
            depth/
            intrinsic/
            pose/

However, I don't have the following files:

  • scannetv2_train.txt
  • scannetv2_val.txt
  • scannetv2_test.txt

This is the entire dataset I got:

[screenshot of downloaded dataset contents]

As you can see, I have scans and scans_test, plus another folder called tasks, but I don't have those txt files, which is why it's throwing errors.

Can you please help me? I've spent more than a month trying to run your project, most of it just downloading the entire dataset, and I don't want to feel like I wasted all that time.

Thanks in advance!
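For anyone hitting the same error, a small check like the one below confirms whether a local copy matches the layout quoted above (the source path is a placeholder and the script is only a sketch, not part of the repository):

    from pathlib import Path

    # Compare a local ScanNet copy against the layout preprocess_scannet.py expects.
    src = Path("path/to/scannet_src")

    for split in ("scannetv2_train.txt", "scannetv2_val.txt", "scannetv2_test.txt"):
        print(split, "found" if (src / split).exists() else "MISSING")

    for scans_dir in ("scans", "scans_test"):
        for scene in sorted((src / scans_dir).glob("scene*")):
            missing = [d for d in ("color", "depth", "intrinsic", "pose") if not (scene / d).is_dir()]
            if missing:
                print(scene.name, "is missing:", ", ".join(missing))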

To generate the ground-truth TSDF:

:~/vortx$ python tools/generate_gt.py --data_path scannet_src --save_name scannet_tsdf
0it [00:00, ?it/s] (process_with_single_worker pid=23809)
  0%|          | 0/1 [00:00<?, ?it/s]
(process_with_single_worker pid=23811) read from disk
[... interleaved "0it [00:00, ?it/s]" progress bars from the other ray workers omitted ...]
(process_with_single_worker pid=23811) scene0191_00: read frame 0/5578
(process_with_single_worker pid=23811) scene0191_00: read frame 100/5578
[... progress lines for frames 200 through 5500 omitted ...]
(process_with_single_worker pid=23811) Initializing voxel volume...
Traceback (most recent call last):
  File "/home/zxb/vortx/tools/generate_gt.py", line 489, in <module>
    results = ray.get(ray_worker_ids)
  File "/home/zxb/anaconda3/envs/vortx/lib/python3.9/site-packages/ray/_private/client_mode_hook.py", line 105, in wrapper
    return func(*args, **kwargs)
  File "/home/zxb/anaconda3/envs/vortx/lib/python3.9/site-packages/ray/worker.py", line 1809, in get
    raise value.as_instanceof_cause()
ray.exceptions.RayTaskError(CompileError): ray::process_with_single_worker() (pid=23811, ip=172.31.151.121)
  File "/home/zxb/vortx/tools/generate_gt.py", line 434, in process_with_single_worker
    save_tsdf_full(args, scene, cam_intr, depth_all, cam_pose_all, color_all, save_mesh=False)
  File "/home/zxb/vortx/tools/generate_gt.py", line 285, in save_tsdf_full
    tsdf_vol_list.append(TSDFVolume(vol_bnds, voxel_size=args.voxel_size * 2 ** l, margin=args.margin))
  File "/home/zxb/vortx/vortx/tsdf_fusion.py", line 94, in __init__
    self._cuda_src_mod = SourceModule(
  File "/home/zxb/anaconda3/envs/vortx/lib/python3.9/site-packages/pycuda/compiler.py", line 349, in __init__
    cubin = compile(
  File "/home/zxb/anaconda3/envs/vortx/lib/python3.9/site-packages/pycuda/compiler.py", line 298, in compile
    return compile_plain(source, options, keep, nvcc, cache_dir, target)
  File "/home/zxb/anaconda3/envs/vortx/lib/python3.9/site-packages/pycuda/compiler.py", line 151, in compile_plain
    raise CompileError(
pycuda.driver.CompileError: nvcc compilation of /tmp/tmpuah84jjs/kernel.cu failed
[command: nvcc --cubin -arch sm_86 -I/home/zxb/anaconda3/envs/vortx/lib/python3.9/site-packages/pycuda/cuda kernel.cu]
[stderr:
nvcc fatal : Value 'sm_86' is not defined for option 'gpu-architecture'
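The failure here means the installed nvcc is too old to recognize the sm_86 (Ampere) architecture that pycuda auto-detects for the GPU; sm_86 support was added in CUDA 11.1. A small diagnostic sketch (not from the original report) to compare the GPU's compute capability against the nvcc on the PATH:

    import subprocess

    import pycuda.autoinit  # noqa: F401 - initializes the CUDA context
    import pycuda.driver as drv

    # pycuda passes "-arch sm_<major><minor>" to nvcc based on the device below.
    major, minor = drv.Device(0).compute_capability()
    print(f"GPU compute capability: sm_{major}{minor}")

    # sm_86 requires a CUDA toolkit >= 11.1; if this nvcc is older, point
    # CUDA_HOME and PATH at a newer toolkit before rerunning generate_gt.py.
    print(subprocess.run(["nvcc", "--version"], capture_output=True, text=True).stdout)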

scannet data

Can you tell me which parts of the data I need to download, and the specific steps for processing them afterwards?

TUM-RGBD dataset

Hi, the paper says that 13 scenes from the TUM-RGBD dataset are used for evaluation, but I could not find more information about the TUM-RGBD data used in this work.
May I ask which 13 scenes you use in your work?

Clarification Needed on im_z_norm in mv_fusion

I noticed that the im_z_norm value is computed as (bp_depth - 1.85) / 0.85 in the forward method.

https://github.com/noahstier/vortx/blob/c8cb0707013fc5f7eb4b941c3f354ba18c636d6a/vortx/mv_fusion.py#L36C1-L37C1

I'm a bit confused about why 1.85 and 0.85 are used for normalization. I couldn't find where it is mentioned in the paper. Can you please elaborate on these constants and the rationale behind using them?
  • What do 1.85 and 0.85 represent?
  • Are these constants specific to a certain dataset or experiment setup?
  • Is there a recommended way to adapt these for different use-cases?
Thank you for your time and assistance.
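To make the scaling concrete, here is a tiny worked example of the quoted expression (values chosen only for illustration; whether 1.85 and 0.85 are a dataset mean and standard deviation is exactly the open question above):

    # A back-projected depth of 1.85 maps to 0, and each 0.85 of depth shifts
    # the normalized value by exactly 1.
    for bp_depth in (1.0, 1.85, 2.7):
        im_z_norm = (bp_depth - 1.85) / 0.85
        print(bp_depth, round(im_z_norm, 2))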
