
clip-fields's Introduction

CLIP-Fields: Weakly Supervised Semantic Fields for Robotic Memory

Teaching robots in the real world to respond to natural language queries with zero human labels — using pretrained large language models (LLMs), visual language models (VLMs), and neural fields.

[Paper] [Website] [Code] [Data] [Video]

Authors: Mahi Shafiullah, Chris Paxton, Lerrel Pinto, Soumith Chintala, Arthur Szlam.

Demo video: warm_up_my_lunch.mp4

Tl;dr: CLIP-Field is a novel weakly supervised approach for learning a semantic robot memory that can respond to natural language queries solely from raw RGB-D and odometry data, with no extra human labelling. It combines the image and language understanding capabilities of vision-language models (VLMs) like CLIP, large language models like Sentence-BERT, and open-label object detection models like Detic with the spatial understanding of neural radiance field (NeRF) style architectures to build a spatial database that holds semantic information.
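For intuition, here is a minimal conceptual sketch (not the repository's actual model code, which uses a hash-grid encoder trained against CLIP, Detic, and Sentence-BERT supervision) of the query interface a CLIP-Field exposes: a neural field maps 3D points to embeddings that are scored against an encoded text query.

# Conceptual sketch only: 3D point -> embedding -> similarity to a text embedding.
import torch
import torch.nn as nn

class ToyCLIPField(nn.Module):
    def __init__(self, embed_dim: int = 512, hidden: int = 256):
        super().__init__()
        # Stand-in for the hash-grid encoder + MLP heads used in the real model.
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, embed_dim),
        )

    def forward(self, xyz: torch.Tensor) -> torch.Tensor:
        # xyz: (N, 3) world coordinates -> (N, embed_dim) semantic embeddings
        return self.net(xyz)

field = ToyCLIPField()
points = torch.rand(1000, 3)       # candidate locations in the mapped scene
text_embedding = torch.rand(512)   # placeholder for a real CLIP/SBERT text embedding
scores = torch.cosine_similarity(field(points), text_embedding.unsqueeze(0), dim=-1)
best_point = points[scores.argmax()]  # location that best matches the language query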

Installation

To properly install this repo and all the dependencies, follow these instructions.

# Clone this repo.
git clone --recursive https://github.com/notmahi/clip-fields
cd clip-fields

# Create conda environment and install the dependencies.
conda create -n cf python=3.8
conda activate cf
conda install -y pytorch torchvision torchaudio cudatoolkit=11.8 -c pytorch-lts -c nvidia
pip install -r requirements.txt

# Install the hashgrid encoder with the relevant cuda module.
cd gridencoder
# For this step, you may need to find your nvcc path and point CUDA_HOME at it.
# For example, `which nvcc` gives /public/apps/cuda/11.8/bin/nvcc for me, so I used:
# export CUDA_HOME=/public/apps/cuda/11.8
python setup.py install
cd ..
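As a quick sanity check (not an official step), you can verify that the compiled extension is importable; the module name _gridencoder is taken from the build log quoted in the issues below.

# Importing torch first loads the shared libraries the extension depends on.
python -c "import torch, _gridencoder; print('gridencoder extension OK')"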

Interactive Tutorial and Evaluation

We provide interactive tutorial and evaluation notebooks that you can use to explore the model and evaluate it on your own data. You can find them in the demo/ directory and run them after installing the dependencies.

Training a CLIP-Field directly

Once you have the dependencies installed, you can run the training script train.py with any .r3d files that you have! If you just want to try out a sample, download the sample data nyu.r3d and run the following command.

python train.py dataset_path=nyu.r3d

If you want to use LSeg as an additional source of open-label annotations, you should download the LSeg demo model and place it at path_to_LSeg/checkpoints/demo_e200.ckpt. Then, you can run the following command.

python train.py dataset_path=nyu.r3d use_lseg=true

You can check out config/train.yaml for a list of possible configuration options. In particular, if you want to train with a particular set of labels, you can specify them in the custom_labels field in config/train.yaml.
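The training commands above use Hydra-style key=value overrides, so a custom label set can likely also be passed on the command line instead of editing the YAML; the exact list syntax below is an untested sketch and may need adjusting for your Hydra version.

python train.py dataset_path=nyu.r3d "custom_labels=['coffee mug','office chair','whiteboard']"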

Acknowledgements

We would like to thank the following projects for making their code and models available, which we relied upon heavily in this work.

clip-fields's People

Contributors

cpaxton, notmahi


clip-fields's Issues

Installation issues #2 (CUDA, Detectron2, etc)

Here is an error that occurred after the recent GitHub update and was not present before.

  1. When installing with the code provided in the installation manual, a Detectron2 installation error occurs. (Screenshot: 2024-03-04 09-47-13)
  • conda install pytorch torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia
  • conda install -y pytorch torchvision torchaudio cudatoolkit=11.8 -c pytorch-lts -c nvidia

To avoid the Detectron2 error, we need to create the conda environment using the second command.

  2. When completing the installation and proceeding with training, a "ModuleNotFoundError" occurs (ModuleNotFoundError: No module named 'additional_utils'). (Screenshot: 2024-03-04 10-47-44)

I find the OK-Robot research fascinating, and I've been experimenting with various installation methods and troubleshooting.

When installing with the code updated two days ago, additional errors occurred.

However, when running experiments with the copy of the GitHub repository that I downloaded two days ago, there seem to be no issues at all.

How to switch from Home-Robot to Turtle 3?

Hi

Since CLIP-Field outputs 3D coordinates, if we only experiment with navigation using the coordinates of the detected object, can we connect it to Turtle 3 instead of Home-Robot?

If we were to conduct an experiment with Turtle 3, what information could we refer to for testing?

And for navigation, could you share information on how to connect with Turtle 3?

Could you share how to sequentially connect the Turtle 3 robots?

Thank you in advance for your valuable answer.

Installation issues #1 (CUDA, Detectron2, etc)

We are using an RTX 3060 ti GPU, and unfortunately, we are unable to install CUDA versions 11.1-11.3.

Therefore, we are attempting to experiment with CUDA 11.8 or higher versions to install the Detectron2 code.

If you have any experience with such a task, would you be willing to share your valuable experience with us?

  • OS: Ubuntu 20.04 LTS
  • GPU: RTX 3060 ti
  • CUDA: 11.8 (Unable to install CUDA versions 11.1-11.3 because of CUDA GPUs Compute Capability)

"We are interested in running the OK-Robot code to explore its capabilities. In doing so, we aim to recognize scenes in a new environment and perform robot tasks, which leads us to believe that the CLIP-Fields model is necessary.

However, we noticed that the OK-Robot code uploaded a few days ago does not require the installation of the CLIP-Fields model, according to this page (https://github.com/ok-robot/ok-robot?tab=readme-ov-file).

Could you kindly confirm if this is the case?

Thank you in advance for your kind cooperation.

Last step of installation throws errors

Hi there,

This is super exciting work and I just wanted to tinker around with it, but it seems the last step in the installation instruction is throwing an error.

If we can't fix the issue, I would also appreciate it if you could share the exact environment (os and linux kernel version) that you're using so I can see if the error still occurs there.

My environment info:

Operating System: Pop!_OS 22.04 LTS               
          Kernel: Linux 6.0.2-76060002-generic
    Architecture: x86-64

So everything up to the last section of installing gridencoder works perfectly.

The error is thrown in the last step when running python setup.py install, but before that, the instructions say to locate your nvcc path and set that as an environment variable, which I did. On my machine, when I run which nvcc, I get:

(base) kuwajerw@pop-os [08:45:58PM 13/11/2022]:
(main) ~/repos/clip-fields/gridencoder/
$ which nvcc
/usr/bin/nvcc
(base) kuwajerw@pop-os [08:51:12PM 13/11/2022]:
(main) ~/repos/clip-fields/gridencoder/
$

However, when I run locate nvcc | grep /nvcc$ I get a few more options:

(base) kuwajerw@pop-os [08:51:41PM 13/11/2022]:
(main) ~/repos/clip-fields/gridencoder/
$ locate nvcc | grep /nvcc$
/usr/bin/nvcc
/usr/lib/nvidia-cuda-toolkit/bin/nvcc
/usr/local/cuda-11.8/bin/nvcc
(base) kuwajerw@pop-os [08:51:47PM 13/11/2022]:
(main) ~/repos/clip-fields/gridencoder/
$ 

I tried setting my CUDA_HOME environment variable to all three paths, one at a time, and ran python setup.py install:

# each time I would run the `python setup.py install`, I would uncomment one of these lines in my bashrc and start a new terminal window
# export CUDA_HOME=/usr
# export CUDA_HOME=/usr/local/cuda-11.8/
# export CUDA_HOME=/usr/lib/nvidia-cuda-toolkit

But each time I always get the same error:

(cf) kuwajerw@pop-os [08:54:53PM 13/11/2022]:
(main) ~/repos/clip-fields/gridencoder/
$ python setup.py install
/home/kuwajerw/anaconda3/envs/cf/lib/python3.8/site-packages/torch/cuda/__init__.py:52: UserWarning: CUDA initialization: Unexpected error from cudaGetDeviceCount(). Did you run some cuda functions before calling NumCudaDevices() that might have already set an error? Error 803: system has unsupported display driver / cuda driver combination (Triggered internally at  /opt/conda/conda-bld/pytorch_1627336325426/work/c10/cuda/CUDAFunctions.cpp:109.)
  return torch._C._cuda_getDeviceCount() > 0
No CUDA runtime is found, using CUDA_HOME='/usr/lib/nvidia-cuda-toolkit'
running install
/home/kuwajerw/anaconda3/envs/cf/lib/python3.8/site-packages/setuptools/command/install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
  warnings.warn(
/home/kuwajerw/anaconda3/envs/cf/lib/python3.8/site-packages/setuptools/command/easy_install.py:144: EasyInstallDeprecationWarning: easy_install command is deprecated. Use build and pip and other standards-based tools.
  warnings.warn(
running bdist_egg
running egg_info
writing gridencoder.egg-info/PKG-INFO
writing dependency_links to gridencoder.egg-info/dependency_links.txt
writing top-level names to gridencoder.egg-info/top_level.txt
reading manifest file 'gridencoder.egg-info/SOURCES.txt'
writing manifest file 'gridencoder.egg-info/SOURCES.txt'
installing library code to build/bdist.linux-x86_64/egg
running install_lib
running build_ext
building '_gridencoder' extension
Traceback (most recent call last):
  File "setup.py", line 44, in <module>
    setup(
  File "/home/kuwajerw/anaconda3/envs/cf/lib/python3.8/site-packages/setuptools/__init__.py", line 87, in setup
    return distutils.core.setup(**attrs)
  File "/home/kuwajerw/anaconda3/envs/cf/lib/python3.8/site-packages/setuptools/_distutils/core.py", line 185, in setup
    return run_commands(dist)
  File "/home/kuwajerw/anaconda3/envs/cf/lib/python3.8/site-packages/setuptools/_distutils/core.py", line 201, in run_commands
    dist.run_commands()
  File "/home/kuwajerw/anaconda3/envs/cf/lib/python3.8/site-packages/setuptools/_distutils/dist.py", line 968, in run_commands
    self.run_command(cmd)
  File "/home/kuwajerw/anaconda3/envs/cf/lib/python3.8/site-packages/setuptools/dist.py", line 1217, in run_command
    super().run_command(command)
  File "/home/kuwajerw/anaconda3/envs/cf/lib/python3.8/site-packages/setuptools/_distutils/dist.py", line 987, in run_command
    cmd_obj.run()
  File "/home/kuwajerw/anaconda3/envs/cf/lib/python3.8/site-packages/setuptools/command/install.py", line 74, in run
    self.do_egg_install()
  File "/home/kuwajerw/anaconda3/envs/cf/lib/python3.8/site-packages/setuptools/command/install.py", line 123, in do_egg_install
    self.run_command('bdist_egg')
  File "/home/kuwajerw/anaconda3/envs/cf/lib/python3.8/site-packages/setuptools/_distutils/cmd.py", line 319, in run_command
    self.distribution.run_command(command)
  File "/home/kuwajerw/anaconda3/envs/cf/lib/python3.8/site-packages/setuptools/dist.py", line 1217, in run_command
    super().run_command(command)
  File "/home/kuwajerw/anaconda3/envs/cf/lib/python3.8/site-packages/setuptools/_distutils/dist.py", line 987, in run_command
    cmd_obj.run()
  File "/home/kuwajerw/anaconda3/envs/cf/lib/python3.8/site-packages/setuptools/command/bdist_egg.py", line 165, in run
    cmd = self.call_command('install_lib', warn_dir=0)
  File "/home/kuwajerw/anaconda3/envs/cf/lib/python3.8/site-packages/setuptools/command/bdist_egg.py", line 151, in call_command
    self.run_command(cmdname)
  File "/home/kuwajerw/anaconda3/envs/cf/lib/python3.8/site-packages/setuptools/_distutils/cmd.py", line 319, in run_command
    self.distribution.run_command(command)
  File "/home/kuwajerw/anaconda3/envs/cf/lib/python3.8/site-packages/setuptools/dist.py", line 1217, in run_command
    super().run_command(command)
  File "/home/kuwajerw/anaconda3/envs/cf/lib/python3.8/site-packages/setuptools/_distutils/dist.py", line 987, in run_command
    cmd_obj.run()
  File "/home/kuwajerw/anaconda3/envs/cf/lib/python3.8/site-packages/setuptools/command/install_lib.py", line 11, in run
    self.build()
  File "/home/kuwajerw/anaconda3/envs/cf/lib/python3.8/site-packages/setuptools/_distutils/command/install_lib.py", line 112, in build
    self.run_command('build_ext')
  File "/home/kuwajerw/anaconda3/envs/cf/lib/python3.8/site-packages/setuptools/_distutils/cmd.py", line 319, in run_command
    self.distribution.run_command(command)
  File "/home/kuwajerw/anaconda3/envs/cf/lib/python3.8/site-packages/setuptools/dist.py", line 1217, in run_command
    super().run_command(command)
  File "/home/kuwajerw/anaconda3/envs/cf/lib/python3.8/site-packages/setuptools/_distutils/dist.py", line 987, in run_command
    cmd_obj.run()
  File "/home/kuwajerw/anaconda3/envs/cf/lib/python3.8/site-packages/setuptools/command/build_ext.py", line 84, in run
    _build_ext.run(self)
  File "/home/kuwajerw/anaconda3/envs/cf/lib/python3.8/site-packages/setuptools/_distutils/command/build_ext.py", line 346, in run
    self.build_extensions()
  File "/home/kuwajerw/anaconda3/envs/cf/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 708, in build_extensions
    build_ext.build_extensions(self)
  File "/home/kuwajerw/anaconda3/envs/cf/lib/python3.8/site-packages/setuptools/_distutils/command/build_ext.py", line 466, in build_extensions
    self._build_extensions_serial()
  File "/home/kuwajerw/anaconda3/envs/cf/lib/python3.8/site-packages/setuptools/_distutils/command/build_ext.py", line 492, in _build_extensions_serial
    self.build_extension(ext)
  File "/home/kuwajerw/anaconda3/envs/cf/lib/python3.8/site-packages/setuptools/command/build_ext.py", line 246, in build_extension
    _build_ext.build_extension(self, ext)
  File "/home/kuwajerw/anaconda3/envs/cf/lib/python3.8/site-packages/setuptools/_distutils/command/build_ext.py", line 547, in build_extension
    objects = self.compiler.compile(
  File "/home/kuwajerw/anaconda3/envs/cf/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 524, in unix_wrap_ninja_compile
    cuda_post_cflags = unix_cuda_flags(cuda_post_cflags)
  File "/home/kuwajerw/anaconda3/envs/cf/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 423, in unix_cuda_flags
    cflags + _get_cuda_arch_flags(cflags))
  File "/home/kuwajerw/anaconda3/envs/cf/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1561, in _get_cuda_arch_flags
    arch_list[-1] += '+PTX'
IndexError: list index out of range
(cf) kuwajerw@pop-os [08:54:59PM 13/11/2022]:
(main) ~/repos/clip-fields/gridencoder/
$ 

So I'm not sure what I'm doing wrong exactly. Are you using Ubuntu 22.04? Anything you can suggest I try?

Additionally, if I ignore this error and just try to run the demo notebook clip-fields/demo/1 - parse rgbd.ipynb, I am stuck because I don't have an RGB-D video myself, or an iPhone 13. Would it be possible for you to supply a demo file, or a link to one? Thank you.
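For reference, the "No CUDA runtime is found" warning in the log above suggests that the installed PyTorch build cannot see a CUDA runtime at all; a quick way to check this is the snippet below.

# Prints the PyTorch version, the CUDA version it was built with, and whether a GPU is visible.
python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"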

About time cost on training

Hi, Mahi,

Great work!
Would it be convenient to tell me how long it takes to train the model under the paper's setting?

Very best,
Jarro

Runtime Error

Hi,
after I setting up the environment following the steps in the tutorial, there is a Runtime Error when I try to run the python train.py dataset_path=nyu.r3d, does anyone has the same problem? And I would like to kindly ask you what is your pytorch version, cause the conda install -y pytorch torchvision torchaudio cudatoolkit=11.1 -c pytorch-lts -c nvidia automatically installs the newest version of pytorch.

The following is the error report:
`RuntimeError: Error building extension 'enclib_gpu': [1/3] /usr/local/cuda/bin/nvcc -DTORCH_EXTENSION_NAME=enclib_gpu -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE="gcc" -DPYBIND11_STDLIB="libstdcpp" -DPYBIND11_BUILD_ABI="cxxabi1011" -isystem /home/qiwen/anaconda3/envs/cf/lib/python3.8/site-packages/torch/include -isystem /home/qiwen/anaconda3/envs/cf/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -isystem /home/qiwen/anaconda3/envs/cf/lib/python3.8/site-packages/torch/include/TH -isystem /home/qiwen/anaconda3/envs/cf/lib/python3.8/site-packages/torch/include/THC -isystem /usr/local/cuda/include -isystem /home/qiwen/anaconda3/envs/cf/include/python3.8 -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS -D__CUDA_NO_HALF_CONVERSIONS_ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_86,code=compute_86 -gencode=arch=compute_86,code=sm_86 --compiler-options '-fPIC' --expt-extended-lambda -std=c++17 -c /home/qiwen/anaconda3/envs/cf/lib/python3.8/site-packages/encoding/lib/gpu/lib_ssd.cu -o lib_ssd.cuda.o
FAILED: lib_ssd.cuda.o
/usr/local/cuda/bin/nvcc -DTORCH_EXTENSION_NAME=enclib_gpu -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE="gcc" -DPYBIND11_STDLIB="libstdcpp" -DPYBIND11_BUILD_ABI="cxxabi1011" -isystem /home/qiwen/anaconda3/envs/cf/lib/python3.8/site-packages/torch/include -isystem /home/qiwen/anaconda3/envs/cf/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -isystem /home/qiwen/anaconda3/envs/cf/lib/python3.8/site-packages/torch/include/TH -isystem /home/qiwen/anaconda3/envs/cf/lib/python3.8/site-packages/torch/include/THC -isystem /usr/local/cuda/include -isystem /home/qiwen/anaconda3/envs/cf/include/python3.8 -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS -D__CUDA_NO_HALF_CONVERSIONS_ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_86,code=compute_86 -gencode=arch=compute_86,code=sm_86 --compiler-options '-fPIC' --expt-extended-lambda -std=c++17 -c /home/qiwen/anaconda3/envs/cf/lib/python3.8/site-packages/encoding/lib/gpu/lib_ssd.cu -o lib_ssd.cuda.o
/home/qiwen/anaconda3/envs/cf/lib/python3.8/site-packages/encoding/lib/gpu/lib_ssd.cu:22:10: fatal error: THC/THCNumerics.cuh: No such file or directory
22 | #include <THC/THCNumerics.cuh>
| ^~~~~~~~~~~~~~~~~~~~~
compilation terminated.
[2/3] /usr/local/cuda/bin/nvcc -DTORCH_EXTENSION_NAME=enclib_gpu -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE="gcc" -DPYBIND11_STDLIB="libstdcpp" -DPYBIND11_BUILD_ABI="cxxabi1011" -isystem /home/qiwen/anaconda3/envs/cf/lib/python3.8/site-packages/torch/include -isystem /home/qiwen/anaconda3/envs/cf/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -isystem /home/qiwen/anaconda3/envs/cf/lib/python3.8/site-packages/torch/include/TH -isystem /home/qiwen/anaconda3/envs/cf/lib/python3.8/site-packages/torch/include/THC -isystem /usr/local/cuda/include -isystem /home/qiwen/anaconda3/envs/cf/include/python3.8 -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS -D__CUDA_NO_HALF_CONVERSIONS_ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_86,code=compute_86 -gencode=arch=compute_86,code=sm_86 --compiler-options '-fPIC' --expt-extended-lambda -std=c++17 -c /home/qiwen/anaconda3/envs/cf/lib/python3.8/site-packages/encoding/lib/gpu/rectify_cuda.cu -o rectify_cuda.cuda.o
FAILED: rectify_cuda.cuda.o
/usr/local/cuda/bin/nvcc -DTORCH_EXTENSION_NAME=enclib_gpu -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE="gcc" -DPYBIND11_STDLIB="libstdcpp" -DPYBIND11_BUILD_ABI="cxxabi1011" -isystem /home/qiwen/anaconda3/envs/cf/lib/python3.8/site-packages/torch/include -isystem /home/qiwen/anaconda3/envs/cf/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -isystem /home/qiwen/anaconda3/envs/cf/lib/python3.8/site-packages/torch/include/TH -isystem /home/qiwen/anaconda3/envs/cf/lib/python3.8/site-packages/torch/include/THC -isystem /usr/local/cuda/include -isystem /home/qiwen/anaconda3/envs/cf/include/python3.8 -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS -D__CUDA_NO_HALF_CONVERSIONS_ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_86,code=compute_86 -gencode=arch=compute_86,code=sm_86 --compiler-options '-fPIC' --expt-extended-lambda -std=c++17 -c /home/qiwen/anaconda3/envs/cf/lib/python3.8/site-packages/encoding/lib/gpu/rectify_cuda.cu -o rectify_cuda.cuda.o
/home/qiwen/anaconda3/envs/cf/lib/python3.8/site-packages/encoding/lib/gpu/rectify_cuda.cu(114): error: identifier "ScalarConvert" is undefined

/home/qiwen/anaconda3/envs/cf/lib/python3.8/site-packages/encoding/lib/gpu/rectify_cuda.cu(114): error: type name is not allowed

/home/qiwen/anaconda3/envs/cf/lib/python3.8/site-packages/encoding/lib/gpu/rectify_cuda.cu(114): error: type name is not allowed

/home/qiwen/anaconda3/envs/cf/lib/python3.8/site-packages/encoding/lib/gpu/rectify_cuda.cu(114): error: the global scope has no "to"

4 errors detected in the compilation of "/home/qiwen/anaconda3/envs/cf/lib/python3.8/site-packages/encoding/lib/gpu/rectify_cuda.cu".
ninja: build stopped: subcommand failed.`

Out Of Memory - RAM

Hi! I finished the installation, and wanted to try Training a CLIP-Field directly by doing:
python train.py dataset_path=nyu.r3d
I am monitoring both RAM and GPU RAM. I see that when the code starts, data starts to load into RAM:
Loading data: 100%|████████████████████████████████████████████████████████| 757/757 [00:05<00:00, 135.05it/s]
Upscaling depth and conf: 100%|████████████████████████████████████████████| 757/757 [00:04<00:00, 157.72it/s]
Calculating global XYZs: 100%|██████████████████████████████████████████████| 757/757 [00:14<00:00, 51.72it/s]
The previous step occupies about 30 GB of RAM.
Then models such as Detic load on the GPU, but when I arrive at line 177 of dataloaders/real_dataset.py, my PC kills the process because of RAM OOM:
# First, setup detic with the combined classes.
self._setup_detic_all_classes(view_data)
Why is data loaded into RAM and not onto the GPU? Is there any way to lower the memory usage by processing in batches?
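For reference, a generic way to bound peak RAM (a rough illustration, not code from this repository) is to unproject the depth frames to world coordinates in fixed-size chunks and write each chunk to disk before computing the next:

# Illustration only: pinhole unprojection done chunk-by-chunk so that only
# `chunk_size` frames' worth of XYZ points are held in memory at a time.
import numpy as np

def global_xyzs_in_chunks(depths, poses, intrinsics, chunk_size=50):
    # depths: (T, H, W) metric depth, poses: (T, 4, 4) camera-to-world,
    # intrinsics: (3, 3) pinhole matrix.
    fx, fy = intrinsics[0, 0], intrinsics[1, 1]
    cx, cy = intrinsics[0, 2], intrinsics[1, 2]
    H, W = depths.shape[1:]
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    for start in range(0, len(depths), chunk_size):
        chunk_xyz = []
        for d, pose in zip(depths[start:start + chunk_size], poses[start:start + chunk_size]):
            x = (u - cx) * d / fx
            y = (v - cy) * d / fy
            cam = np.stack([x, y, d, np.ones_like(d)], axis=-1).reshape(-1, 4)
            chunk_xyz.append((cam @ pose.T)[:, :3])
        # e.g. np.save() each chunk here instead of accumulating everything in RAM
        yield np.concatenate(chunk_xyz, axis=0)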

Demonstration issues #2 (Memory)

Hi!

I experimented with a new environment using the Record3D App (Data size: 380MB).

However, the process of loading and computing data in the Jupyter Notebook code (1-parse rgbd.ipynb), particularly in the first cell, consumes excessive memory.

Although my computer has 256 GB of memory, usage nearly reaches 200 GB.

As a result, the Jupyter Notebook often crashes during the last code cell, which is responsible for saving the data as a .pth file.

Could you please share any possible solutions for this issue?

Additionally, if I need more memory for experimenting with larger environments in the future, how can I address this?

For now, I plan to run the training command python train.py dataset_path=nyu.r3d, which works, instead of the Jupyter notebook.

Thank you in advance for your kind response^^.

Demonstration issues #1 (Dataset)

After following the installation process on your GitHub and completing the training up to the Jupyter notebook 4-test model.ipynb demo, I obtained new data from the Record3D app and converted it to .r3d format, but I encountered an error when running the code in the Jupyter notebook 1-parse rgbd.ipynb.

I tried various methods to acquire the data and solve the issue, but it seems to be an issue with the input data itself.

Is there any additional setting when acquiring data from the app, or do I have to use an iPhone 13 Pro?

I obtained the data using an iPhone 14+, so I'm wondering if there might be a compatibility issue.

When training with your nyu.r3d dataset, the computation progresses well.

Thank you in advance.
