
4dmos's Introduction

Receding Moving Object Segmentation in 3D LiDAR Data Using Sparse 4D Convolutions


Our moving object segmentation on the unseen SemanticKITTI test sequences 18 and 21. Red points are predicted as moving.

Table of Contents

  1. Publication
  2. Overview
  3. Data
  4. Installation
  5. Running the Code
  6. Evaluation and Visualization
  7. Benchmark
  8. Pretrained Models
  9. License

Publication

If you use our code in your academic work, please cite the corresponding paper:

@article{mersch2022ral,
author = {B. Mersch and X. Chen and I. Vizzo and L. Nunes and J. Behley and C. Stachniss},
title = {{Receding Moving Object Segmentation in 3D LiDAR Data Using Sparse 4D Convolutions}},
journal={IEEE Robotics and Automation Letters (RA-L)},
year = 2022,
volume = {7},
number = {3},
pages = {7503--7510},
}

Please find the corresponding video here.

Overview

Given a sequence of point clouds, our method segments moving (red) from non-moving (black) points.

We first create a sparse 4D point cloud of all points in a given receding window. We use sparse 4D convolutions from the MinkowskiEngine to extract spatio-temporal features and predict per-point moving object scores.
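
As a rough illustration of this step, the sketch below stacks a window of already-registered scans into a single sparse 4D (x, y, z, t) tensor with MinkowskiEngine; the function name, voxel size, and the constant occupancy feature are assumptions for this example, not the repository's exact code.

    # Sketch (assumptions noted above): build one sparse 4D tensor from N scans.
    # `scans` is a list of (num_points, 3) numpy arrays in a common frame.
    import numpy as np
    import torch
    import MinkowskiEngine as ME

    def build_sparse_4d_tensor(scans, voxel_size=0.1):
        coords = []
        for t, points in enumerate(scans):
            # Quantize x, y, z to voxel indices and append the scan index as the time coordinate.
            xyz = np.floor(points[:, :3] / voxel_size).astype(np.int32)
            time_col = np.full((xyz.shape[0], 1), t, dtype=np.int32)
            coords.append(np.hstack([xyz, time_col]))
        coords = torch.from_numpy(np.concatenate(coords, axis=0))

        # batched_coordinates prepends the batch index; use a constant occupancy feature per point.
        batched = ME.utils.batched_coordinates([coords])
        features = torch.ones(batched.shape[0], 1)
        return ME.SparseTensor(features=features, coordinates=batched)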

Data

Download the SemanticKITTI data from the official website.

./
└── sequences
    ├── 00/
    │   ├── velodyne/
    │   │   ├── 000000.bin
    │   │   ├── 000001.bin
    │   │   └── ...
    │   └── labels/
    │       ├── 000000.label
    │       ├── 000001.label
    │       └── ...
    ├── 01/ # 00-10 for training
    ├── 08/ # for validation
    ├── 11/ # 11-21 for testing
    └── ...
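
For reference, a minimal way to load a single scan and its per-point labels from this layout (the 0xFFFF masking follows the SemanticKITTI convention of storing the semantic class in the lower 16 bits of each label):

    # Sketch: read one SemanticKITTI scan and its labels.
    import numpy as np

    scan = np.fromfile("sequences/00/velodyne/000000.bin", dtype=np.float32).reshape(-1, 4)  # x, y, z, intensity
    labels = np.fromfile("sequences/00/labels/000000.label", dtype=np.uint32)
    semantic = labels & 0xFFFF   # lower 16 bits: semantic class id
    instance = labels >> 16      # upper 16 bits: instance id
    assert scan.shape[0] == semantic.shape[0]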

Installation

Clone this repository in your workspace with

git clone https://github.com/PRBonn/4DMOS

With Docker

We provide a Dockerfile and a docker-compose.yaml to run all docker commands with a simple Makefile.

To use it, you need to

  1. Install Docker

  2. In Ubuntu, install docker-compose with

    sudo apt-get install docker-compose

    Note that this will install docker-compose v1.25 which is recommended since GPU access during build time using docker-compose v2 is currently an open issue.

  3. Install the NVIDIA Container Toolkit

  4. IMPORTANT To have GPU access during the build stage, make nvidia the default runtime in /etc/docker/daemon.json:

    {
        "runtimes": {
            "nvidia": {
                "path": "/usr/bin/nvidia-container-runtime",
                "runtimeArgs": []
            } 
        },
        "default-runtime": "nvidia" 
    }

    Save the file and run sudo systemctl restart docker to restart docker.

  5. Build the image with all dependencies with

    make build

Before running the container, you need to set the path to your dataset:

export DATA=path/to/dataset/sequences

To test that your container is running properly, run

make test

Finally, run the container with

make run

You can now work inside the container and run the training and inference scripts.

Without Docker

Without Docker, you need to install the dependencies specified in setup.py. This can be done in editable mode by running

python3 -m pip install --editable .

Now install MinkowskiEngine according to its installation wiki page. When installing MinkowskiEngine, your CUDA version has to match the CUDA version that was used to compile PyTorch.
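
A quick sanity check before installing (a sketch; assumes nvcc is on your PATH):

    # Compare the CUDA version PyTorch was built with against the local CUDA toolkit.
    import subprocess
    import torch

    print("PyTorch built with CUDA:", torch.version.cuda)
    print(subprocess.run(["nvcc", "--version"], capture_output=True, text=True).stdout)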

Running the Code

If not done yet, specify the path to the SemanticKITTI data:

export DATA=path/to/dataset/sequences

If you use Docker, you now need to run the container with make run.

Training

To train a model with the parameters specified in config/config.yaml, run

python scripts/train.py

Find more options like loading weights from a pre-trained model or checkpointing by passing the --help flag to the command above.

Inference

Inference is done in two steps: first, predicting per-point moving object confidence scores, and second, fusing multiple confidence values into a final prediction (using either the non-overlapping strategy or a binary Bayes filter).

To infer the per-point confidence scores for a model checkpoint at path/to/model.ckpt, run

python scripts/predict_confidences.py -w path/to/model.ckpt

We provide several additional options; see the --help flag. The confidence scores are stored in predictions/ID/POSES/confidences to distinguish setups using different model IDs and pose files.

Next, the final moving object predictions can be obtained by

python scripts/confidences_to_labels.py -p predictions/ID/POSES

You can use the --strategy argument to choose between the non-overlapping and Bayesian filter strategies from the paper. Run with --help to see more options. The final predictions are stored in predictions/ID/POSES/labels/.
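
For intuition, here is a minimal log-odds sketch of the binary Bayes filter variant; the prior value and the array layout are illustrative, not the repository's exact implementation:

    # Sketch: fuse per-point moving confidences with a binary Bayes filter in log-odds space.
    # `confidences` has shape (num_observations, num_points) with values in [0, 1].
    import numpy as np

    def bayes_fuse(confidences, prior=0.25, eps=1e-6):
        confidences = np.clip(confidences, eps, 1.0 - eps)
        prior_logodds = np.log(prior / (1.0 - prior))
        # Each observation contributes its log-odds minus the prior; the prior is added back once.
        log_odds = prior_logodds + np.sum(np.log(confidences / (1.0 - confidences)) - prior_logodds, axis=0)
        return 1.0 / (1.0 + np.exp(-log_odds))  # posterior moving probability per point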

Evaluation and Visualization

We use the SemanticKITTI API to evaluate the intersection-over-union (IoU) of the moving class as well as to visualize the predictions. Clone the repository in your workspace, install the dependencies, and then run the following command to visualize your predictions, e.g. for sequence 8:

cd semantic-kitti-api
./visualize_mos.py --sequence 8 --dataset /path/to/dataset --predictions /path/to/4DMOS/predictions/ID/POSES/labels/STRATEGY/
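
For reference, the moving-class IoU reported by the evaluation boils down to the following per-point computation (a sketch; the label id 251 for moving points follows the SemanticKITTI MOS convention and is an assumption here):

    # Sketch: intersection-over-union of the moving class from per-point label arrays.
    import numpy as np

    def moving_iou(pred_labels, gt_labels, moving_id=251):
        pred_moving = pred_labels == moving_id
        gt_moving = gt_labels == moving_id
        intersection = np.logical_and(pred_moving, gt_moving).sum()
        union = np.logical_or(pred_moving, gt_moving).sum()
        return intersection / union if union > 0 else 1.0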

Benchmark

To submit the results to the LiDAR-MOS benchmark, please follow the instructions here.

Pretrained Models

License

This project is free software made available under the MIT License. For details see the LICENSE file.

4dmos's People

Contributors

benemer, chen-xieyuanli


4dmos's Issues

Trying 4DMOS on already aligned chunks

Hi Benemer,

I have several LAS files which are already registered in a global UTM zone 35N coordinate system. To use 4DMOS, I create chunks of my data with 250k points in each chunk, with no overlap, and convert them to .bin format. Since the data is aligned, there is no pose information, so I used the pretrained model D=5_scans_no_poses.ckpt to run the inference. Unfortunately, the results are not good at all, see the attached images. I even offset the LAS files before chunking so that the origin is at [0, 0, 0] to make sure the coordinates are not very large, but it didn't help. Do you have any suggestions? How can I create poses for already aligned data so I can use the other pretrained models? Or is that not a problem at all?

Image of three consecutive chunks.

Results of the predictions with no poses. Red points are predicted MOS.

Expected results.

RuntimeError: merge_sort: failed to synchronize: cudaErrorIllegalAddress: an illegal memory access was encountered

Thank you for your great work and for sharing the code.
I am trying to train on my own dataset. The format of the dataset is the same as the KITTI data. During training, I encountered the following problem.

Traceback (most recent call last):
  File "/home/orcak80/miniconda3/envs/pcs/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 723, in _call_and_handle_interrupt
    return trainer_fn(*args, **kwargs)
  File "/home/orcak80/miniconda3/envs/pcs/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 811, in _fit_impl
    results = self._run(model, ckpt_path=self.ckpt_path)
  File "/home/orcak80/miniconda3/envs/pcs/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1236, in _run
    results = self._run_stage()
  File "/home/orcak80/miniconda3/envs/pcs/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1323, in _run_stage
    return self._run_train()
  File "/home/orcak80/miniconda3/envs/pcs/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1353, in _run_train
    self.fit_loop.run()
  File "/home/orcak80/miniconda3/envs/pcs/lib/python3.8/site-packages/pytorch_lightning/loops/base.py", line 204, in run
    self.advance(*args, **kwargs)
  File "/home/orcak80/miniconda3/envs/pcs/lib/python3.8/site-packages/pytorch_lightning/loops/fit_loop.py", line 269, in advance
    self._outputs = self.epoch_loop.run(self._data_fetcher)
  File "/home/orcak80/miniconda3/envs/pcs/lib/python3.8/site-packages/pytorch_lightning/loops/base.py", line 204, in run
    self.advance(*args, **kwargs)
  File "/home/orcak80/miniconda3/envs/pcs/lib/python3.8/site-packages/pytorch_lightning/loops/epoch/training_epoch_loop.py", line 208, in advance
    batch_output = self.batch_loop.run(batch, batch_idx)
  File "/home/orcak80/miniconda3/envs/pcs/lib/python3.8/site-packages/pytorch_lightning/loops/base.py", line 204, in run
    self.advance(*args, **kwargs)
  File "/home/orcak80/miniconda3/envs/pcs/lib/python3.8/site-packages/pytorch_lightning/loops/batch/training_batch_loop.py", line 88, in advance
    outputs = self.optimizer_loop.run(split_batch, optimizers, batch_idx)
  File "/home/orcak80/miniconda3/envs/pcs/lib/python3.8/site-packages/pytorch_lightning/loops/base.py", line 204, in run
    self.advance(*args, **kwargs)
  File "/home/orcak80/miniconda3/envs/pcs/lib/python3.8/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 203, in advance
    result = self._run_optimization(
  File "/home/orcak80/miniconda3/envs/pcs/lib/python3.8/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 256, in _run_optimization
    self._optimizer_step(optimizer, opt_idx, batch_idx, closure)
  File "/home/orcak80/miniconda3/envs/pcs/lib/python3.8/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 369, in _optimizer_step
    self.trainer._call_lightning_module_hook(
  File "/home/orcak80/miniconda3/envs/pcs/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1595, in _call_lightning_module_hook
    output = fn(*args, **kwargs)
  File "/home/orcak80/miniconda3/envs/pcs/lib/python3.8/site-packages/pytorch_lightning/core/lightning.py", line 1646, in optimizer_step
    optimizer.step(closure=optimizer_closure)
  File "/home/orcak80/miniconda3/envs/pcs/lib/python3.8/site-packages/pytorch_lightning/core/optimizer.py", line 168, in step
    step_output = self._strategy.optimizer_step(self._optimizer, self._optimizer_idx, closure, **kwargs)
  File "/home/orcak80/miniconda3/envs/pcs/lib/python3.8/site-packages/pytorch_lightning/strategies/strategy.py", line 193, in optimizer_step
    return self.precision_plugin.optimizer_step(model, optimizer, opt_idx, closure, **kwargs)
  File "/home/orcak80/miniconda3/envs/pcs/lib/python3.8/site-packages/pytorch_lightning/plugins/precision/precision_plugin.py", line 155, in optimizer_step
    return optimizer.step(closure=closure, **kwargs)
  File "/home/orcak80/miniconda3/envs/pcs/lib/python3.8/site-packages/torch/optim/lr_scheduler.py", line 65, in wrapper
    return wrapped(*args, **kwargs)
  File "/home/orcak80/miniconda3/envs/pcs/lib/python3.8/site-packages/torch/optim/optimizer.py", line 88, in wrapper
    return func(*args, **kwargs)
  File "/home/orcak80/miniconda3/envs/pcs/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 28, in decorate_context
    return func(*args, **kwargs)
  File "/home/orcak80/miniconda3/envs/pcs/lib/python3.8/site-packages/torch/optim/adam.py", line 92, in step
    loss = closure()
  File "/home/orcak80/miniconda3/envs/pcs/lib/python3.8/site-packages/pytorch_lightning/plugins/precision/precision_plugin.py", line 140, in _wrap_closure
    closure_result = closure()
  File "/home/orcak80/miniconda3/envs/pcs/lib/python3.8/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 148, in __call__
    self._result = self.closure(*args, **kwargs)
  File "/home/orcak80/miniconda3/envs/pcs/lib/python3.8/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 143, in closure
    self._backward_fn(step_output.closure_loss)
  File "/home/orcak80/miniconda3/envs/pcs/lib/python3.8/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 311, in backward_fn
    self.trainer._call_strategy_hook("backward", loss, optimizer, opt_idx)
  File "/home/orcak80/miniconda3/envs/pcs/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1765, in _call_strategy_hook
    output = fn(*args, **kwargs)
  File "/home/orcak80/miniconda3/envs/pcs/lib/python3.8/site-packages/pytorch_lightning/strategies/strategy.py", line 168, in backward
    self.precision_plugin.backward(self.lightning_module, closure_loss, *args, **kwargs)
  File "/home/orcak80/miniconda3/envs/pcs/lib/python3.8/site-packages/pytorch_lightning/plugins/precision/precision_plugin.py", line 80, in backward
    model.backward(closure_loss, optimizer, *args, **kwargs)
  File "/home/orcak80/miniconda3/envs/pcs/lib/python3.8/site-packages/pytorch_lightning/core/lightning.py", line 1391, in backward
    loss.backward(*args, **kwargs)
  File "/home/orcak80/miniconda3/envs/pcs/lib/python3.8/site-packages/torch/_tensor.py", line 307, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
  File "/home/orcak80/miniconda3/envs/pcs/lib/python3.8/site-packages/torch/autograd/__init__.py", line 154, in backward
    Variable._execution_engine.run_backward(
RuntimeError: merge_sort: failed to synchronize: cudaErrorIllegalAddress: an illegal memory access was encountered.
During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "scripts/train.py", line 87, in <module>
    main()
  File "/home/orcak80/miniconda3/envs/pcs/lib/python3.8/site-packages/click/core.py", line 1130, in __call__
    return self.main(*args, **kwargs)
  File "/home/orcak80/miniconda3/envs/pcs/lib/python3.8/site-packages/click/core.py", line 1055, in main
    rv = self.invoke(ctx)
  File "/home/orcak80/miniconda3/envs/pcs/lib/python3.8/site-packages/click/core.py", line 1404, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/home/orcak80/miniconda3/envs/pcs/lib/python3.8/site-packages/click/core.py", line 760, in invoke
    return __callback(*args, **kwargs)
  File "scripts/train.py", line 83, in main
    trainer.fit(model, data, ckpt_path=checkpoint)
  File "/home/orcak80/miniconda3/envs/pcs/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 770, in fit
    self._call_and_handle_interrupt(
  File "/home/orcak80/miniconda3/envs/pcs/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 738, in _call_and_handle_interrupt
    self._teardown()
  File "/home/orcak80/miniconda3/envs/pcs/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1300, in _teardown
    self.strategy.teardown()
  File "/home/orcak80/miniconda3/envs/pcs/lib/python3.8/site-packages/pytorch_lightning/strategies/single_device.py", line 96, in teardown
    self.lightning_module.cpu()
  File "/home/orcak80/miniconda3/envs/pcs/lib/python3.8/site-packages/pytorch_lightning/core/mixins/device_dtype_mixin.py", line 147, in cpu
    return super().cpu()
  File "/home/orcak80/miniconda3/envs/pcs/lib/python3.8/site-packages/torch/nn/modules/module.py", line 710, in cpu
    return self._apply(lambda t: t.cpu())
  File "/home/orcak80/miniconda3/envs/pcs/lib/python3.8/site-packages/torch/nn/modules/module.py", line 570, in _apply
    module._apply(fn)
  File "/home/orcak80/miniconda3/envs/pcs/lib/python3.8/site-packages/torch/nn/modules/module.py", line 570, in _apply
    module._apply(fn)
  File "/home/orcak80/miniconda3/envs/pcs/lib/python3.8/site-packages/torch/nn/modules/module.py", line 570, in _apply
    module._apply(fn)
  File "/home/orcak80/miniconda3/envs/pcs/lib/python3.8/site-packages/torch/nn/modules/module.py", line 593, in _apply
    param_applied = fn(param)
  File "/home/orcak80/miniconda3/envs/pcs/lib/python3.8/site-packages/torch/nn/modules/module.py", line 710, in <lambda>
    return self._apply(lambda t: t.cpu())
RuntimeError: CUDA error: an illegal memory access was encountered

It seems to be a problem in the backward pass. Could you tell me the reason for this? Thank you!

Does using closed loop poses have a significant impact on recognition results?

Dear author,
I ran the 10_scans model you trained on my own computer (not with the LiDAR from the KITTI dataset, but with a sensor of similar detection accuracy) and was pleased that moving objects can be detected, although some moving objects are missed. The poses I used are closed-loop poses. Do the closed-loop poses I used have an impact on the detection results?
Looking forward to your reply!

predict confidences when using pose

I'm sorry to bother you again. When I tried to predict using poses, I ran into a problem. The pose file I used is the one you provided without loop closure.

python scripts/predict_confidences.py -w 10_scans.ckpt -dt 0.1 -seq 8 -poses poses.txt
Ground truth poses are not avaialble.
Traceback (most recent call last):
  File "scripts/predict_confidences.py", line 89, in <module>
    main()
  File "/root/anaconda3/envs/py3-mink/lib/python3.7/site-packages/click/core.py", line 1130, in __call__
    return self.main(*args, **kwargs)
  File "/root/anaconda3/envs/py3-mink/lib/python3.7/site-packages/click/core.py", line 1055, in main
    rv = self.invoke(ctx)
  File "/root/anaconda3/envs/py3-mink/lib/python3.7/site-packages/click/core.py", line 1404, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/root/anaconda3/envs/py3-mink/lib/python3.7/site-packages/click/core.py", line 760, in invoke
    return __callback(*args, **kwargs)
  File "scripts/predict_confidences.py", line 72, in main
    data.setup()
  File "/home/robot/4DMOS/src/mos4d/data/datasets.py", line 47, in setup
    train_set = KittiSequentialDataset(self.cfg, split="train")
  File "/home/robot/4DMOS/src/mos4d/data/datasets.py", line 177, in __init__
    self.poses[seq] = self.read_poses(path_to_seq)
  File "/home/robot/4DMOS/src/mos4d/data/datasets.py", line 296, in read_poses
    inv_frame0 = np.linalg.inv(poses[0])
IndexError: index 0 is out of bounds for axis 0 with size 0

UserWarning: GPUs can't support floating point data

UserWarning: GPUs can't support floating point data with more than 32-bits, precision will be lost due to downcasting to 32-bit float.
warnings.warn(F64_PRECISION_WARNING)
Within the code, the model can be either 64-bit or 32-bit. However, why is this warning being raised here?

Problem with pretrained model

Dear Professor:
Thank you for your nice work on identifying moving objects on the ground. However, when I use the pretrained models, it reports an error in loading the state_dict for MOSNet. I don't know why this happens or how to solve it. Do you have any advice?

Screenshot from 2022-07-05 13-33-16
Yours
Qi Wu

Docker build issue

Hello,

while running make build, the build process gets stuck at

Step 16/17 : RUN adduser --disabled-password --gecos '' --uid $USER_ID --gid $GROUP_ID user
 ---> Running in ad1c2386776b
Adding user `user' ...
Adding new user `user' (1545614636) with group `user' ...
Creating home directory `/home/user' ...
Copying files from `/etc/skel' ...

I waited but it does not proceed. Any idea why this might be happening?

Best,
Haider

Evaluation on 16 channel LiDAR pointcloud

First of all, thank you for great work and code release!

I have one question.
Is there any evaluation result on 16-channel LiDAR point clouds, or have you ever tested on them?
I tested 4DMOS on my custom dataset (VLP-16, OS-64) and the result for the OS-64 was awesome. But in the case of the VLP-16, the algorithm can't catch moving objects at all.
I wonder whether this is caused by a mistake on my side (e.g. wrong input poses) or by the lack of varied training data!

Thank you!

Some questions about the pretrained model

Thank you for your nice work. I have some questions about the pretrained model.
I downloaded the model "10_scans.ckpt" and tried to evaluate it on the SemanticKITTI dataset using the SemanticKITTI API, but I can't reproduce the results in the paper.
This is the result in the paper: w/o bf 74.3
This is my result: w/o bf 52.2
Did I miss something?

How to visualize the result?

I have tested the code on my own data and got the labels. But I don't know how to visualize the result. Could you give me some advice on that?

Performance on trained weights

Hello there,

I am playing back a rosbag file for KITTI sequence 14 at a rate of 0.1. It turns out that, for me, the best performing weights are 5_scans_no_poses. I transform all 5 scans to the world frame before feeding them to the model. Could you give a hint on what I could debug?

Please see the pictures below of the 5 scans visualized together in the world frame. We can see the trailing dynamic points clearly. The trail is clearly detected with 5_scans_no_poses.ckpt but not with 5_scans.ckpt. I get similar results for the 2 and 10 scan window weights.

5_scans_no_poses.ckpt:
[image]

5_scans.ckpt:
[image]

Best regards,
Haider

Deployment

Hello!!

I want to test the model trained on 10 scans in deployment and have a few questions about the predict_step function.

  1. If I am only interested in the confidence scores of the current scan, would it be enough to use logits = out.features_at(0)? Assuming that index 0 here points to the 10th scan and index 9 points to the current scan.

  2. Do you have some deployment code which I can use to test with new sensor data?
    I am testing with the following code. The cuda_tensor_list consists of 10 scans, where the current scan is at index 9 and the 10th scan is at index 0.

         with torch.no_grad():
             out = model.forward(cuda_tensor_list)
             logits = out.features_at(0)
             pred_softmax = F.softmax(logits, dim=1)
             pred_softmax = pred_softmax.detach().cpu().numpy()
             moving_confidence = pred_softmax[:, 2]
             pred_labels = np.ones_like(moving_confidence)
             pred_labels[moving_confidence > 0.5] = 2
             indexes_label_1 = np.where(pred_labels == 1)
             indexes_label_2 = np.where(pred_labels == 2)
    
             array_of_colors = np.zeros(shape=logits.shape)
             array_of_colors[indexes_label_1] = [0,1,0] # Points with label 1 are colored in Green.
             array_of_colors[indexes_label_2] = [1,0,0] # Points with label 2 are colored in Red.
    
             print(f"Dynamic Points", {np.sum(pred_labels==2)})
             print(f"Static Points", {np.sum(pred_labels==1)})
             if np.sum(pred_labels == 2) > 0:
                 geom = o3d.geometry.PointCloud()
                 geom.points = o3d.utility.Vector3dVector(points[0][:,:3])
                 geom.colors = o3d.utility.Vector3dVector(array_of_colors)
                 o3d.visualization.draw_geometries([geom])
    

Best,
Haider

About the gap between testing result and validation result on SemanticKITTI

Hi, thanks for sharing your great work.
I am confused that your validation result is 77.2% as shown in the paper, while your testing result is just 65.2%. Could you explain the gap between the testing and validation results? Some issues like #5 indicate that the poses affect the performance significantly. Is that the reason for this gap? Thanks.

Was able to run everything. All of a sudden, I got this error: TypeError: expected str, bytes or os.PathLike object, not NoneType

Thank you for sharing this example.

~/4DMOS$ python scripts/train.py
/home/yc/miniconda3/lib/python3.11/site-packages/MinkowskiEngine-0.5.4-py3.11-linux-x86_64.egg/MinkowskiEngine/__init__.py:36: UserWarning: The environment variable OMP_NUM_THREADS not set. MinkowskiEngine will automatically set OMP_NUM_THREADS=16. If you want to set OMP_NUM_THREADS manually, please export it on the command line before running a python script. e.g. export OMP_NUM_THREADS=12; python your_program.py. It is recommended to set it below 24.
warnings.warn(
GPU available: True (cuda), used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
You are using a CUDA device ('NVIDIA GeForce RTX 3080') that has Tensor Cores. To properly utilize them, you should set torch.set_float32_matmul_precision('medium' | 'high') which will trade-off precision for performance. For more details, read https://pytorch.org/docs/stable/generated/torch.set_float32_matmul_precision.html#torch.set_float32_matmul_precision
Traceback (most recent call last):
  File "/home/yc/4DMOS/scripts/train.py", line 87, in <module>
    main()
  File "/home/yc/miniconda3/lib/python3.11/site-packages/click/core.py", line 1157, in __call__
    return self.main(*args, **kwargs)
  File "/home/yc/miniconda3/lib/python3.11/site-packages/click/core.py", line 1078, in main
    rv = self.invoke(ctx)
  File "/home/yc/miniconda3/lib/python3.11/site-packages/click/core.py", line 1434, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/home/yc/miniconda3/lib/python3.11/site-packages/click/core.py", line 783, in invoke
    return __callback(*args, **kwargs)
  File "/home/yc/4DMOS/scripts/train.py", line 83, in main
    trainer.fit(model, data, ckpt_path=checkpoint)
  File "/home/yc/miniconda3/lib/python3.11/site-packages/pytorch_lightning/trainer/trainer.py", line 544, in fit
    call._call_and_handle_interrupt(
  File "/home/yc/miniconda3/lib/python3.11/site-packages/pytorch_lightning/trainer/call.py", line 44, in _call_and_handle_interrupt
    return trainer_fn(*args, **kwargs)
  File "/home/yc/miniconda3/lib/python3.11/site-packages/pytorch_lightning/trainer/trainer.py", line 580, in _fit_impl
    self._run(model, ckpt_path=ckpt_path)
  File "/home/yc/miniconda3/lib/python3.11/site-packages/pytorch_lightning/trainer/trainer.py", line 950, in _run
    call._call_setup_hook(self)  # allow user to setup lightning_module in accelerator environment
  File "/home/yc/miniconda3/lib/python3.11/site-packages/pytorch_lightning/trainer/call.py", line 92, in _call_setup_hook
    _call_lightning_datamodule_hook(trainer, "setup", stage=fn)
  File "/home/yc/miniconda3/lib/python3.11/site-packages/pytorch_lightning/trainer/call.py", line 179, in _call_lightning_datamodule_hook
    return fn(*args, **kwargs)
  File "/home/yc/4DMOS/src/mos4d/datasets/datasets.py", line 47, in setup
    train_set = KittiSequentialDataset(self.cfg, split="train")
  File "/home/yc/4DMOS/src/mos4d/datasets/datasets.py", line 172, in __init__
    path_to_seq = os.path.join(self.root_dir, seqstr)
  File "<frozen posixpath>", line 76, in join
TypeError: expected str, bytes or os.PathLike object, not NoneType

Cannot find the path

Hello Benedikt, I am sorry for asking about every small detail. I am trying to run

!python3 /content/4DMOS/scripts/predict_confidences.py -w /content/MinkowskiEngine/10_scans.ckpt

and got this error

Traceback (most recent call last):
  File "/content/4DMOS/scripts/predict_confidences.py", line 84, in <module>
    main()
  File "/usr/local/lib/python3.7/dist-packages/click/core.py", line 829, in __call__
    return self.main(*args, **kwargs)
  File "/usr/local/lib/python3.7/dist-packages/click/core.py", line 782, in main
    rv = self.invoke(ctx)
  File "/usr/local/lib/python3.7/dist-packages/click/core.py", line 1066, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/local/lib/python3.7/dist-packages/click/core.py", line 610, in invoke
    return callback(*args, **kwargs)
  File "/content/4DMOS/scripts/predict_confidences.py", line 72, in main
    data.setup()
  File "/usr/local/lib/python3.7/dist-packages/mos4d/datasets/datasets.py", line 47, in setup
    train_set = KittiSequentialDataset(self.cfg, split="train")
  File "/usr/local/lib/python3.7/dist-packages/mos4d/datasets/datasets.py", line 172, in __init__
    path_to_seq = os.path.join(self.root_dir, seqstr)
  File "/usr/lib/python3.7/posixpath.py", line 80, in join
    a = os.fspath(a)
TypeError: expected str, bytes or os.PathLike object, not NoneType

I assume this comes from the data path. Actually, I have already exported the path: !export DATA=/content/4DMOS/dataset/sequences
and I also tried to change self.root_path. However, the error still persists.

No module named mos4d

[image]

Hello, I am trying to run 4DMOS in Google Colab. However, it shows "No module named mos4d" and cannot run the program. Do you have any idea how to solve this problem?

Thank you.

Model adaptation for movable object segmentation

Hello,

Firstly thanks for the nice work.

I understand that the model is designed to segment only moving objects by considering receding windows, but I would like to know whether it is possible to adapt the current model to segment any "movable" objects (objects with the ability to move, even parked cars or standing persons)?
(I see that relaxing the problem from moving to movable objects could be seen as plain semantic segmentation of all object classes, but I believe the idea of receding windows would still improve the performance.)

Results regarding Apollo

Thank you very much for your work.

In your paper, in Table II, you compare your method with others on the Apollo dataset. Where did you find this dataset and its annotations? I'd be interested in comparing my methods with yours on your dataset.

Best,

Jules

Other IoU MOS values in the evaluation

I tried to replicate your results with the pretrained models. I followed all the instructions from the README. After setting everything up in Docker, I use the following commands to evaluate the pretrained model with 10 scans:

python scripts/predict_confidences.py -w xxx/xxx/10_scans.ckpt -dt 0.1 -seq 8
python scripts/confidences_to_labels.py -p predictions/ID/POSES -seq 8 -prior 0.25 -dt 0.1 -s bayes

IoU MOS evaluation

cd semantic-kitti-api
 ./evaluate_mos.py --dataset /path/to/kitti/dataset/ --predictions /path/to/method_predictions --split train/valid/test # depending on the desired split to evaluate

My result is 57.0, which is less than the 77.2 you report in the paper.
I don't understand why.

Custom visualization - mismatch data shape

Hi,

I'm trying to use the pretrained models for inference on custom data. I first convert my data to the 4DMOS format, make it a new sequence, and run predict_confidences.py and confidences_to_labels.py with no issue.

Then, I would like to use the predicted labels to filter my raw point clouds. However, if I open the raw point cloud and the predicted label file, they do not have the same shape. Also, the shape of the label does not match the combination of several raw point clouds.

How should I attempt this?

Thank you!

TypeError: cannot pickle 'MinkowskiConvolutionFunction' object

GREAT WORK!
Traceback (most recent call last):
  File "scripts/train.py", line 89, in <module>
    main()
  File "/usr/local/envs/4dmos/lib/python3.8/site-packages/click/core.py", line 1130, in __call__
    return self.main(*args, **kwargs)
  File "/usr/local/envs/4dmos/lib/python3.8/site-packages/click/core.py", line 1055, in main
    rv = self.invoke(ctx)
  File "/usr/local/envs/4dmos/lib/python3.8/site-packages/click/core.py", line 1404, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/local/envs/4dmos/lib/python3.8/site-packages/click/core.py", line 760, in invoke
    return __callback(*args, **kwargs)
  File "scripts/train.py", line 85, in main
    trainer.fit(model, data, ckpt_path=checkpoint)
  File "/usr/local/envs/4dmos/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 770, in fit
    self._call_and_handle_interrupt(
  File "/usr/local/envs/4dmos/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 721, in _call_and_handle_interrupt
    return self.strategy.launcher.launch(trainer_fn, *args, trainer=self, **kwargs)
  File "/usr/local/envs/4dmos/lib/python3.8/site-packages/pytorch_lightning/strategies/launchers/spawn.py", line 78, in launch
    mp.spawn(
  File "/usr/local/envs/4dmos/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 230, in spawn
    return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
  File "/usr/local/envs/4dmos/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 179, in start_processes
    process.start()
  File "/usr/local/envs/4dmos/lib/python3.8/multiprocessing/process.py", line 121, in start
    self._popen = self._Popen(self)
  File "/usr/local/envs/4dmos/lib/python3.8/multiprocessing/context.py", line 284, in _Popen
    return Popen(process_obj)
  File "/usr/local/envs/4dmos/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 32, in __init__
    super().__init__(process_obj)
  File "/usr/local/envs/4dmos/lib/python3.8/multiprocessing/popen_fork.py", line 19, in __init__
    self._launch(process_obj)
  File "/usr/local/envs/4dmos/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 47, in _launch
    reduction.dump(process_obj, fp)
  File "/usr/local/envs/4dmos/lib/python3.8/multiprocessing/reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
TypeError: cannot pickle 'MinkowskiConvolutionFunction' object

This happens when I set N_GPU: 2.
How can I fix this?

some questions about MinkowskiEngine

First of all, thanks for your nice repo. I find it difficult to export an ONNX model for TensorRT deployment because of MinkowskiEngine. How can I export to ONNX or deploy with TensorRT on an Orin? Thank you!

evaluate problem

I wanted to evaluate but ran into the following situation. The visualization works normally.
IMG_4687

Error building the docker

Thanks for the great work!

When I try building the Docker image, I get the error below, which is related to building the MinkowskiEngine wheels. I checked their webpage; there are different types of errors related to CUDA and PyTorch, but this error is different. Since I am using the Docker setup of this project, I was wondering if anyone has encountered a similar issue.

Distributor ID:	Ubuntu
Description:	Ubuntu 20.04.5 LTS
gcc (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0

I only copied the error section:

make build

... 
Building wheel for MinkowskiEngine (setup.py): still running...
  Building wheel for MinkowskiEngine (setup.py): finished with status 'error'
  **ERROR: Command errored out with exit status 1**:
   command: /opt/conda/bin/python3 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-arjsv0pu/minkowskiengine_1b30d99509b54b2eb496604f16599cf7/setup.py'"'"'; __file__='"'"'/tmp/pip-install-arjsv0pu/minkowskiengine_1b30d99509b54b2eb496604f16599cf7/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d /tmp/pip-wheel-dgmhj87k
       cwd: /tmp/pip-install-arjsv0pu/minkowskiengine_1b30d99509b54b2eb496604f16599cf7/
  Complete output (230 lines):
  WARNING: Skipping MinkowskiEngine as it is not installed.
  --------------------------------
  | CUDA compilation set         |
  --------------------------------
  
  Using BLAS=openblas
  Using the default compiler
  running bdist_wheel
  /opt/conda/lib/python3.7/site-packages/torch/utils/cpp_extension.py:381: UserWarning: Attempted to use ninja as the BuildExtension backend but we could not find ninja.. Falling back to using the slow distutils backend.
    warnings.warn(msg.format('we could not find ninja.'))
  running build
  running build_py
  creating build
  creating build/lib.linux-x86_64-3.7
  creating build/lib.linux-x86_64-3.7/MinkowskiEngine
  copying ./MinkowskiEngine/diagnostics.py -> build/lib.linux-x86_64-3.7/MinkowskiEngine
  copying ./MinkowskiEngine/MinkowskiConvolution.py -> build/lib.linux-x86_64-3.7/MinkowskiEngine
  copying ./MinkowskiEngine/MinkowskiCoordinateManager.py -> build/lib.linux-x86_64-3.7/MinkowskiEngine
  copying ./MinkowskiEngine/MinkowskiBroadcast.py -> build/lib.linux-x86_64-3.7/MinkowskiEngine
  copying ./MinkowskiEngine/MinkowskiUnion.py -> build/lib.linux-x86_64-3.7/MinkowskiEngine
  copying ./MinkowskiEngine/MinkowskiNetwork.py -> build/lib.linux-x86_64-3.7/MinkowskiEngine
  copying ./MinkowskiEngine/MinkowskiKernelGenerator.py -> build/lib.linux-x86_64-3.7/MinkowskiEngine
  copying ./MinkowskiEngine/MinkowskiFunctional.py -> build/lib.linux-x86_64-3.7/MinkowskiEngine
  copying ./MinkowskiEngine/MinkowskiNonlinearity.py -> build/lib.linux-x86_64-3.7/MinkowskiEngine
  copying ./MinkowskiEngine/MinkowskiPooling.py -> build/lib.linux-x86_64-3.7/MinkowskiEngine
  copying ./MinkowskiEngine/MinkowskiChannelwiseConvolution.py -> build/lib.linux-x86_64-3.7/MinkowskiEngine
  copying ./MinkowskiEngine/MinkowskiCommon.py -> build/lib.linux-x86_64-3.7/MinkowskiEngine
  copying ./MinkowskiEngine/sparse_matrix_functions.py -> build/lib.linux-x86_64-3.7/MinkowskiEngine
  copying ./MinkowskiEngine/MinkowskiTensor.py -> build/lib.linux-x86_64-3.7/MinkowskiEngine
  copying ./MinkowskiEngine/MinkowskiSparseTensor.py -> build/lib.linux-x86_64-3.7/MinkowskiEngine
  copying ./MinkowskiEngine/MinkowskiNormalization.py -> build/lib.linux-x86_64-3.7/MinkowskiEngine
  copying ./MinkowskiEngine/MinkowskiTensorField.py -> build/lib.linux-x86_64-3.7/MinkowskiEngine
  copying ./MinkowskiEngine/MinkowskiOps.py -> build/lib.linux-x86_64-3.7/MinkowskiEngine
  copying ./MinkowskiEngine/__init__.py -> build/lib.linux-x86_64-3.7/MinkowskiEngine
  copying ./MinkowskiEngine/MinkowskiInterpolation.py -> build/lib.linux-x86_64-3.7/MinkowskiEngine
  copying ./MinkowskiEngine/MinkowskiPruning.py -> build/lib.linux-x86_64-3.7/MinkowskiEngine
  creating build/lib.linux-x86_64-3.7/MinkowskiEngine/utils
  copying ./MinkowskiEngine/utils/quantization.py -> build/lib.linux-x86_64-3.7/MinkowskiEngine/utils
  copying ./MinkowskiEngine/utils/coords.py -> build/lib.linux-x86_64-3.7/MinkowskiEngine/utils
  copying ./MinkowskiEngine/utils/torchsummary.py -> build/lib.linux-x86_64-3.7/MinkowskiEngine/utils
  copying ./MinkowskiEngine/utils/gradcheck.py -> build/lib.linux-x86_64-3.7/MinkowskiEngine/utils
  copying ./MinkowskiEngine/utils/__init__.py -> build/lib.linux-x86_64-3.7/MinkowskiEngine/utils
  copying ./MinkowskiEngine/utils/init.py -> build/lib.linux-x86_64-3.7/MinkowskiEngine/utils
  copying ./MinkowskiEngine/utils/collation.py -> build/lib.linux-x86_64-3.7/MinkowskiEngine/utils
  copying ./MinkowskiEngine/utils/summary.py -> build/lib.linux-x86_64-3.7/MinkowskiEngine/utils
  creating build/lib.linux-x86_64-3.7/MinkowskiEngine/modules
  copying ./MinkowskiEngine/modules/senet_block.py -> build/lib.linux-x86_64-3.7/MinkowskiEngine/modules
  copying ./MinkowskiEngine/modules/resnet_block.py -> build/lib.linux-x86_64-3.7/MinkowskiEngine/modules
  copying ./MinkowskiEngine/modules/__init__.py -> build/lib.linux-x86_64-3.7/MinkowskiEngine/modules
  running build_ext
  building 'MinkowskiEngineBackend._C' extension
  C compiler: gcc -pthread -B /opt/conda/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC
  
  creating build/temp.linux-x86_64-3.7
  creating build/temp.linux-x86_64-3.7/tmp
  creating build/temp.linux-x86_64-3.7/tmp/pip-install-arjsv0pu
  creating build/temp.linux-x86_64-3.7/tmp/pip-install-arjsv0pu/minkowskiengine_1b30d99509b54b2eb496604f16599cf7
  creating build/temp.linux-x86_64-3.7/tmp/pip-install-arjsv0pu/minkowskiengine_1b30d99509b54b2eb496604f16599cf7/src
  creating build/temp.linux-x86_64-3.7/pybind
  compile options: '-I/opt/conda/lib/python3.7/site-packages/torch/include -I/opt/conda/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -I/opt/conda/lib/python3.7/site-packages/torch/include/TH -I/opt/conda/lib/python3.7/site-packages/torch/include/THC -I/usr/local/cuda/include -I/tmp/pip-install-arjsv0pu/minkowskiengine_1b30d99509b54b2eb496604f16599cf7/src -I/tmp/pip-install-arjsv0pu/minkowskiengine_1b30d99509b54b2eb496604f16599cf7/src/3rdparty -I/opt/conda/include/python3.7m -c'
  extra options: 'cxx nvcc'
  gcc: /tmp/pip-install-arjsv0pu/minkowskiengine_1b30d99509b54b2eb496604f16599cf7/src/math_functions_cpu.cpp
  nvcc: /tmp/pip-install-arjsv0pu/minkowskiengine_1b30d99509b54b2eb496604f16599cf7/src/coordinate_map_manager.cu
  nvcc: /tmp/pip-install-arjsv0pu/minkowskiengine_1b30d99509b54b2eb496604f16599cf7/src/math_functions_gpu.cu
  nvcc: /tmp/pip-install-arjsv0pu/minkowskiengine_1b30d99509b54b2eb496604f16599cf7/src/coordinate_map_gpu.cu
  cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
  nvcc: /tmp/pip-install-arjsv0pu/minkowskiengine_1b30d99509b54b2eb496604f16599cf7/src/convolution_kernel.cu
  nvcc: /tmp/pip-install-arjsv0pu/minkowskiengine_1b30d99509b54b2eb496604f16599cf7/src/convolution_transpose_gpu.cu
  nvcc: /tmp/pip-install-arjsv0pu/minkowskiengine_1b30d99509b54b2eb496604f16599cf7/src/pooling_avg_kernel.cunvcc: /tmp/pip-install-arjsv0pu/minkowskiengine_1b30d99509b54b2eb496604f16599cf7/src/convolution_gpu.cu
  
  nvcc: /tmp/pip-install-arjsv0pu/minkowskiengine_1b30d99509b54b2eb496604f16599cf7/src/pooling_max_kernel.cu
  /tmp/pip-install-arjsv0pu/minkowskiengine_1b30d99509b54b2eb496604f16599cf7/src/convolution_kernel.cu(334): warning: integer conversion resulted in a change of sign
  
  /tmp/pip-install-arjsv0pu/minkowskiengine_1b30d99509b54b2eb496604f16599cf7/src/convolution_kernel.cu(573): warning: integer conversion resulted in a change of sign
  
  /tmp/pip-install-arjsv0pu/minkowskiengine_1b30d99509b54b2eb496604f16599cf7/src/coordinate_map_cpu.hpp(58): warning: variable "float_type" was declared but never referenced
            detected during:
              instantiation of "std::pair<at::Tensor, at::Tensor> minkowski::CoordinateMapCPU<coordinate_type, TemplatedAllocator>::field_map(const coordinate_field_type *, minkowski::CoordinateMapCPU<coordinate_type, TemplatedAllocator>::size_type) const [with coordinate_type=int32_t, TemplatedAllocator=std::allocator, coordinate_field_type=float]"
  /tmp/pip-install-arjsv0pu/minkowskiengine_1b30d99509b54b2eb496604f16599cf7/src/coordinate_map_manager.cpp(329): here
              instantiation of "std::pair<at::Tensor, at::Tensor> minkowski::CoordinateMapManager<coordinate_type, coordinate_field_type, TemplatedAllocator, CoordinateMapType>::field_to_sparse_map(const minkowski::CoordinateMapKey *, const minkowski::CoordinateMapKey *) [with coordinate_type=int32_t, coordinate_field_type=float, TemplatedAllocator=std::allocator, CoordinateMapType=minkowski::CoordinateMapCPU]"
  /tmp/pip-install-arjsv0pu/minkowskiengine_1b30d99509b54b2eb496604f16599cf7/src/coordinate_map_manager.cpp(1448): here
  
  /tmp/pip-install-arjsv0pu/minkowskiengine_1b30d99509b54b2eb496604f16599cf7/src/coordinate_map_manager.cpp(717): warning: returning reference to local temporary
            detected during instantiation of "const minkowski::CoordinateMapManager<coordinate_type, coordinate_field_type, TemplatedAllocator, CoordinateMapType>::kernel_map_type &minkowski::CoordinateMapManager<coordinate_type, coordinate_field_type, TemplatedAllocator, CoordinateMapType>::kernel_map(const minkowski::CoordinateMapKey *, const minkowski::CoordinateMapKey *, const minkowski::CoordinateMapManager<coordinate_type, coordinate_field_type, TemplatedAllocator, CoordinateMapType>::stride_type &, const minkowski::CoordinateMapManager<coordinate_type, coordinate_field_type, TemplatedAllocator, CoordinateMapType>::stride_type &, const minkowski::CoordinateMapManager<coordinate_type, coordinate_field_type, TemplatedAllocator, CoordinateMapType>::stride_type &, minkowski::RegionType::Type, const at::Tensor &, __nv_bool, __nv_bool) [with coordinate_type=int32_t, coordinate_field_type=float, TemplatedAllocator=std::allocator, CoordinateMapType=minkowski::CoordinateMapCPU]"
  (1448): here
  
  /tmp/pip-install-arjsv0pu/minkowskiengine_1b30d99509b54b2eb496604f16599cf7/src/coordinate_map_manager.cpp(717): warning: returning reference to local temporary
            detected during instantiation of "const minkowski::CoordinateMapManager<coordinate_type, coordinate_field_type, TemplatedAllocator, CoordinateMapType>::kernel_map_type &minkowski::CoordinateMapManager<coordinate_type, coordinate_field_type, TemplatedAllocator, CoordinateMapType>::kernel_map(const minkowski::CoordinateMapKey *, const minkowski::CoordinateMapKey *, const minkowski::CoordinateMapManager<coordinate_type, coordinate_field_type, TemplatedAllocator, CoordinateMapType>::stride_type &, const minkowski::CoordinateMapManager<coordinate_type, coordinate_field_type, TemplatedAllocator, CoordinateMapType>::stride_type &, const minkowski::CoordinateMapManager<coordinate_type, coordinate_field_type, TemplatedAllocator, CoordinateMapType>::stride_type &, minkowski::RegionType::Type, const at::Tensor &, __nv_bool, __nv_bool) [with coordinate_type=int32_t, coordinate_field_type=float, TemplatedAllocator=minkowski::detail::default_allocator, CoordinateMapType=minkowski::CoordinateMapGPU]"
  /tmp/pip-install-arjsv0pu/minkowskiengine_1b30d99509b54b2eb496604f16599cf7/src/coordinate_map_manager.cu(401): here
  
  /tmp/pip-install-arjsv0pu/minkowskiengine_1b30d99509b54b2eb496604f16599cf7/src/kernel_region.hpp(392): warning: calling a __host__ function from a __host__ __device__ function is not allowed
            detected during:
              instantiation of "minkowski::gpu_kernel_region<coordinate_type>::gpu_kernel_region(const minkowski::cpu_kernel_region<coordinate_type> &) [with coordinate_type=int32_t]"
  /tmp/pip-install-arjsv0pu/minkowskiengine_1b30d99509b54b2eb496604f16599cf7/src/coordinate_map_manager.cu(205): here
              instantiation of "minkowski::gpu_kernel_map<minkowski::type_wrapper<uint32_t, int32_t, float>::index_type, TemplatedAllocator<char>> minkowski::detail::kernel_map_functor<coordinate_type, TemplatedAllocator, minkowski::CoordinateMapGPU, minkowski::gpu_kernel_map<minkowski::type_wrapper<uint32_t, int32_t, float>::index_type, TemplatedAllocator<char>>>::operator()(const minkowski::CoordinateMapGPU<coordinate_type, TemplatedAllocator> &, const minkowski::CoordinateMapGPU<coordinate_type, TemplatedAllocator> &, minkowski::CUDAKernelMapMode::Mode, minkowski::cpu_kernel_region<coordinate_type> &) [with coordinate_type=int32_t, TemplatedAllocator=minkowski::detail::default_allocator]"
  /tmp/pip-install-arjsv0pu/minkowskiengine_1b30d99509b54b2eb496604f16599cf7/src/coordinate_map_manager.cpp(753): here
              instantiation of "const minkowski::CoordinateMapManager<coordinate_type, coordinate_field_type, TemplatedAllocator, CoordinateMapType>::kernel_map_type &minkowski::CoordinateMapManager<coordinate_type, coordinate_field_type, TemplatedAllocator, CoordinateMapType>::kernel_map(const minkowski::CoordinateMapKey *, const minkowski::CoordinateMapKey *, const minkowski::CoordinateMapManager<coordinate_type, coordinate_field_type, TemplatedAllocator, CoordinateMapType>::stride_type &, const minkowski::CoordinateMapManager<coordinate_type, coordinate_field_type, TemplatedAllocator, CoordinateMapType>::stride_type &, const minkowski::CoordinateMapManager<coordinate_type, coordinate_field_type, TemplatedAllocator, CoordinateMapType>::stride_type &, minkowski::RegionType::Type, const at::Tensor &, __nv_bool, __nv_bool) [with coordinate_type=int32_t, coordinate_field_type=float, TemplatedAllocator=minkowski::detail::default_allocator, CoordinateMapType=minkowski::CoordinateMapGPU]"
  /tmp/pip-install-arjsv0pu/minkowskiengine_1b30d99509b54b2eb496604f16599cf7/src/coordinate_map_manager.cu(401): here
  
  /tmp/pip-install-arjsv0pu/minkowskiengine_1b30d99509b54b2eb496604f16599cf7/src/kernel_region.hpp(392): warning: calling a __host__ function from a __host__ __device__ function is not allowed
            detected during:
              instantiation of "minkowski::gpu_kernel_region<coordinate_type>::gpu_kernel_region(const minkowski::cpu_kernel_region<coordinate_type> &) [with coordinate_type=int32_t]"
  /tmp/pip-install-arjsv0pu/minkowskiengine_1b30d99509b54b2eb496604f16599cf7/src/coordinate_map_manager.cu(205): here
              instantiation of "minkowski::gpu_kernel_map<minkowski::type_wrapper<uint32_t, int32_t, float>::index_type, TemplatedAllocator<char>> minkowski::detail::kernel_map_functor<coordinate_type, TemplatedAllocator, minkowski::CoordinateMapGPU, minkowski::gpu_kernel_map<minkowski::type_wrapper<uint32_t, int32_t, float>::index_type, TemplatedAllocator<char>>>::operator()(const minkowski::CoordinateMapGPU<coordinate_type, TemplatedAllocator> &, const minkowski::CoordinateMapGPU<coordinate_type, TemplatedAllocator> &, minkowski::CUDAKernelMapMode::Mode, minkowski::cpu_kernel_region<coordinate_type> &) [with coordinate_type=int32_t, TemplatedAllocator=minkowski::detail::default_allocator]"
  /tmp/pip-install-arjsv0pu/minkowskiengine_1b30d99509b54b2eb496604f16599cf7/src/coordinate_map_manager.cpp(753): here
              instantiation of "const minkowski::CoordinateMapManager<coordinate_type, coordinate_field_type, TemplatedAllocator, CoordinateMapType>::kernel_map_type &minkowski::CoordinateMapManager<coordinate_type, coordinate_field_type, TemplatedAllocator, CoordinateMapType>::kernel_map(const minkowski::CoordinateMapKey *, const minkowski::CoordinateMapKey *, const minkowski::CoordinateMapManager<coordinate_type, coordinate_field_type, TemplatedAllocator, CoordinateMapType>::stride_type &, const minkowski::CoordinateMapManager<coordinate_type, coordinate_field_type, TemplatedAllocator, CoordinateMapType>::stride_type &, const minkowski::CoordinateMapManager<coordinate_type, coordinate_field_type, TemplatedAllocator, CoordinateMapType>::stride_type &, minkowski::RegionType::Type, const at::Tensor &, __nv_bool, __nv_bool) [with coordinate_type=int32_t, coordinate_field_type=float, TemplatedAllocator=minkowski::detail::default_allocator, CoordinateMapType=minkowski::CoordinateMapGPU]"
  /tmp/pip-install-arjsv0pu/minkowskiengine_1b30d99509b54b2eb496604f16599cf7/src/coordinate_map_manager.cu(401): here
  
  /tmp/pip-install-arjsv0pu/minkowskiengine_1b30d99509b54b2eb496604f16599cf7/src/kernel_region.hpp(392): warning: calling a __host__ function from a __host__ __device__ function is not allowed
            detected during:
              instantiation of "minkowski::gpu_kernel_region<coordinate_type>::gpu_kernel_region(const minkowski::cpu_kernel_region<coordinate_type> &) [with coordinate_type=int32_t]"
  /tmp/pip-install-arjsv0pu/minkowskiengine_1b30d99509b54b2eb496604f16599cf7/src/coordinate_map_manager.cu(205): here
              instantiation of "minkowski::gpu_kernel_map<minkowski::type_wrapper<uint32_t, int32_t, float>::index_type, TemplatedAllocator<char>> minkowski::detail::kernel_map_functor<coordinate_type, TemplatedAllocator, minkowski::CoordinateMapGPU, minkowski::gpu_kernel_map<minkowski::type_wrapper<uint32_t, int32_t, float>::index_type, TemplatedAllocator<char>>>::operator()(const minkowski::CoordinateMapGPU<coordinate_type, TemplatedAllocator> &, const minkowski::CoordinateMapGPU<coordinate_type, TemplatedAllocator> &, minkowski::CUDAKernelMapMode::Mode, minkowski::cpu_kernel_region<coordinate_type> &) [with coordinate_type=int32_t, TemplatedAllocator=minkowski::detail::default_allocator]"
  /tmp/pip-install-arjsv0pu/minkowskiengine_1b30d99509b54b2eb496604f16599cf7/src/coordinate_map_manager.cpp(753): here
              instantiation of "const minkowski::CoordinateMapManager<coordinate_type, coordinate_field_type, TemplatedAllocator, CoordinateMapType>::kernel_map_type &minkowski::CoordinateMapManager<coordinate_type, coordinate_field_type, TemplatedAllocator, CoordinateMapType>::kernel_map(const minkowski::CoordinateMapKey *, const minkowski::CoordinateMapKey *, const minkowski::CoordinateMapManager<coordinate_type, coordinate_field_type, TemplatedAllocator, CoordinateMapType>::stride_type &, const minkowski::CoordinateMapManager<coordinate_type, coordinate_field_type, TemplatedAllocator, CoordinateMapType>::stride_type &, const minkowski::CoordinateMapManager<coordinate_type, coordinate_field_type, TemplatedAllocator, CoordinateMapType>::stride_type &, minkowski::RegionType::Type, const at::Tensor &, __nv_bool, __nv_bool) [with coordinate_type=int32_t, coordinate_field_type=float, TemplatedAllocator=minkowski::detail::default_allocator, CoordinateMapType=minkowski::CoordinateMapGPU]"
  /tmp/pip-install-arjsv0pu/minkowskiengine_1b30d99509b54b2eb496604f16599cf7/src/coordinate_map_manager.cu(401): here
  
  /tmp/pip-install-arjsv0pu/minkowskiengine_1b30d99509b54b2eb496604f16599cf7/src/kernel_region.hpp(392): warning: calling a __host__ function from a __host__ __device__ function is not allowed
            detected during:
              instantiation of "minkowski::gpu_kernel_region<coordinate_type>::gpu_kernel_region(const minkowski::cpu_kernel_region<coordinate_type> &) [with coordinate_type=int32_t]"
  /tmp/pip-install-arjsv0pu/minkowskiengine_1b30d99509b54b2eb496604f16599cf7/src/coordinate_map_manager.cu(205): here
              instantiation of "minkowski::gpu_kernel_map<minkowski::type_wrapper<uint32_t, int32_t, float>::index_type, TemplatedAllocator<char>> minkowski::detail::kernel_map_functor<coordinate_type, TemplatedAllocator, minkowski::CoordinateMapGPU, minkowski::gpu_kernel_map<minkowski::type_wrapper<uint32_t, int32_t, float>::index_type, TemplatedAllocator<char>>>::operator()(const minkowski::CoordinateMapGPU<coordinate_type, TemplatedAllocator> &, const minkowski::CoordinateMapGPU<coordinate_type, TemplatedAllocator> &, minkowski::CUDAKernelMapMode::Mode, minkowski::cpu_kernel_region<coordinate_type> &) [with coordinate_type=int32_t, TemplatedAllocator=minkowski::detail::default_allocator]"
  /tmp/pip-install-arjsv0pu/minkowskiengine_1b30d99509b54b2eb496604f16599cf7/src/coordinate_map_manager.cpp(753): here
              instantiation of "const minkowski::CoordinateMapManager<coordinate_type, coordinate_field_type, TemplatedAllocator, CoordinateMapType>::kernel_map_type &minkowski::CoordinateMapManager<coordinate_type, coordinate_field_type, TemplatedAllocator, CoordinateMapType>::kernel_map(const minkowski::CoordinateMapKey *, const minkowski::CoordinateMapKey *, const minkowski::CoordinateMapManager<coordinate_type, coordinate_field_type, TemplatedAllocator, CoordinateMapType>::stride_type &, const minkowski::CoordinateMapManager<coordinate_type, coordinate_field_type, TemplatedAllocator, CoordinateMapType>::stride_type &, const minkowski::CoordinateMapManager<coordinate_type, coordinate_field_type, TemplatedAllocator, CoordinateMapType>::stride_type &, minkowski::RegionType::Type, const at::Tensor &, __nv_bool, __nv_bool) [with coordinate_type=int32_t, coordinate_field_type=float, TemplatedAllocator=minkowski::detail::default_allocator, CoordinateMapType=minkowski::CoordinateMapGPU]"
  /tmp/pip-install-arjsv0pu/minkowskiengine_1b30d99509b54b2eb496604f16599cf7/src/coordinate_map_manager.cu(401): here
  
  /tmp/pip-install-arjsv0pu/minkowskiengine_1b30d99509b54b2eb496604f16599cf7/src/kernel_region.hpp(392): warning: calling a __host__ function from a __host__ __device__ function is not allowed
            detected during:
              instantiation of "minkowski::gpu_kernel_region<coordinate_type>::gpu_kernel_region(const minkowski::cpu_kernel_region<coordinate_type> &) [with coordinate_type=int32_t]"
  /tmp/pip-install-arjsv0pu/minkowskiengine_1b30d99509b54b2eb496604f16599cf7/src/coordinate_map_gpu.cu(625): here
              instantiation of "minkowski::CoordinateMapGPU<coordinate_type, TemplatedAllocator>::self_type minkowski::CoordinateMapGPU<coordinate_type, TemplatedAllocator>::stride_region(minkowski::cpu_kernel_region<coordinate_type> &, const minkowski::CoordinateMapGPU<coordinate_type, TemplatedAllocator>::stride_type &) const [with coordinate_type=int32_t, TemplatedAllocator=minkowski::detail::default_allocator]"
  /tmp/pip-install-arjsv0pu/minkowskiengine_1b30d99509b54b2eb496604f16599cf7/src/coordinate_map_gpu.cu(2460): here
  
  /tmp/pip-install-arjsv0pu/minkowskiengine_1b30d99509b54b2eb496604f16599cf7/src/coordinate_map_manager.cpp(717): warning: returning reference to local temporary
            detected during instantiation of "const minkowski::CoordinateMapManager<coordinate_type, coordinate_field_type, TemplatedAllocator, CoordinateMapType>::kernel_map_type &minkowski::CoordinateMapManager<coordinate_type, coordinate_field_type, TemplatedAllocator, CoordinateMapType>::kernel_map(const minkowski::CoordinateMapKey *, const minkowski::CoordinateMapKey *, const minkowski::CoordinateMapManager<coordinate_type, coordinate_field_type, TemplatedAllocator, CoordinateMapType>::stride_type &, const minkowski::CoordinateMapManager<coordinate_type, coordinate_field_type, TemplatedAllocator, CoordinateMapType>::stride_type &, const minkowski::CoordinateMapManager<coordinate_type, coordinate_field_type, TemplatedAllocator, CoordinateMapType>::stride_type &, minkowski::RegionType::Type, const at::Tensor &, __nv_bool, __nv_bool) [with coordinate_type=int32_t, coordinate_field_type=float, TemplatedAllocator=minkowski::detail::c10_allocator, CoordinateMapType=minkowski::CoordinateMapGPU]"
  /tmp/pip-install-arjsv0pu/minkowskiengine_1b30d99509b54b2eb496604f16599cf7/src/coordinate_map_manager.cu(404): here
  
  nvcc: /tmp/pip-install-arjsv0pu/minkowskiengine_1b30d99509b54b2eb496604f16599cf7/src/local_pooling_gpu.cu
  /tmp/pip-install-arjsv0pu/minkowskiengine_1b30d99509b54b2eb496604f16599cf7/src/kernel_region.hpp(392): warning: calling a __host__ function("minkowski::cpu_kernel_region<int> ::device_tensor_stride() const") from a __host__ __device__ function("minkowski::gpu_kernel_region<int> ::gpu_kernel_region") is not allowed
  
  /tmp/pip-install-arjsv0pu/minkowskiengine_1b30d99509b54b2eb496604f16599cf7/src/kernel_region.hpp(393): warning: calling a __host__ function("minkowski::cpu_kernel_region<int> ::device_kernel_size() const") from a __host__ __device__ function("minkowski::gpu_kernel_region<int> ::gpu_kernel_region") is not allowed
  
  /tmp/pip-install-arjsv0pu/minkowskiengine_1b30d99509b54b2eb496604f16599cf7/src/kernel_region.hpp(393): warning: calling a __host__ function("minkowski::cpu_kernel_region<int> ::device_dilation() const") from a __host__ __device__ function("minkowski::gpu_kernel_region<int> ::gpu_kernel_region") is not allowed
  
  /tmp/pip-install-arjsv0pu/minkowskiengine_1b30d99509b54b2eb496604f16599cf7/src/kernel_region.hpp(394): warning: calling a __host__ function("minkowski::cpu_kernel_region<int> ::device_offset() const") from a __host__ __device__ function("minkowski::gpu_kernel_region<int> ::gpu_kernel_region") is not allowed
  
  nvcc: /tmp/pip-install-arjsv0pu/minkowskiengine_1b30d99509b54b2eb496604f16599cf7/src/local_pooling_transpose_gpu.cu
  /tmp/pip-install-arjsv0pu/minkowskiengine_1b30d99509b54b2eb496604f16599cf7/src/convolution_kernel.cu(334): warning: integer conversion resulted in a change of sign
  
  /tmp/pip-install-arjsv0pu/minkowskiengine_1b30d99509b54b2eb496604f16599cf7/src/convolution_kernel.cu(573): warning: integer conversion resulted in a change of sign
  
  /tmp/pip-install-arjsv0pu/minkowskiengine_1b30d99509b54b2eb496604f16599cf7/src/coordinate_map_manager.cpp(717): warning: returning reference to local temporary
            detected during instantiation of "const minkowski::CoordinateMapManager<coordinate_type, coordinate_field_type, TemplatedAllocator, CoordinateMapType>::kernel_map_type &minkowski::CoordinateMapManager<coordinate_type, coordinate_field_type, TemplatedAllocator, CoordinateMapType>::kernel_map(const minkowski::CoordinateMapKey *, const minkowski::CoordinateMapKey *, const minkowski::CoordinateMapManager<coordinate_type, coordinate_field_type, TemplatedAllocator, CoordinateMapType>::stride_type &, const minkowski::CoordinateMapManager<coordinate_type, coordinate_field_type, TemplatedAllocator, CoordinateMapType>::stride_type &, const minkowski::CoordinateMapManager<coordinate_type, coordinate_field_type, TemplatedAllocator, CoordinateMapType>::stride_type &, minkowski::RegionType::Type, const at::Tensor &, bool, bool) [with coordinate_type=int32_t, coordinate_field_type=float, TemplatedAllocator=std::allocator, CoordinateMapType=minkowski::CoordinateMapCPU]"
  (1448): here
  
  /tmp/pip-install-arjsv0pu/minkowskiengine_1b30d99509b54b2eb496604f16599cf7/src/coordinate_map_manager.cpp(717): warning: returning reference to local temporary
            detected during instantiation of "const minkowski::CoordinateMapManager<coordinate_type, coordinate_field_type, TemplatedAllocator, CoordinateMapType>::kernel_map_type &minkowski::CoordinateMapManager<coordinate_type, coordinate_field_type, TemplatedAllocator, CoordinateMapType>::kernel_map(const minkowski::CoordinateMapKey *, const minkowski::CoordinateMapKey *, const minkowski::CoordinateMapManager<coordinate_type, coordinate_field_type, TemplatedAllocator, CoordinateMapType>::stride_type &, const minkowski::CoordinateMapManager<coordinate_type, coordinate_field_type, TemplatedAllocator, CoordinateMapType>::stride_type &, const minkowski::CoordinateMapManager<coordinate_type, coordinate_field_type, TemplatedAllocator, CoordinateMapType>::stride_type &, minkowski::RegionType::Type, const at::Tensor &, bool, bool) [with coordinate_type=int32_t, coordinate_field_type=float, TemplatedAllocator=minkowski::detail::default_allocator, CoordinateMapType=minkowski::CoordinateMapGPU]"
  /tmp/pip-install-arjsv0pu/minkowskiengine_1b30d99509b54b2eb496604f16599cf7/src/coordinate_map_manager.cu(401): here
  
  /tmp/pip-install-arjsv0pu/minkowskiengine_1b30d99509b54b2eb496604f16599cf7/src/coordinate_map_manager.cpp(717): warning: returning reference to local temporary
            detected during instantiation of "const minkowski::CoordinateMapManager<coordinate_type, coordinate_field_type, TemplatedAllocator, CoordinateMapType>::kernel_map_type &minkowski::CoordinateMapManager<coordinate_type, coordinate_field_type, TemplatedAllocator, CoordinateMapType>::kernel_map(const minkowski::CoordinateMapKey *, const minkowski::CoordinateMapKey *, const minkowski::CoordinateMapManager<coordinate_type, coordinate_field_type, TemplatedAllocator, CoordinateMapType>::stride_type &, const minkowski::CoordinateMapManager<coordinate_type, coordinate_field_type, TemplatedAllocator, CoordinateMapType>::stride_type &, const minkowski::CoordinateMapManager<coordinate_type, coordinate_field_type, TemplatedAllocator, CoordinateMapType>::stride_type &, minkowski::RegionType::Type, const at::Tensor &, bool, bool) [with coordinate_type=int32_t, coordinate_field_type=float, TemplatedAllocator=minkowski::detail::c10_allocator, CoordinateMapType=minkowski::CoordinateMapGPU]"
  /tmp/pip-install-arjsv0pu/minkowskiengine_1b30d99509b54b2eb496604f16599cf7/src/coordinate_map_manager.cu(404): here
  
  nvcc: /tmp/pip-install-arjsv0pu/minkowskiengine_1b30d99509b54b2eb496604f16599cf7/src/global_pooling_gpu.cu
  /tmp/pip-install-arjsv0pu/minkowskiengine_1b30d99509b54b2eb496604f16599cf7/src/coordinate_map_manager.cpp: In instantiation of ‘const kernel_map_type& minkowski::CoordinateMapManager<coordinate_type, coordinate_field_type, TemplatedAllocator, CoordinateMapType>::kernel_map(const minkowski::CoordinateMapKey*, const minkowski::CoordinateMapKey*, const stride_type&, const stride_type&, const stride_type&, minkowski::RegionType::Type, const at::Tensor&, bool, bool) [with coordinate_type = int; coordinate_field_type = float; TemplatedAllocator = std::allocator; CoordinateMapType = minkowski::CoordinateMapCPU; minkowski::CoordinateMapManager<coordinate_type, coordinate_field_type, TemplatedAllocator, CoordinateMapType>::kernel_map_type = minkowski::cpu_kernel_map; minkowski::CoordinateMapManager<coordinate_type, coordinate_field_type, TemplatedAllocator, CoordinateMapType>::stride_type = std::vector<unsigned int, std::allocator<unsigned int> >]’:
  /tmp/pip-install-arjsv0pu/minkowskiengine_1b30d99509b54b2eb496604f16599cf7/src/coordinate_map_manager.cpp:1448:16:   required from here
  /tmp/pip-install-arjsv0pu/minkowskiengine_1b30d99509b54b2eb496604f16599cf7/src/coordinate_map_manager.cpp:717:260: warning: returning reference to temporary [-Wreturn-local-addr]
         return detail::empty_map_functor<coordinate_type, TemplatedAllocator,
                                                                                                                                                                                                                                                                      ^
  /tmp/pip-install-arjsv0pu/minkowskiengine_1b30d99509b54b2eb496604f16599cf7/src/coordinate_map_manager.cpp: In instantiation of ‘const kernel_map_type& minkowski::CoordinateMapManager<coordinate_type, coordinate_field_type, TemplatedAllocator, CoordinateMapType>::kernel_map(const minkowski::CoordinateMapKey*, const minkowski::CoordinateMapKey*, const stride_type&, const stride_type&, const stride_type&, minkowski::RegionType::Type, const at::Tensor&, bool, bool) [with coordinate_type = int; coordinate_field_type = float; TemplatedAllocator = minkowski::detail::default_allocator; CoordinateMapType = minkowski::CoordinateMapGPU; minkowski::CoordinateMapManager<coordinate_type, coordinate_field_type, TemplatedAllocator, CoordinateMapType>::kernel_map_type = minkowski::gpu_kernel_map<unsigned int, minkowski::detail::default_allocator<char> >; minkowski::CoordinateMapManager<coordinate_type, coordinate_field_type, TemplatedAllocator, CoordinateMapType>::stride_type = std::vector<unsigned int, std::allocator<unsigned int> >]’:
  /tmp/pip-install-arjsv0pu/minkowskiengine_1b30d99509b54b2eb496604f16599cf7/src/coordinate_map_manager.cu:401:16:   required from here
  /tmp/pip-install-arjsv0pu/minkowskiengine_1b30d99509b54b2eb496604f16599cf7/src/coordinate_map_manager.cpp:717:260: warning: returning reference to temporary [-Wreturn-local-addr]
  /tmp/pip-install-arjsv0pu/minkowskiengine_1b30d99509b54b2eb496604f16599cf7/src/coordinate_map_manager.cpp: In instantiation of ‘const kernel_map_type& minkowski::CoordinateMapManager<coordinate_type, coordinate_field_type, TemplatedAllocator, CoordinateMapType>::kernel_map(const minkowski::CoordinateMapKey*, const minkowski::CoordinateMapKey*, const stride_type&, const stride_type&, const stride_type&, minkowski::RegionType::Type, const at::Tensor&, bool, bool) [with coordinate_type = int; coordinate_field_type = float; TemplatedAllocator = minkowski::detail::c10_allocator; CoordinateMapType = minkowski::CoordinateMapGPU; minkowski::CoordinateMapManager<coordinate_type, coordinate_field_type, TemplatedAllocator, CoordinateMapType>::kernel_map_type = minkowski::gpu_kernel_map<unsigned int, minkowski::detail::c10_allocator<char> >; minkowski::CoordinateMapManager<coordinate_type, coordinate_field_type, TemplatedAllocator, CoordinateMapType>::stride_type = std::vector<unsigned int, std::allocator<unsigned int> >]’:
  /tmp/pip-install-arjsv0pu/minkowskiengine_1b30d99509b54b2eb496604f16599cf7/src/coordinate_map_manager.cu:404:16:   required from here
  /tmp/pip-install-arjsv0pu/minkowskiengine_1b30d99509b54b2eb496604f16599cf7/src/coordinate_map_manager.cpp:717:260: warning: returning reference to temporary [-Wreturn-local-addr]
  nvcc: /tmp/pip-install-arjsv0pu/minkowskiengine_1b30d99509b54b2eb496604f16599cf7/src/broadcast_kernel.cu
  nvcc: /tmp/pip-install-arjsv0pu/minkowskiengine_1b30d99509b54b2eb496604f16599cf7/src/broadcast_gpu.cu
  nvcc: /tmp/pip-install-arjsv0pu/minkowskiengine_1b30d99509b54b2eb496604f16599cf7/src/pruning_gpu.cu
  nvcc: /tmp/pip-install-arjsv0pu/minkowskiengine_1b30d99509b54b2eb496604f16599cf7/src/interpolation_gpu.cu
  nvcc: /tmp/pip-install-arjsv0pu/minkowskiengine_1b30d99509b54b2eb496604f16599cf7/src/spmm.cu
  nvcc: /tmp/pip-install-arjsv0pu/minkowskiengine_1b30d99509b54b2eb496604f16599cf7/src/gpu.cu
  /tmp/pip-install-arjsv0pu/minkowskiengine_1b30d99509b54b2eb496604f16599cf7/src/gpu.cu(104): warning: function "minkowski::format_size" was declared but never referenced
  
  nvcc: /tmp/pip-install-arjsv0pu/minkowskiengine_1b30d99509b54b2eb496604f16599cf7/src/quantization.cpp
  nvcc fatal   : Unknown option '-fopenmp'
  nvcc: /tmp/pip-install-arjsv0pu/minkowskiengine_1b30d99509b54b2eb496604f16599cf7/src/direct_max_pool.cpp
  nvcc fatal   : Unknown option '-fopenmp'
  nvcc: pybind/minkowski.cu
  /tmp/pip-install-arjsv0pu/minkowskiengine_1b30d99509b54b2eb496604f16599cf7/src/spmm.cu(102): warning: variable "is_int64" was declared but never referenced
  
  /tmp/pip-install-arjsv0pu/minkowskiengine_1b30d99509b54b2eb496604f16599cf7/src/spmm.cu(358): warning: variable "is_int64" was declared but never referenced
  
  /tmp/pip-install-arjsv0pu/minkowskiengine_1b30d99509b54b2eb496604f16599cf7/src/spmm.cu(465): warning: variable "num_unique_keys" was declared but never referenced
            detected during instantiation of "std::vector<at::Tensor, std::allocator<at::Tensor>> minkowski::coo_spmm_average<th_int_type>(const at::Tensor &, const at::Tensor &, int64_t, int64_t, const at::Tensor &, int64_t) [with th_int_type=int32_t]"
  (593): here
  
  error: Command "/usr/local/cuda/bin/nvcc -I/opt/conda/lib/python3.7/site-packages/torch/include -I/opt/conda/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -I/opt/conda/lib/python3.7/site-packages/torch/include/TH -I/opt/conda/lib/python3.7/site-packages/torch/include/THC -I/usr/local/cuda/include -I/tmp/pip-install-arjsv0pu/minkowskiengine_1b30d99509b54b2eb496604f16599cf7/src -I/tmp/pip-install-arjsv0pu/minkowskiengine_1b30d99509b54b2eb496604f16599cf7/src/3rdparty -I/opt/conda/include/python3.7m -c /tmp/pip-install-arjsv0pu/minkowskiengine_1b30d99509b54b2eb496604f16599cf7/src/quantization.cpp -o build/temp.linux-x86_64-3.7/tmp/pip-install-arjsv0pu/minkowskiengine_1b30d99509b54b2eb496604f16599cf7/src/quantization.o -fopenmp -O3 -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE="_gcc" -DPYBIND11_STDLIB="_libstdcpp" -DPYBIND11_BUILD_ABI="_cxxabi1011" -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14" failed with exit status 1
  ----------------------------------------
  ERROR: Failed building wheel for MinkowskiEngine
  Running setup.py clean for MinkowskiEngine
Failed to build MinkowskiEngine
Installing collected packages: pyasn1, zipp, typing-extensions, rsa, pyasn1-modules, oauthlib, multidict, frozenlist, cachetools, yarl, requests-oauthlib, pyparsing, MarkupSafe, importlib-metadata, google-auth, charset-normalizer, attrs, asynctest, async-timeout, aiosignal, werkzeug, tensorboard-plugin-wit, tensorboard-data-server, protobuf, packaging, markdown, grpcio, google-auth-oauthlib, fsspec, aiohttp, absl-py, tqdm, torchmetrics, tensorboard, PyYAML, pyDeprecate, pytorch-lightning, ninja, MinkowskiEngine, Click, mos4d
  Attempting uninstall: typing-extensions
    Found existing installation: typing-extensions 3.10.0.2
    Uninstalling typing-extensions-3.10.0.2:
      Successfully uninstalled typing-extensions-3.10.0.2
  Attempting uninstall: MarkupSafe
    Found existing installation: MarkupSafe 2.0.1
    Uninstalling MarkupSafe-2.0.1:
      Successfully uninstalled MarkupSafe-2.0.1
  Attempting uninstall: tqdm
    Found existing installation: tqdm 4.61.2
    Uninstalling tqdm-4.61.2:
      Successfully uninstalled tqdm-4.61.2
  Attempting uninstall: PyYAML
    Found existing installation: PyYAML 5.4.1
    Uninstalling PyYAML-5.4.1:
      Successfully uninstalled PyYAML-5.4.1
    Running setup.py install for MinkowskiEngine: started
    Running setup.py install for MinkowskiEngine: still running...
    Running setup.py install for MinkowskiEngine: finished with status 'done'
  DEPRECATION: MinkowskiEngine was installed using the legacy 'setup.py install' method, because a wheel could not be built for it. A possible replacement is to fix the wheel build issue reported above. You can find discussion regarding this at https://github.com/pypa/pip/issues/8368.
  Running setup.py develop for mos4d
Successfully installed Click-8.1.3 MarkupSafe-2.1.1 MinkowskiEngine-0.5.4 PyYAML-6.0 absl-py-1.2.0 aiohttp-3.8.3 aiosignal-1.2.0 async-timeout-4.0.2 asynctest-0.13.0 attrs-22.1.0 cachetools-5.2.0 charset-normalizer-2.1.1 frozenlist-1.3.1 fsspec-2022.8.2 google-auth-2.12.0 google-auth-oauthlib-0.4.6 grpcio-1.49.1 importlib-metadata-5.0.0 markdown-3.4.1 mos4d multidict-6.0.2 ninja-1.10.2.4 oauthlib-3.2.1 packaging-21.3 protobuf-3.19.6 pyDeprecate-0.3.2 pyasn1-0.4.8 pyasn1-modules-0.2.8 pyparsing-3.0.9 pytorch-lightning-1.7.7 requests-oauthlib-1.3.1 rsa-4.9 tensorboard-2.10.1 tensorboard-data-server-0.6.1 tensorboard-plugin-wit-1.8.1 torchmetrics-0.10.0 tqdm-4.64.1 typing-extensions-4.3.0 werkzeug-2.2.2 yarl-1.8.1 zipp-3.8.1
Removing intermediate container 9f66deda49e7
Removing intermediate container e9d102faadec
 ---> bec9b29ccb30
Step 19/20 : RUN adduser --disabled-password --gecos '' --uid $USER_ID --gid $GROUP_ID user
 ---> Running in e4c8f059ced8
Adding user `user' ...
Adding new user `user' (1006) with group `user' ...
Creating home directory `/home/user' ...
Copying files from `/etc/skel' ...
Removing intermediate container e4c8f059ced8
 ---> d55d1ceab527
Step 20/20 : USER user
 ---> Running in 5fb038b1388c
Removing intermediate container 5fb038b1388c
 ---> 090248160fe3

Successfully built 090248160fe3
Successfully tagged mos4d:latest
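
Despite the failed wheel build (the nvcc fatal "Unknown option '-fopenmp'" messages above), pip fell back to the legacy setup.py install, MinkowskiEngine 0.5.4 was installed, and the image built successfully. As a minimal sanity check (a sketch, assuming you are inside the container started with make run), you can confirm that the extension imports and that PyTorch sees the GPU:

```
python3 -c "import MinkowskiEngine as ME; print('MinkowskiEngine', ME.__version__)"
python3 -c "import torch; print('CUDA available:', torch.cuda.is_available())"
```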

Inference problem

When I try to use your model for inference, I encounter the error shown in the screenshot below. My pytorch-lightning version is 1.9.4; is that version too high? I saw the requirement is >=1.6.4.
[Attached screenshot: IMG_4675]
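
Without the full traceback it is hard to pin down, but pytorch-lightning changed several APIs between the 1.6 and 1.9 releases, so a version mismatch is a plausible cause. A quick way to test this (the exact version below is an assumption; any release satisfying >=1.6.4 that predates the breaking changes should do) is to downgrade and rerun inference:

```
python3 -m pip install "pytorch-lightning==1.6.4"
python3 -c "import pytorch_lightning as pl; print(pl.__version__)"
```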

Predicted labels do not appear.

Hello,
Thank you for your work again. I would like to know how the test data was annotated. I annotated KITTI raw data using the point_labeler tool (https://github.com/jbehley/point_labeler) and successfully trained my model. However, after running the test on the raw sequences and obtaining the predicted labels, when I open a predicted label file in the annotation tool it does not show any predictions.

[Screenshot from 2022-08-10 13-22-08]

It just looks like everything is gray.

I have tried changing def to_label in confidences_to_label.py, but had no luck. Could you please look into this?
Thank you.
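
One way to narrow this down is to inspect the predicted .label files directly before opening them in the annotation tool. The sketch below is a debugging aid, not code from this repository; it assumes the SemanticKITTI label format (the lower 16 bits of each uint32 entry hold the semantic label) and the common MOS convention of 9 for static and 251 for moving points. If every entry carries the same value, the labeler will show a single color, which would match the all-gray view above.

```
import numpy as np

# Inspect a predicted SemanticKITTI-style label file (debugging sketch).
# The path below is a placeholder.
labels = np.fromfile("predictions/000000.label", dtype=np.uint32)
semantic = labels & 0xFFFF  # lower 16 bits: semantic label, upper 16: instance id
values, counts = np.unique(semantic, return_counts=True)
print(dict(zip(values.tolist(), counts.tolist())))
# With the common MOS convention, expect 9 (static) and 251 (moving) here.
```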

Deploying 4DMOS on Robots

Hi, 4DMOS is excellent!
I would like to ask how I can deploy 4DMOS on a robot for real-time dynamic object segmentation, so that I can optimize mapping and localization in real-time. Could you provide some methods or suggestions? Thank you!
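
One hedged suggestion: conceptually, 4DMOS consumes a receding window of the most recent scans, so an online wrapper mainly has to keep a fixed-length buffer of past point clouds (transformed into a common frame using your odometry) and run the network whenever a new scan arrives. Below is a minimal, hypothetical sketch of that buffering logic; segment_window and WINDOW_SIZE are placeholders, not the repository's API.

```
from collections import deque
import numpy as np

WINDOW_SIZE = 10  # number of past scans in the receding window (assumption)

def segment_window(scans):
    """Placeholder: run the trained model on the stacked 4D window and
    return per-point moving scores for the most recent scan."""
    return np.zeros(len(scans[-1]))

window = deque(maxlen=WINDOW_SIZE)

def on_new_scan(points_xyz):
    """Buffer the latest scan (already in a common frame) and predict
    moving-object scores for it once the window is full."""
    window.append(points_xyz)
    if len(window) < WINDOW_SIZE:
        return np.zeros(len(points_xyz))  # not enough history yet
    return segment_window(list(window))
```

In a ROS-style setup, on_new_scan would be called from the point cloud subscriber callback, and the returned scores would be thresholded to drop moving points before they enter the mapping or localization pipeline.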
