
grnet's Introduction

GRNet

This repository contains the source code for the paper GRNet: Gridding Residual Network for Dense Point Cloud Completion.



Cite this work

@inproceedings{xie2020grnet,
  title={GRNet: Gridding Residual Network for Dense Point Cloud Completion},
  author={Xie, Haozhe and 
          Yao, Hongxun and 
          Zhou, Shangchen and 
          Mao, Jiageng and 
          Zhang, Shengping and 
          Sun, Wenxiu},
  booktitle={ECCV},
  year={2020}
}

Datasets

We use the ShapeNet, Completion3D, and KITTI datasets in our experiments, which are available below:

Pretrained Models

The pretrained models on ShapeNet are available as follows:

Prerequisites

Clone the Code Repository

git clone https://github.com/hzxie/GRNet.git

Install Python Dependencies

cd GRNet
pip install -r requirements.txt

Build PyTorch Extensions

NOTE: PyTorch >= 1.4, CUDA >= 9.0 and GCC >= 4.9 are required.

GRNET_HOME=`pwd`

# Chamfer Distance
cd $GRNET_HOME/extensions/chamfer_dist
python setup.py install --user

# Cubic Feature Sampling
cd $GRNET_HOME/extensions/cubic_feature_sampling
python setup.py install --user

# Gridding & Gridding Reverse
cd $GRNET_HOME/extensions/gridding
python setup.py install --user

# Gridding Loss
cd $GRNET_HOME/extensions/gridding_loss
python setup.py install --user
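
After building, a quick smoke test confirms that the extensions import correctly. This is a minimal sketch; the module names are assumed from each extension's setup.py (only chamfer is confirmed by the build logs further down this page), so adjust them if yours differ.

# Minimal import smoke test; module names are assumptions except 'chamfer'.
for name in ('chamfer', 'cubic_feature_sampling', 'gridding', 'gridding_distance'):
    try:
        __import__(name)
        print('%s: OK' % name)
    except ImportError as err:
        print('%s: FAILED (%s)' % (name, err))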

Preprocess the ShapeNet dataset

cd $GRNET_HOME/utils
python lmdb_serializer.py /path/to/shapenet/train.lmdb /path/to/output/shapenet/train
python lmdb_serializer.py /path/to/shapenet/valid.lmdb /path/to/output/shapenet/val

You can download the processed ShapeNet dataset here.

Update Settings in config.py

You need to update the file path of the datasets:

__C.DATASETS.COMPLETION3D.PARTIAL_POINTS_PATH    = '/path/to/datasets/Completion3D/%s/partial/%s/%s.h5'
__C.DATASETS.COMPLETION3D.COMPLETE_POINTS_PATH   = '/path/to/datasets/Completion3D/%s/gt/%s/%s.h5'
__C.DATASETS.SHAPENET.PARTIAL_POINTS_PATH        = '/path/to/datasets/ShapeNet/ShapeNetCompletion/%s/partial/%s/%s/%02d.pcd'
__C.DATASETS.SHAPENET.COMPLETE_POINTS_PATH       = '/path/to/datasets/ShapeNet/ShapeNetCompletion/%s/complete/%s/%s.pcd'
__C.DATASETS.KITTI.PARTIAL_POINTS_PATH           = '/path/to/datasets/KITTI/cars/%s.pcd'
__C.DATASETS.KITTI.BOUNDING_BOX_FILE_PATH        = '/path/to/datasets/KITTI/bboxes/%s.txt'

# Dataset Options: Completion3D, ShapeNet, ShapeNetCars, KITTI
__C.DATASET.TRAIN_DATASET                        = 'ShapeNet'
__C.DATASET.TEST_DATASET                         = 'ShapeNet'
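
For reference, the %s and %02d placeholders in the path templates are filled at runtime. A minimal illustration, assuming the arguments are (subset, taxonomy ID, model ID, rendering index) in that order; the model ID below is hypothetical:

# Illustrative only: hypothetical model ID, argument order assumed from the template.
partial_tpl = '/path/to/datasets/ShapeNet/ShapeNetCompletion/%s/partial/%s/%s/%02d.pcd'
print(partial_tpl % ('train', '02691156', '1a04e3eab45ca15dd86060f189eb133', 0))
# -> .../train/partial/02691156/1a04e3eab45ca15dd86060f189eb133/00.pcd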

Get Started

To train GRNet, you can simply use the following command:

python3 runner.py

To test GRNet, you can use the following command:

python3 runner.py --test --weights=/path/to/pretrained/model.pth

License

This project is open-sourced under the MIT license.

grnet's People

Contributors

hzxie, yuxumin


grnet's Issues

Mapping to the KITTI raw dataset

Hi @hzxie

Thanks for the great work !!

Can you please share the mapping from your processed KITTI dataset to the KITTI raw dataset?
Basically, I want to know which log/city/frame each object comes from.

Thanks !!

Question: Why are the values of the vertices larger than 1?

I used the command to display the values of the grid and observed that many values are greater than 1, which differs from Eq. (2) in the paper. I think N(v_i) is not divided out, causing w to be greater than one. Or am I misunderstanding Eq. (2)? (I expected its value to be between 0 and 1.)
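
For anyone reproducing this question, here is a minimal sketch that inspects the gridding output range; the import path is assumed from the repository layout (extensions/gridding/__init__.py) and a CUDA device is required:

# Minimal sketch; import path assumed from the repository layout.
import torch
from extensions.gridding import Gridding

gridding = Gridding(scale=64).cuda()
cloud = torch.rand(1, 2048, 3, device='cuda') - 0.5   # points in (-0.5, 0.5)
grid = gridding(cloud).view(64, 64, 64)
print(grid.min().item(), grid.max().item())           # values above 1 reproduce the question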

Questions about the Evaluation on KITTI Dataset

Hi, thanks for the amazing work. I'd like to inquire about the evaluation process of the consistency metric. I have obtained a much larger value than the result in your paper. It seems that if one of the inputs to the consistency metric is generated from very sparse points, the consistency value can be higher than 10. So how did you process these very sparse frames, or did you conduct any preprocessing before evaluation?

Running on win10

Hello,
Thanks a lot for this great repo.
I'm trying to run the pretrained model of your work on Windows 10, but I get multiple errors.
I wanted to know whether it's even possible to run it on Windows 10, or whether my machine has problems.
Thanks

Unable to run on KITTI

I have everything working fine on the ShapeNet dataset, but if I switch to KITTI it doesn't work. I have followed the steps to install dependencies and modify config.py.

python3 runner.py --test --weights=/home/skewen/GRNet/GRNet-KITTI.pth

Traceback (most recent call last):
  File "runner.py", line 76, in <module>
    main()
  File "runner.py", line 65, in main
    test_net(cfg)
  File "/home/skewen/GRNet/core/test.py", line 71, in test_net
    sparse_loss = chamfer_dist(sparse_ptcloud, data['gtcloud'])
KeyError: 'gtcloud'
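
A likely cause: the KITTI split ships no ground-truth clouds, so data contains no 'gtcloud' entry while core/test.py still evaluates a chamfer loss against it. A hedged sketch of one possible guard (an assumption, not the authors' fix):

# Hedged sketch, not the authors' fix: skip the loss when the dataset
# (e.g. KITTI) provides no ground-truth cloud.
def safe_chamfer(chamfer_dist, ptcloud, data):
    if 'gtcloud' not in data:
        return None   # no ground truth, so the metric cannot be computed
    return chamfer_dist(ptcloud, data['gtcloud'])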

chamfer.o No such file or directory

Hello, when I build chamfer_dist I get the error "GRNet/extensions/chamfer_dist/build/temp.linux-x86_64-3.7/chamfer.o: no such file or directory". How can I solve this problem?

How do you process the data for training? What is the normalization?

I tried to use my own data to train your model. I ran into this problem:

File "/home/server/PCSR_v2/GRNet_network.py", line 115, in forward
pt_features_64_l = self.gridding(partial_cloud).view(batch_size, 1, 64, 64, 64)
File "/home/server/miniconda3/envs/point2mesh/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in call
result = self.forward(*input, **kwargs)
File "/home/server/PCSR_v2/extensions/gridding/init.py", line 45, in forward
p = p[non_zeros].unsqueeze(dim=0)
RuntimeError: copy_if failed to synchronize: an illegal memory access was encountered

I assume that it is due to the normalization.
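
For reference, a minimal normalisation sketch, assuming the network expects points roughly within (-0.5, 0.5) (see the ShapeNet normalisation question further down this page); this is an assumption, not the authors' documented preprocessing:

# Minimal sketch under the assumed (-0.5, 0.5) target range.
import torch

def normalize_cloud(pts):
    """pts: (N, 3) tensor -> centred and scaled into (-0.5, 0.5)."""
    center = (pts.max(dim=0).values + pts.min(dim=0).values) / 2.0
    pts = pts - center
    return pts / (pts.abs().max() * 2.0)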

How to reproduce the experiment of shape completion on KITTI?

To reproduce the experiment of shape completion on KITTI, we just need to update settings in config.py:

# Dataset Options: Completion3D, ShapeNet, ShapeNetCars, KITTI
__C.DATASET.TRAIN_DATASET                        = 'ShapeNetCars'
__C.DATASET.TEST_DATASET                         = 'KITTI'

Then run python3 runner.py.

Am I right?

By the way, how can I save and visualize the completion results on KITTI as shown in your paper?
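
On saving the results: a hedged Open3D sketch (not the authors' visualisation code) that writes a completed cloud to a .pcd file for external viewing:

# Hedged sketch: 'dense_ptcloud' is assumed to be the (1, N, 3) network output.
import open3d as o3d

def save_completion(dense_ptcloud, path='completion.pcd'):
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(dense_ptcloud.squeeze(0).detach().cpu().numpy())
    o3d.io.write_point_cloud(path, pcd)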

How to evaluate GRNet on the Completion3D benchmark?

Hi,
Thanks for your great work, and I saw the great results of GRNet on the leaderboard of the Completion3D benchmark.

When I tried to evaluate GRNet on the Completion3D benchmark, I ran into some difficulties, such as the versions of CUDA and PyTorch: the Completion3D benchmark uses CUDA 9.0 and PyTorch 0.4.0.

Could you please give me some advice?

Thanks a lot!

Usage of gridding_loss

Hi, thanks for the amazing work.

I check the "train.py" code, but it seems not using "gridding_loss", is there any reason for that? Or I just need to add the "gridding_loss" in the "_loss".

Thanks in advance
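
A hedged sketch of adding the gridding loss next to the chamfer distance; the import paths and constructor keywords are assumed from the repository layout, the alpha/scale values come from the config dump elsewhere on this page, and the simple sum weighting is an assumption, not the authors' training schedule:

# Hedged sketch: import paths and keyword names are assumptions; the sum
# weighting is not the authors' documented schedule.
from extensions.chamfer_dist import ChamferDistance
from extensions.gridding_loss import GriddingLoss

chamfer_dist = ChamferDistance()
gridding_loss = GriddingLoss(scales=[128], alphas=[0.1])  # values from the config dump below

def total_loss(sparse_ptcloud, dense_ptcloud, gtcloud):
    sparse_loss = chamfer_dist(sparse_ptcloud, gtcloud)
    dense_loss = chamfer_dist(dense_ptcloud, gtcloud) + gridding_loss(dense_ptcloud, gtcloud)
    return sparse_loss + dense_loss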

bbox.txt data format

Hello, I am new to deep learning. I would like to know the bbox.txt data format. Thank you!

Problem when running python setup.py install --user

Hello, when I run python setup.py install --user, I get some errors:
/usr/local/lib/python3.6/dist-packages/torch/include/ATen/core/boxing/impl/boxing.h:268:122: error: expansion pattern 'c10::impl::can_box' contains no argument packs
/usr/local/lib/python3.6/dist-packages/torch/include/ATen/core/boxing/impl/boxing.h:268:149: error: template argument 1 is invalid
/usr/local/lib/python3.6/dist-packages/torch/include/c10/util/variant.h:2395:107: error: expansion pattern 'Arg&&' contains no argument packs
error: could not convert '0' from 'int' to 'torch::nn::functional::ConvFuncOptions<2ul>::padding_t {aka c10::variant<torch::ExpandingArray<2ul, long int>, torch::enumtype::kValid, torch::enumtype::kSame>}'
const Conv2dFuncOptions& options = {}) {
/usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/nn/functional/conv.h:161:13:

error: could not convert 'torch::kLeakyReLU' from 'const torch::enumtype::kLeakyReLU' to 'torch::nn::init::NonlinearityType {aka c10::variant<torch::enumtype::kLinear, torch::enumtype::kConv1D, torch::enumtype::kConv2D, torch::enumtype::kConv3D, torch::enumtype::kConvTranspose1D, torch::enumtype::kConvTranspose2D, torch::enumtype::kConvTranspose3D, torch::enumtype::kSigmoid, torch::enumtype::kTanh, torch::enumtype::kReLU, torch::enumtype::kLeakyReLU>}'
error: command '/usr/local/cuda/bin/nvcc' failed with exit status 1
How can I solve it?

Bugs in either the lmdb datasets or the serialization script?

I downloaded the ShapeNet dataset in lmdb format and tried to organize it into classname/modelname folders by using the script in utils/

cd $GRNET_HOME/utils
python lmdb_serializer.py /path/to/shapenet/train.lmdb /path/to/output/shapenet/train
python lmdb_serializer.py /path/to/shapenet/valid.lmdb /path/to/output/shapenet/val

While I got errors as follows:

$python lmdb_serializer.py ../data/ShapeNetCompletionZZZ/train.lmdb ../data/ShapeNetCompletionZZZ/
Failed to import tensorflow.
Traceback (most recent call last):
  File "lmdb_serializer.py", line 48, in <module>
    main()
  File "lmdb_serializer.py", line 25, in main
    df = dataflow.LMDBSerializer.load(lmdb_file_path, shuffle=False)
  File "/home/xieyunwei/.conda/envs/sparenet/lib/python3.7/site-packages/tensorpack/dataflow/serialize.py", line 114, in load
    df = LMDBData(path, shuffle=shuffle)
  File "/home/xieyunwei/.conda/envs/sparenet/lib/python3.7/site-packages/tensorpack/dataflow/format.py", line 90, in __init__
    self._set_keys(keys)
  File "/home/xieyunwei/.conda/envs/sparenet/lib/python3.7/site-packages/tensorpack/dataflow/format.py", line 110, in _set_keys
    self.keys = loads(self.keys)
  File "/home/xieyunwei/.conda/envs/sparenet/lib/python3.7/site-packages/tensorpack/utils/serialize.py", line 93, in loads
    return pickle.loads(buf)
_pickle.UnpicklingError: invalid load key, '\xdd'.

Is this because of the Chinese characters in the uploaded train/valid.lmdb datasets? Maybe lmdb in Python cannot process such characters? Anyway, this needs a fix.

Cannot train with gridding loss!

I replaced the chamfer distance with the gridding loss. No matter how big my GPU is, it reports an error after 4 or 5 iterations of the first epoch:

Traceback (most recent call last):
  File "runner.py", line 76, in <module>
    main()
  File "runner.py", line 58, in main
    train_net(cfg)
  File "/root/autodl-tmp/code/GRNet/core/train.py", line 115, in train_net
    dense_loss = gridding_loss(dense_ptcloud, data['gtcloud'])
  File "/root/miniconda3/envs/pytorch/lib/python3.6/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/root/autodl-tmp/code/GRNet/extensions/gridding_loss/__init__.py", line 107, in forward
    pred_grid, gt_grid = gdist(pred_cloud, gt_cloud)
  File "/root/miniconda3/envs/pytorch/lib/python3.6/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/root/autodl-tmp/code/GRNet/extensions/gridding_loss/__init__.py", line 89, in forward
    return torch.cat(pred_grids, dim=0).contiguous(), torch.cat(gt_grids, dim=0).contiguous()
RuntimeError: CUDA out of memory. Tried to allocate 2.34 GiB (GPU 0; 11.91 GiB total capacity; 10.18 GiB already allocated; 106.94 MiB free; 11.21 GiB reserved in total by PyTorch)

And I found that when using multi-GPU training, the GPU load is not uniform. The first GPU has the maximum load while the load on the other GPUs is very small, resulting in out-of-memory on the first GPU.

When I use the chamfer distance, everything is OK.

My environment is Python 3.6.13, CUDA 10.1.234, PyTorch 1.6.0, and 4 TITAN Xp GPUs.
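
One hedged mitigation (an assumption, not a verified fix): the gridding loss rasterises each cloud onto dense scale³ grids, so lowering the loss resolution or the batch size in config.py shrinks the allocation that fails above:

# Hedged mitigation, not a verified fix.
__C.NETWORK.GRIDDING_LOSS_SCALES = [64]   # the config dump elsewhere on this page shows [128]
__C.TRAIN.BATCH_SIZE = 16                 # reduce further if OOM persists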

How to evaluate the uniformity on the KITTI dataset?

In your paper, there are two metrics (Consistency and Uniformity) for evaluating the output point clouds from the net, but I haven't found the corresponding code. Can you tell me where it is so I can evaluate the output point clouds?
Thank you in advance.

Error in Extensions

I am trying to build the PyTorch extensions. I've tried changing the CUDA and torch versions several times, but the build keeps failing.
My environment is PyTorch 1.7.0, CUDA 11.0 and GCC 6.3.0.

Thanks for checking my issue.
Errors occur as follows:

(grnet_latest) C:\Users\USER\Desktop\GRNet\extensions\chamfer_dist>python setup.py install --user 
running install
C:\Users\USER\anaconda3\envs\grnet_latest\lib\site-packages\setuptools\command\install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
  warnings.warn(
C:\Users\USER\anaconda3\envs\grnet_latest\lib\site-packages\setuptools\command\easy_install.py:156: EasyInstallDeprecationWarning: easy_install command is deprecated. Use build and pip and other standards-based tools.
  warnings.warn(
running bdist_egg
running egg_info
writing chamfer.egg-info\PKG-INFO
writing dependency_links to chamfer.egg-info\dependency_links.txt
writing top-level names to chamfer.egg-info\top_level.txt
reading manifest file 'chamfer.egg-info\SOURCES.txt'
writing manifest file 'chamfer.egg-info\SOURCES.txt'
installing library code to build\bdist.win-amd64\egg
running install_lib
running build_ext
C:\Users\USER\anaconda3\envs\grnet_latest\lib\site-packages\torch\utils\cpp_extension.py:274: UserWarning: Error checking compiler version for cl: [WinError 2] The system cannot find the file specified
  warnings.warn('Error checking compiler version for {}: {}'.format(compiler, error))
building 'chamfer' extension
Emitting ninja build file C:\Users\USER\Desktop\GRNet\extensions\chamfer_dist\build\temp.win-amd64-3.8\Release\build.ninja...
Compiling objects...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
[1/1] C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.0\bin\nvcc -Xcompiler /MD -Xcompiler /wd4819 -Xcompiler /wd4251 -Xcompiler /wd4244 -Xcompiler /wd4267 -Xcompiler /wd4275 -Xcompiler /wd4018 -Xcompiler /wd4190 -Xcompiler /EHsc -Xcudafe --diag_suppress=base_class_has_different_dll_interface -Xcudafe --diag_suppress=field_without_dll_interface -Xcudafe --diag_suppress=dll_interface_conflict_none_assumed -Xcudafe --diag_suppress=dll_interface_conflict_dllexport_assumed -IC:\Users\USER\anaconda3\envs\grnet_latest\lib\site-packages\torch\include -IC:\Users\USER\anaconda3\envs\grnet_latest\lib\site-packages\torch\include\torch\csrc\api\include -IC:\Users\USER\anaconda3\envs\grnet_latest\lib\site-packages\torch\include\TH -IC:\Users\USER\anaconda3\envs\grnet_latest\lib\site-packages\torch\include\THC "-IC:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.0\include" -IC:\Users\USER\anaconda3\envs\grnet_latest\include -IC:\Users\USER\anaconda3\envs\grnet_latest\include "-IC:\Program Files (x86)\Microsoft Visual Studio\2019\Professional\VC\Tools\MSVC\14.29.30133\ATLMFC\include" "-IC:\Program Files (x86)\Microsoft Visual Studio\2019\Professional\VC\Tools\MSVC\14.29.30133\include" "-IC:\Program Files (x86)\Windows Kits\NETFXSDK\4.8\include\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\shared" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\winrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\cppwinrt" -c C:\Users\USER\Desktop\GRNet\extensions\chamfer_dist\chamfer.cu -o C:\Users\USER\Desktop\GRNet\extensions\chamfer_dist\build\temp.win-amd64-3.8\Release\chamfer.obj -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=chamfer -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_86,code=sm_86
FAILED: C:/Users/USER/Desktop/GRNet/extensions/chamfer_dist/build/temp.win-amd64-3.8/Release/chamfer.obj
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.0\bin\nvcc -Xcompiler /MD -Xcompiler /wd4819 -Xcompiler /wd4251 -Xcompiler /wd4244 -Xcompiler /wd4267 -Xcompiler /wd4275 -Xcompiler /wd4018 -Xcompiler /wd4190 -Xcompiler /EHsc -Xcudafe --diag_suppress=base_class_has_different_dll_interface -Xcudafe --diag_suppress=field_without_dll_interface -Xcudafe --diag_suppress=dll_interface_conflict_none_assumed -Xcudafe --diag_suppress=dll_interface_conflict_dllexport_assumed -IC:\Users\USER\anaconda3\envs\grnet_latest\lib\site-packages\torch\include -IC:\Users\USER\anaconda3\envs\grnet_latest\lib\site-packages\torch\include\torch\csrc\api\include -IC:\Users\USER\anaconda3\envs\grnet_latest\lib\site-packages\torch\include\TH -IC:\Users\USER\anaconda3\envs\grnet_latest\lib\site-packages\torch\include\THC "-IC:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.0\include" -IC:\Users\USER\anaconda3\envs\grnet_latest\include -IC:\Users\USER\anaconda3\envs\grnet_latest\include "-IC:\Program Files (x86)\Microsoft Visual Studio\2019\Professional\VC\Tools\MSVC\14.29.30133\ATLMFC\include" "-IC:\Program Files (x86)\Microsoft Visual Studio\2019\Professional\VC\Tools\MSVC\14.29.30133\include" "-IC:\Program Files (x86)\Windows Kits\NETFXSDK\4.8\include\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\shared" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\winrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\cppwinrt" -c C:\Users\USER\Desktop\GRNet\extensions\chamfer_dist\chamfer.cu -o C:\Users\USER\Desktop\GRNet\extensions\chamfer_dist\build\temp.win-amd64-3.8\Release\chamfer.obj -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=chamfer -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_86,code=sm_86 
nvcc fatal   : Unsupported gpu architecture 'compute_86'
ninja: build stopped: subcommand failed.
Traceback (most recent call last):
  File "C:\Users\USER\anaconda3\envs\grnet_latest\lib\site-packages\torch\utils\cpp_extension.py", line 1516, in _run_ninja_build
    subprocess.run(
  File "C:\Users\USER\anaconda3\envs\grnet_latest\lib\subprocess.py", line 516, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "setup.py", line 11, in <module>
    setup(name='chamfer',
  File "C:\Users\USER\anaconda3\envs\grnet_latest\lib\site-packages\setuptools\__init__.py", line 153, in setup
    return distutils.core.setup(**attrs)
  File "C:\Users\USER\anaconda3\envs\grnet_latest\lib\distutils\core.py", line 148, in setup
    dist.run_commands()
  File "C:\Users\USER\anaconda3\envs\grnet_latest\lib\distutils\dist.py", line 966, in run_commands
    self.run_command(cmd)
  File "C:\Users\USER\anaconda3\envs\grnet_latest\lib\distutils\dist.py", line 985, in run_command
    cmd_obj.run()
  File "C:\Users\USER\anaconda3\envs\grnet_latest\lib\site-packages\setuptools\command\install.py", line 74, in run
    self.do_egg_install()
  File "C:\Users\USER\anaconda3\envs\grnet_latest\lib\site-packages\setuptools\command\install.py", line 116, in do_egg_install
    self.run_command('bdist_egg')
  File "C:\Users\USER\anaconda3\envs\grnet_latest\lib\distutils\cmd.py", line 313, in run_command
    self.distribution.run_command(command)
  File "C:\Users\USER\anaconda3\envs\grnet_latest\lib\distutils\dist.py", line 985, in run_command
    cmd_obj.run()
  File "C:\Users\USER\anaconda3\envs\grnet_latest\lib\site-packages\setuptools\command\bdist_egg.py", line 164, in run
    cmd = self.call_command('install_lib', warn_dir=0)
  File "C:\Users\USER\anaconda3\envs\grnet_latest\lib\site-packages\setuptools\command\bdist_egg.py", line 150, in call_command
    self.run_command(cmdname)
  File "C:\Users\USER\anaconda3\envs\grnet_latest\lib\distutils\cmd.py", line 313, in run_command
    self.distribution.run_command(command)
  File "C:\Users\USER\anaconda3\envs\grnet_latest\lib\distutils\dist.py", line 985, in run_command
    cmd_obj.run()
  File "C:\Users\USER\anaconda3\envs\grnet_latest\lib\site-packages\setuptools\command\install_lib.py", line 11, in run
    self.build()
  File "C:\Users\USER\anaconda3\envs\grnet_latest\lib\distutils\command\install_lib.py", line 107, in build
    self.run_command('build_ext')
  File "C:\Users\USER\anaconda3\envs\grnet_latest\lib\distutils\cmd.py", line 313, in run_command
    self.distribution.run_command(command)
  File "C:\Users\USER\anaconda3\envs\grnet_latest\lib\distutils\dist.py", line 985, in run_command
    cmd_obj.run()
  File "C:\Users\USER\anaconda3\envs\grnet_latest\lib\site-packages\setuptools\command\build_ext.py", line 79, in run
    _build_ext.run(self)
  File "C:\Users\USER\anaconda3\envs\grnet_latest\lib\distutils\command\build_ext.py", line 340, in run
    self.build_extensions()
  File "C:\Users\USER\anaconda3\envs\grnet_latest\lib\site-packages\torch\utils\cpp_extension.py", line 653, in build_extensions
    build_ext.build_extensions(self)
  File "C:\Users\USER\anaconda3\envs\grnet_latest\lib\distutils\command\build_ext.py", line 449, in build_extensions
    self._build_extensions_serial()
  File "C:\Users\USER\anaconda3\envs\grnet_latest\lib\distutils\command\build_ext.py", line 474, in _build_extensions_serial
    self.build_extension(ext)
  File "C:\Users\USER\anaconda3\envs\grnet_latest\lib\site-packages\setuptools\command\build_ext.py", line 202, in build_extension
    _build_ext.build_extension(self, ext)
  File "C:\Users\USER\anaconda3\envs\grnet_latest\lib\distutils\command\build_ext.py", line 528, in build_extension
    objects = self.compiler.compile(sources,
  File "C:\Users\USER\anaconda3\envs\grnet_latest\lib\site-packages\torch\utils\cpp_extension.py", line 626, in win_wrap_ninja_compile
    _write_ninja_file_and_compile_objects(
  File "C:\Users\USER\anaconda3\envs\grnet_latest\lib\site-packages\torch\utils\cpp_extension.py", line 1233, in _write_ninja_file_and_compile_objects
    _run_ninja_build(
  File "C:\Users\USER\anaconda3\envs\grnet_latest\lib\site-packages\torch\utils\cpp_extension.py", line 1538, in _run_ninja_build
    raise RuntimeError(message) from e
RuntimeError: Error compiling objects for extension

How can I solve this problem?
Or can I get a prebuilt file for the extension (e.g. chamfer)?
Or is there another way to build the extensions without the present process?

error: an illegal memory access was encountered

Hi, when I run this code with my own dataset, it errors as follows:

Use config:
{'CONST': {'DEVICE': '0', 'NUM_WORKERS': 0, 'N_INPUT_POINTS': 2048},
 'DATASET': {'TEST_DATASET': 'Completion3D', 'TRAIN_DATASET': 'Completion3D'},
 'DATASETS': {'COMPLETION3D': {'CATEGORY_FILE_PATH': './datasets/Completion3D.json',
                               'COMPLETE_POINTS_PATH': '/media/yaogan_504/5E4006134005F297/linhemin/GRNet-master/datasets/Completion3D/%s/gt/%s/%s.h5',
                               'PARTIAL_POINTS_PATH': '/media/yaogan_504/5E4006134005F297/linhemin/GRNet-master/datasets/Completion3D/%s/partial/%s/%s.h5'},
              'KITTI': {'BOUNDING_BOX_FILE_PATH': '/home/SENSETIME/xiehaozhe/Datasets/KITTI/bboxes/%s.txt',
                        'CATEGORY_FILE_PATH': './datasets/KITTI.json',
                        'PARTIAL_POINTS_PATH': '/home/SENSETIME/xiehaozhe/Datasets/KITTI/cars/%s.pcd'},
              'SHAPENET': {'CATEGORY_FILE_PATH': './datasets/ShapeNet.json',
                           'COMPLETE_POINTS_PATH': '/home/SENSETIME/xiehaozhe/Datasets/ShapeNet/ShapeNetCompletion/%s/complete/%s/%s.pcd',
                           'N_POINTS': 16384,
                           'N_RENDERINGS': 8,
                           'PARTIAL_POINTS_PATH': '/home/SENSETIME/xiehaozhe/Datasets/ShapeNet/ShapeNetCompletion/%s/partial/%s/%s/%02d.pcd'}},
 'DIR': {'OUT_PATH': './output'},
 'MEMCACHED': {'CLIENT_CONFIG': '/mnt/lustre/share/memcached_client/client.conf',
               'ENABLED': False,
               'LIBRARY_PATH': '/mnt/lustre/share/pymc/py3',
               'SERVER_CONFIG': '/mnt/lustre/share/memcached_client/server_list.conf'},
 'NETWORK': {'GRIDDING_LOSS_ALPHAS': [0.1],
             'GRIDDING_LOSS_SCALES': [128],
             'N_SAMPLING_POINTS': 2048},
 'TEST': {'METRIC_NAME': 'ChamferDistance'},
 'TRAIN': {'BATCH_SIZE': 1,
           'BETAS': [0.9, 0.999],
           'GAMMA': 0.5,
           'LEARNING_RATE': 0.0001,
           'LR_MILESTONES': [50],
           'N_EPOCHS': 150,
           'SAVE_FREQ': 25,
           'WEIGHT_DECAY': 0}}
[INFO] 2021-01-04 11:01:13,525 Collecting files of Taxonomy [ID=all, Name=Uncategorized Test Set]
[INFO] 2021-01-04 11:01:13,529 Collecting files of Taxonomy [ID=02691156, Name=classic]
[INFO] 2021-01-04 11:01:13,532 Collecting files of Taxonomy [ID=02933112, Name=other]
[INFO] 2021-01-04 11:01:13,533 Complete collecting files of the dataset. Total files: 104
[INFO] 2021-01-04 11:01:13,534 Collecting files of Taxonomy [ID=all, Name=Uncategorized Test Set]
[INFO] 2021-01-04 11:01:13,535 Collecting files of Taxonomy [ID=02691156, Name=classic]
[INFO] 2021-01-04 11:01:13,537 Collecting files of Taxonomy [ID=02933112, Name=other]
[INFO] 2021-01-04 11:01:13,538 Complete collecting files of the dataset. Total files: 14
[DEBUG] 2021-01-04 11:01:14,724 Parameters in GRNet: 76707626.
[INFO] 2021-01-04 11:01:19,336 [Epoch 1/150][Batch 1/104] BatchTime = 0.697 (s) DataTime = 0.049 (s) Losses = ['535.0128', '533.6913']
[INFO] 2021-01-04 11:01:19,450 [Epoch 1/150][Batch 2/104] BatchTime = 0.114 (s) DataTime = 0.020 (s) Losses = ['581.4405', '579.0204']
[INFO] 2021-01-04 11:01:19,598 [Epoch 1/150][Batch 3/104] BatchTime = 0.147 (s) DataTime = 0.055 (s) Losses = ['758.7496', '758.8049']
[INFO] 2021-01-04 11:01:19,695 [Epoch 1/150][Batch 4/104] BatchTime = 0.098 (s) DataTime = 0.006 (s) Losses = ['695.8061', '692.7615']
[INFO] 2021-01-04 11:01:19,793 [Epoch 1/150][Batch 5/104] BatchTime = 0.097 (s) DataTime = 0.006 (s) Losses = ['554.6122', '544.2510']
[INFO] 2021-01-04 11:01:19,931 [Epoch 1/150][Batch 6/104] BatchTime = 0.138 (s) DataTime = 0.044 (s) Losses = ['539.5575', '530.9702']
[INFO] 2021-01-04 11:01:20,071 [Epoch 1/150][Batch 7/104] BatchTime = 0.141 (s) DataTime = 0.048 (s) Losses = ['556.2327', '553.9484']
[INFO] 2021-01-04 11:01:20,171 [Epoch 1/150][Batch 8/104] BatchTime = 0.099 (s) DataTime = 0.006 (s) Losses = ['682.0801', '675.5230']
THCudaCheck FAIL file=/pytorch/aten/src/THC/THCCachingHostAllocator.cpp line=278 error=77 : an illegal memory access was encountered

Have you met this issue before? Thanks in advance.

gridding_loss is not applied during training?

Thanks for your interesting work. I noticed that gridding_loss is not actually included in the training stage in your script, although it's claimed in your paper. So is your model optimised only with the chamfer distance for both the coarse and dense point clouds? Did I miss some details in your code?

The experimental results

After completing the training and saving the model, I did not find the completion results. How can I see them? Thank you.

Questions about the ShapeNet dataset normalisation and gridding operation.

Thank you for the great paper and code!!

I have a few questions:

  • I noticed that the precomputed ShapeNet dataset seems to be normalised in the range (-0.5, 0.5) rather than (-1, 1), and the point clouds after gridding only occupy a [32, 32, 32] subset of the [64, 64, 64] grid. Is that expected? When I tried to pass in point clouds normalised in the range (-1, 1), I got an error (I'm not sure it's related).
  • From the paper, I would have expected the grid vertices (the output of the Gridding layer) to have values in the range 0-1, but I see much larger values. Did I misunderstand the gridding operation?

Thank you very much

Errors in Building PyTorch Extensions

Hi, thanks for the amazing work.
But when I run python setup.py install --user,
errors occur as follows:

[Screenshot of the build error omitted.]

This error may be caused by the following calls in cubic_feature_sampling.cu: ptcloud.data_ptr(), cubic_features.data_ptr()

How can I solve this problem?

I use PyTorch 1.2.0, CUDA 10.1 and GCC 5.4.

Error in gridding_cuda_forward: invalid device function

Hello,
I just tested the gridding function, but it failed to run:

environment: pytorch 1.5.1, cuda 10.1, gcc 5.4, python 3.6.9

code:

gridding = Gridding(scale=64)
train_on_gpu = torch.cuda.is_available()
device = torch.device("cuda:0" if train_on_gpu else "cpu")
gridding = gridding.to(device)

partial_cloud = data['partial_cloud']                      # 2048, 3
partial_cloud = partial_cloud.unsqueeze(0)           # 1, 2048, 3
pt_features_64_l = gridding(partial_cloud).view(-1, 1, 64, 64, 64)

Error:
Error in gridding_cuda_forward: invalid device function

Error in preprocessing the ShapeNet Dataset

When I run python lmdb_serializer.py /path/to/shapenet/valid.lmdb /path/to/output/shapenet/val,
errors occur as follows:
[Screenshot of the error omitted.]
This error may be due to a module-name conflict. I guess utils/io.py conflicts with Python's built-in io module.
How can I solve this problem?

EMD module giving error

When I run the inference script with the following command: python3 test.py --gpu 0 --workdir ./ --model sparenet --weights SpareNet.pth --test_mode defaul
I get this error:

emd.forward(
AttributeError: module 'emd' has no attribute 'forward'

My environment is as follows:
CUDA Version: 11.6
Python 3.8.10
PyTorch 1.13.0+cu116

Cannot reproduce the results reported in the Paper (CD=2.723)

You need to train the whole network with Chamfer Distance. It reaches CD ~0.40 on ShapeNet.
Then, you need to fine-tune the network with Gridding Loss + Chamfer Distance on the Coarse Point Cloud.
Finally, you fine-tune the network with Chamfer Distance. Chamfer Distance is taken as a metric, therefore, you cannot get lower CD without using Chamfer Distance as a loss.

Originally posted by @hzxie in #3 (comment)

Cannot allocate memory

When I train with the ShapeNet dataset, I meet this error:

##########
  File "runner.py", line 76, in <module>
    main()
  File "runner.py", line 58, in main
    train_net(cfg)
  File "/export/chenyue/GRNet/core/train.py", line 107, in train_net
    for batch_idx, (taxonomy_ids, model_ids, data) in enumerate(train_data_loader):
  File "/opt/conda/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 291, in __iter__
    return _MultiProcessingDataLoaderIter(self)
  File "/opt/conda/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 737, in __init__
    w.start()
  File "/opt/conda/lib/python3.7/multiprocessing/process.py", line 112, in start
    self._popen = self._Popen(self)
  File "/opt/conda/lib/python3.7/multiprocessing/context.py", line 223, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "/opt/conda/lib/python3.7/multiprocessing/context.py", line 277, in _Popen
    return Popen(process_obj)
  File "/opt/conda/lib/python3.7/multiprocessing/popen_fork.py", line 20, in __init__
    self._launch(process_obj)
  File "/opt/conda/lib/python3.7/multiprocessing/popen_fork.py", line 70, in _launch
    self.pid = os.fork()
OSError: [Errno 12] Cannot allocate memory
##########

My server has 216 GB of memory.

[Open3D WARNING] Read PCD failed: unable to open file

Hello. When I tried to run python runner.py, there were several files in the dataset that couldn't be opened. I got the following warnings:

[Open3D WARNING] Read PCD failed: unable to open file: /home/FaceSegmentation/GRNet/datasets/ShapeNet/ShapeNetCompletion/train/partial/04379243/c7872212279d59eb540291e94bc8ddc3/01.pcd
[Open3D WARNING] Read PCD failed: unable to open file: /home/FaceSegmentation/GRNet/datasets/ShapeNet/ShapeNetCompletion/train/partial/04256520/f67714d13805df294b3c42e318f3affc/07.pcd
[Open3D WARNING] Read PCD failed: unable to open file: /home/FaceSegmentation/GRNet/datasets/ShapeNet/ShapeNetCompletion/train/complete/04379243/c7872212279d59eb540291e94bc8ddc3.pcd
[Open3D WARNING] Read PCD failed: unable to open file: /home/FaceSegmentation/GRNet/datasets/ShapeNet/ShapeNetCompletion/train/complete/04256520/f67714d13805df294b3c42e318f3affc.pcd
[Open3D WARNING] Read PCD failed: unable to open file: /home/FaceSegmentation/GRNet/datasets/ShapeNet/ShapeNetCompletion/train/partial/04256520/5f5c4e66f07fc2695c0be177939e290/04.pcd
[Open3D WARNING] Read PCD failed: unable to open file: /home/FaceSegmentation/GRNet/datasets/ShapeNet/ShapeNetCompletion/train/complete/04256520/5f5c4e66f07fc2695c0be177939e290.pcd
[Open3D WARNING] Read PCD failed: unable to open file: /home/FaceSegmentation/GRNet/datasets/ShapeNet/ShapeNetCompletion/train/partial/02958343/81c52d54f9719736ce27281f3b76d1f5/03.pcd
[Open3D WARNING] Read PCD failed: unable to open file: /home/FaceSegmentation/GRNet/datasets/ShapeNet/ShapeNetCompletion/train/partial/02691156/e70bd95ab764bc1b3465be15e1aa6a0c/04.pcd
[Open3D WARNING] Read PCD failed: unable to open file: /home/FaceSegmentation/GRNet/datasets/ShapeNet/ShapeNetCompletion/train/complete/02958343/81c52d54f9719736ce27281f3b76d1f5.pcd
[Open3D WARNING] Read PCD failed: unable to open file: /home/FaceSegmentation/GRNet/datasets/ShapeNet/ShapeNetCompletion/train/complete/02691156/e70bd95ab764bc1b3465be15e1aa6a0c.pcd
[Open3D WARNING] Read PCD failed: unable to open file: /home/FaceSegmentation/GRNet/datasets/ShapeNet/ShapeNetCompletion/train/partial/04256520/11be630221243013c087ef7d7cf00301/01.pcd
[Open3D WARNING] Read PCD failed: unable to open file: /home/FaceSegmentation/GRNet/datasets/ShapeNet/ShapeNetCompletion/train/complete/04256520/11be630221243013c087ef7d7cf00301.pcd
[Open3D WARNING] Read PCD failed: unable to open file: /home/FaceSegmentation/GRNet/datasets/ShapeNet/ShapeNetCompletion/train/partial/02958343/3d681ed2a7eef0df28f46021c78a3723/06.pcd
[Open3D WARNING] Read PCD failed: unable to open file: /home/FaceSegmentation/GRNet/datasets/ShapeNet/ShapeNetCompletion/train/complete/02958343/3d681ed2a7eef0df28f46021c78a3723.pcd
[Open3D WARNING] Read PCD failed: unable to open file: /home/FaceSegmentation/GRNet/datasets/ShapeNet/ShapeNetCompletion/train/partial/04379243/d03256544371f1eafa6e1fd63f4a1c35/01.pcd
[Open3D WARNING] Read PCD failed: unable to open file: /home/FaceSegmentation/GRNet/datasets/ShapeNet/ShapeNetCompletion/train/complete/04379243/d03256544371f1eafa6e1fd63f4a1c35.pcd
[Open3D WARNING] Read PCD failed: unable to open file: /home/FaceSegmentation/GRNet/datasets/ShapeNet/ShapeNetCompletion/train/partial/03636649/1ce6fb24e634d5962a510b8f97c658e/00.pcd
[Open3D WARNING] Read PCD failed: unable to open file: /home/FaceSegmentation/GRNet/datasets/ShapeNet/ShapeNetCompletion/train/partial/04256520/48228cf2207c7af5892eaa162d1e35d/01.pcd
[Open3D WARNING] Read PCD failed: unable to open file: /home/FaceSegmentation/GRNet/datasets/ShapeNet/ShapeNetCompletion/train/partial/02933112/9dcc7002210e6660824662341ce2b233/06.pcd
[Open3D WARNING] Read PCD failed: unable to open file: /home/FaceSegmentation/GRNet/datasets/ShapeNet/ShapeNetCompletion/train/complete/04256520/48228cf2207c7af5892eaa162d1e35d.pcd
...

and

File "/home/rmclab1003/FaceSegmentation/GRNet/models/grnet.py", line 138, in forward
sparse_cloud = self.point_sampling(sparse_cloud, partial_cloud)
File "/home/rmclab1003/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in call_impl
result = self.forward(*input, **kwargs)
File "/home/rmclab1003/FaceSegmentation/GRNet/models/grnet.py", line 30, in forward
rnd_idx = torch.cat([torch.randint(0, n_pts, (self.n_points, ))])
RuntimeError: random
expects 'from' to be less than 'to', but got from=0 >= to=0

Do you happen to know why? Please give me some pointers, thanks!
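
The RuntimeError follows directly from the failed reads: an unreadable .pcd yields an empty cloud, so n_pts is 0 and torch.randint(0, 0, ...) raises. A hedged guard sketch (restoring the missing files is the real fix):

# Hedged sketch: fail fast on empty clouds produced by unreadable .pcd files.
import torch

def random_sample(cloud, n_points):
    """cloud: (n_pts, 3) tensor -> (n_points, 3) random resample."""
    n_pts = cloud.size(0)
    if n_pts == 0:
        raise ValueError('empty point cloud: the .pcd file probably failed to load')
    return cloud[torch.randint(0, n_pts, (n_points,))]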

Error in 'pip install -r requirements.txt'

When I ran pip install -r requirements.txt, this error came up:


Downloading argparse-1.4.0-py2.py3-none-any.whl (23 kB)
Collecting easydict
Downloading easydict-1.10.tar.gz (6.4 kB)
Preparing metadata (setup.py) ... done
Collecting h5py
Downloading h5py-3.8.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (4.7 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 4.7/4.7 MB 30.9 MB/s eta 0:00:00
Collecting matplotlib==3.0.3
Downloading matplotlib-3.0.3.tar.gz (36.6 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 36.6/36.6 MB 28.8 MB/s eta 0:00:00
Preparing metadata (setup.py) ... error
error: subprocess-exited-with-error

× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [50 lines of output]
Traceback (most recent call last):
  File "<string>", line 2, in <module>
  File "<string>", line 34, in <module>
  File "/tmp/pip-install-iys51m99/matplotlib_75d929524e954c3dba9e69af5af6bfc2/setup.py", line 225, in <module>
    msg = pkg.install_help_msg()
  File "/tmp/pip-install-iys51m99/matplotlib_75d929524e954c3dba9e69af5af6bfc2/setupext.py", line 650, in install_help_msg
    release = platform.linux_distribution()[0].lower()
AttributeError: module 'platform' has no attribute 'linux_distribution'
============================================================================
Edit setup.cfg to change the build options

  BUILDING MATPLOTLIB
              matplotlib: yes [3.0.3]
                  python: yes [3.9.13 (main, Oct 13 2022, 21:15:33)  [GCC
                          11.2.0]]
                platform: yes [linux]
  
  REQUIRED DEPENDENCIES AND EXTENSIONS
                   numpy: yes [version 1.24.2]
        install_requires: yes [handled by setuptools]
                  libagg: yes [pkg-config information for 'libagg' could not
                          be found. Using local copy.]
                freetype: no  [The C/C++ header for freetype2 (ft2build.h)
                          could not be found.  You may need to install the
                          development package.]
                     png: no  [The C/C++ header for libpng (png.h) could not
                          be found.  You may need to install the development
                          package.]
                   qhull: yes [pkg-config information for 'libqhull' could not
                          be found. Using local copy.]
  
  OPTIONAL SUBPACKAGES
             sample_data: yes [installing]
                toolkits: yes [installing]
                   tests: no  [skipping due to configuration]
          toolkits_tests: no  [skipping due to configuration]
  
  OPTIONAL BACKEND EXTENSIONS
                     agg: yes [installing]
                   tkagg: yes [installing; run-time loading from Python Tcl /
                          Tk]
                  macosx: no  [Mac OS-X only]
               windowing: no  [Microsoft Windows only]
  
  OPTIONAL PACKAGE DATA
                    dlls: no  [skipping due to configuration]
  
  ============================================================================
                          * The following required packages can not be built:
                          * freetype, png
  [end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed

× Encountered error while generating package metadata.
╰─> See above for output.

note: This is an issue with the package mentioned above, not pip.
hint: See above for details.

So I searched for a solution and found 'pip install -r requirements.txt --use-deprecated=legacy-resolver'.
But it didn't work, and I got this error:

Collecting argparse
Using cached argparse-1.4.0-py2.py3-none-any.whl (23 kB)
Collecting easydict
Using cached easydict-1.10.tar.gz (6.4 kB)
Preparing metadata (setup.py) ... done
Collecting h5py
Using cached h5py-3.8.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (4.7 MB)
Collecting matplotlib==3.0.3
Using cached matplotlib-3.0.3.tar.gz (36.6 MB)
Preparing metadata (setup.py) ... error
error: subprocess-exited-with-error

× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [50 lines of output]
Traceback (most recent call last):
  File "<string>", line 2, in <module>
  File "<string>", line 34, in <module>
  File "/tmp/pip-install-t3yrupt7/matplotlib/setup.py", line 225, in <module>
    msg = pkg.install_help_msg()
  File "/tmp/pip-install-t3yrupt7/matplotlib/setupext.py", line 650, in install_help_msg
    release = platform.linux_distribution()[0].lower()
AttributeError: module 'platform' has no attribute 'linux_distribution'
============================================================================
Edit setup.cfg to change the build options

  BUILDING MATPLOTLIB
              matplotlib: yes [3.0.3]
                  python: yes [3.9.13 (main, Oct 13 2022, 21:15:33)  [GCC
                          11.2.0]]
                platform: yes [linux]
  
  REQUIRED DEPENDENCIES AND EXTENSIONS
                   numpy: yes [version 1.24.2]
        install_requires: yes [handled by setuptools]
                  libagg: yes [pkg-config information for 'libagg' could not
                          be found. Using local copy.]
                freetype: no  [The C/C++ header for freetype2 (ft2build.h)
                          could not be found.  You may need to install the
                          development package.]
                     png: no  [The C/C++ header for libpng (png.h) could not
                          be found.  You may need to install the development
                          package.]
                   qhull: yes [pkg-config information for 'libqhull' could not
                          be found. Using local copy.]
  
  OPTIONAL SUBPACKAGES
             sample_data: yes [installing]
                toolkits: yes [installing]
                   tests: no  [skipping due to configuration]
          toolkits_tests: no  [skipping due to configuration]
  
  OPTIONAL BACKEND EXTENSIONS
                     agg: yes [installing]
                   tkagg: yes [installing; run-time loading from Python Tcl /
                          Tk]
                  macosx: no  [Mac OS-X only]
               windowing: no  [Microsoft Windows only]
  
  OPTIONAL PACKAGE DATA
                    dlls: no  [skipping due to configuration]
  
  ============================================================================
                          * The following required packages can not be built:
                          * freetype, png
  [end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed

× Encountered error while generating package metadata.
╰─> matplotlib

note: This is an issue with the package mentioned above, not pip.
hint: See above for details.

I also tried 'pip install -r requirements.txt --use-deprecated=backtrack-on-build-failures', but it didn't work either.
How can I solve it? Thank you!

copy_if failed to synchronize: an illegal memory access was encountered

in this function:

class Gridding(torch.nn.Module):
    def __init__(self, scale=1):
        super(Gridding, self).__init__()
        self.scale = scale // 2

    def forward(self, ptcloud):
        ptcloud = ptcloud * self.scale
        _ptcloud = torch.split(ptcloud, 1, dim=0)
        grids = []
        for p in _ptcloud:
            non_zeros = torch.sum(p, dim=2).ne(0)
            p = p[non_zeros].unsqueeze(dim=0)

it always raises "copy_if failed to synchronize: an illegal memory access was encountered".
Any suggestions?

Can't successfully compile the gridding

I can compile the gridding part with CUDA 10.0 but not with CUDA 10.1, and CUDA 10.0 doesn't work with PyTorch >= 1.4. Could you tell me the exact versions of your PyTorch and CUDA?

error: no instance of function template "c10::visit"

Hi,
When I run setup.py in ./extensions/chamfer_dist/, it errors as follows:

"/home/ynn/anaconda3/lib/python3.5/site-packages/torch/include/torch/csrc/api/include/torch/enum.h(164): error: no instance of function template "c10::visit" matches the argument list argument types are: (torch::enumtype::_compute_enum_name, torch::nn::KLDivLossOptions::reduction_t) detected during: instantiation of "std::string torch::enumtype::get_enum_name(V) [with V=torch::nn::KLDivLossOptions::reduction_t]" (176): here instantiation of "at::Reduction::Reduction torch::enumtype::reduction_get_enum(V) [with V=torch::nn::KLDivLossOptions::reduction_t]" /home/ynn/anaconda3/lib/python3.5/site-packages/torch/include/torch/csrc/api/include/torch/nn/functional/loss.h(49): here /home/ynn/anaconda3/lib/python3.5/site-packages/torch/include/torch/csrc/api/include/torch/enum.h(164): error: no instance of function template "c10::visit" matches the argument list argument types are: (torch::enumtype::_compute_enum_name, torch::nn::MultiLabelSoftMarginLossOptions::reduction_t) detected during instantiation of "std::string torch::enumtype::get_enum_name(V) [with V=torch::nn::MultiLabelSoftMarginLossOptions::reduction_t]" /home/ynn/anaconda3/lib/python3.5/site-packages/torch/include/torch/csrc/api/include/torch/nn/functional/loss.h(336): here /home/ynn/anaconda3/lib/python3.5/site-packages/torch/include/torch/csrc/api/include/torch/enum.h(164): error: no instance of function template "c10::visit" matches the argument list argument types are: (torch::enumtype::_compute_enum_name, torch::nn::functional::PadFuncOptions::mode_t) detected during instantiation of "std::string torch::enumtype::get_enum_name(V) [with V=torch::nn::functional::PadFuncOptions::mode_t]" /home/ynn/anaconda3/lib/python3.5/site-packages/torch/include/torch/csrc/api/include/torch/nn/functional/padding.h(40): here /home/ynn/anaconda3/lib/python3.5/site-packages/torch/include/torch/csrc/api/include/torch/enum.h(164): error: no instance of function template "c10::visit" matches the argument list argument types are: (torch::enumtype::_compute_enum_name, torch::nn::functional::InterpolateFuncOptions::mode_t) detected during instantiation of "std::string torch::enumtype::get_enum_name(V) [with V=torch::nn::functional::InterpolateFuncOptions::mode_t]" /home/ynn/anaconda3/lib/python3.5/site-packages/torch/include/torch/csrc/api/include/torch/nn/functional/upsampling.h(103): here /home/ynn/anaconda3/lib/python3.5/site-packages/torch/include/torch/csrc/api/include/torch/enum.h(164): error: no instance of function template "c10::visit" matches the argument list argument types are: (torch::enumtype::_compute_enum_name, torch::nn::detail::conv_padding_mode_t) detected during: instantiation of "std::string torch::enumtype::get_enum_name(V) [with V=torch::nn::detail::conv_padding_mode_t]" /home/ynn/anaconda3/lib/python3.5/site-packages/torch/include/torch/csrc/api/include/torch/nn/modules/conv.h(95): here instantiation of "void torch::nn::ConvNdImpl<D, Derived>::pretty_print(std::ostream &) const [with D=1UL, Derived=torch::nn::Conv1dImpl]"
6 errors detected in the compilation of "/tmp/tmpxft_00001b8a_00000000-6_chamfer.cpp1.ii". error: command '/usr/local/cuda/bin/nvcc' failed with exit status 1 "

Have you met this problem before? How can I solve it? Thanks in advance.

KeyError: 'gtcloud'

Hi,
When I run python runner.py --test --weights=../GRNet-KITTI.pth, I get KeyError: 'gtcloud' in core/test.py.

How can I fix it? Thanks.

How can I turn off these debug logs?

Hi @hzxie, thanks for your great contribution. When I'm training your model, I see these duplicated prints:

Format = auto
Extension = pcd
Format = auto
Extension = pcd
Format = auto
Extension = pcd
Format = auto
Extension = pcd
Format = auto
Extension = pcd

I'm new to this. Could you please tell me how to turn it off?
