This is not an official nuTonomy codebase, but it can be used to match the published PointPillars results.
WARNING: This code is not being actively maintained. This code can be used to reproduce the results in the first version of the paper, https://arxiv.org/abs/1812.05784v1. For an actively maintained repository that can also reproduce PointPillars results on nuScenes, we recommend using SECOND. We are not the owners of the repository, but we have worked with the author and endorse his code.
Then use pip for the packages missing from Anaconda.
pip install --upgrade pip
pip install fire tensorboardX
Finally, install SparseConvNet. This is not required for PointPillars, but the general SECOND code base expects it to be correctly configured. However, I suggest installing spconv instead of SparseConvNet.
git clone [email protected]:facebookresearch/SparseConvNet.git
cd SparseConvNet/
bash build.sh
# NOTE: if bash build.sh fails, try bash develop.sh instead
Additionally, you may need to install Boost geometry:
sudo apt-get install libboost-all-dev
3. Set up CUDA for numba
You need to add the following environment variables for numba to ~/.bashrc:
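The variables in question are the NUMBAPRO_* paths that also appear in the warning logs later in this thread. A small sketch (helper names are mine) that renders the export lines to append to ~/.bashrc; adjust the paths to your own CUDA installation:

```python
# The three NUMBAPRO_* variables referenced by this setup step; the paths
# below are the common CUDA locations seen in the logs in this thread --
# adjust them to your own installation.
NUMBA_CUDA_ENV = {
    "NUMBAPRO_CUDA_DRIVER": "/usr/lib/x86_64-linux-gnu/libcuda.so",
    "NUMBAPRO_NVVM": "/usr/local/cuda/nvvm/lib64/libnvvm.so",
    "NUMBAPRO_LIBDEVICE": "/usr/local/cuda/nvvm/libdevice",
}

def bashrc_exports(env=NUMBA_CUDA_ENV):
    """Render the export lines to append to ~/.bashrc."""
    return ["export %s=%s" % (k, v) for k, v in env.items()]

for line in bashrc_exports():
    print(line)
```

Note that newer numba releases warn that the NUMBAPRO_ prefix is deprecated (you can see exactly that warning in the logs below), so on recent numba these variables are ignored.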
Hi, I have a stupid question.
Can anyone explain the input shapes to pfe.onnx and rpn.onnx? I have difficulty understanding how to apply them to my point clouds. What I see is that I somehow need to create pillars and shape x, y, z into individual [1, 1, 12000, 100] tensors, and I don't see how that corresponds to the described (D, P, N) tensor with D = 9 features, P pillars, and N points per pillar. Furthermore, what is the output of pfe.onnx and, most importantly, the input and output of rpn.onnx? I am just not managing to relate the given network shapes to the expected bounding boxes.
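For what it's worth, the paper's (D, P, N) tensor and the per-feature [1, 1, 12000, 100] ONNX inputs can be related like this. This is only an illustrative numpy sketch under the assumption that pfe.onnx takes each of the D = 9 features as its own [1, 1, P, N] tensor; it is not the exporter's actual code:

```python
import numpy as np

# Paper notation: D = 9 features per point, P = 12000 pillars,
# N = 100 points per pillar.
D, P, N = 9, 12000, 100

# Assumption: the exported pfe.onnx takes each feature (x, y, z, intensity,
# the cluster/center offsets, ...) as a separate [1, 1, P, N] input, so the
# paper's (D, P, N) tensor is just those D slices stacked on a channel axis.
per_feature = [np.zeros((1, 1, P, N), dtype=np.float32) for _ in range(D)]
stacked = np.concatenate(per_feature, axis=1)
print(stacked.shape)
```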
I am currently working on converting a trained PointPillars nuScenes model (latest SECOND code) to ONNX. However, there is an error during onnx_module_generate:
size mismatch for rpn.conv_cls.weight: copying a param with shape torch.Size([200, 384, 1, 1]) from checkpoint, the shape in current model is torch.Size([2, 384, 1, 1]).
size mismatch for rpn.conv_cls.bias: copying a param with shape torch.Size([200]) from checkpoint, the shape in current model is torch.Size([2]).
size mismatch for rpn.conv_box.weight: copying a param with shape torch.Size([140, 384, 1, 1]) from checkpoint, the shape in current model is torch.Size([14, 384, 1, 1]).
size mismatch for rpn.conv_box.bias: copying a param with shape torch.Size([140]) from checkpoint, the shape in current model is torch.Size([14]).
size mismatch for rpn.conv_dir_cls.weight: copying a param with shape torch.Size([40, 384, 1, 1]) from checkpoint, the shape in current model is torch.Size([4, 384, 1, 1]).
size mismatch for rpn.conv_dir_cls.bias: copying a param with shape torch.Size([40]) from checkpoint, the shape in current model is torch.Size([4]).
Any suggestions or overall guidelines would be appreciated!
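Not an authoritative answer, but the numbers in the mismatch are consistent with a checkpoint trained under a different anchor/class configuration than the model being rebuilt for export. With SECOND's usual head layout (box code size 7, two direction bins; an assumption, not read from this checkpoint), the channel counts factor like this:

```python
# Channel counts of the RPN heads as a function of the anchor/class config.
# box_code_size=7 and two direction bins match SECOND's usual defaults
# (an assumption, not taken from this particular checkpoint).
def head_channels(num_anchors_per_loc, num_classes, box_code_size=7):
    return {
        "conv_cls": num_anchors_per_loc * num_classes,
        "conv_box": num_anchors_per_loc * box_code_size,
        "conv_dir_cls": num_anchors_per_loc * 2,
    }

# The current (single-class, 2 anchors/location) model: 2 / 14 / 4 channels.
print(head_channels(2, 1))
# A 10-class, 20 anchors/location setup reproduces the checkpoint's
# 200 / 140 / 40.
print(head_channels(20, 10))
```

So the config used when rebuilding the model for ONNX export likely needs the same class and anchor-generator settings as the one used for training.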
$ python create_data.py create_kitti_info_file --data_path=KITTI_DATASET_ROOT
/home/mohsin/anaconda3/envs/pointpillars/lib/python3.6/site-packages/numba/cuda/envvars.py:17: NumbaWarning:
Environment variables with the 'NUMBAPRO' prefix are deprecated and consequently ignored, found use of NUMBAPRO_CUDA_DRIVER=/usr/lib/x86_64-linux-gnu/libcuda.so.
For more information about alternatives visit: ('https://numba.pydata.org/numba-doc/latest/cuda/overview.html', '#cudatoolkit-lookup')
warnings.warn(errors.NumbaWarning(msg))
/home/mohsin/anaconda3/envs/pointpillars/lib/python3.6/site-packages/numba/cuda/envvars.py:17: NumbaWarning:
Environment variables with the 'NUMBAPRO' prefix are deprecated and consequently ignored, found use of NUMBAPRO_NVVM=/usr/local/cuda/nvvm/lib64/libnvvm.so.
For more information about alternatives visit: ('https://numba.pydata.org/numba-doc/latest/cuda/overview.html', '#cudatoolkit-lookup')
warnings.warn(errors.NumbaWarning(msg))
/home/mohsin/anaconda3/envs/pointpillars/lib/python3.6/site-packages/numba/cuda/envvars.py:17: NumbaWarning:
Environment variables with the 'NUMBAPRO' prefix are deprecated and consequently ignored, found use of NUMBAPRO_LIBDEVICE=/usr/local/cuda/nvvm/libdevice.
For more information about alternatives visit: ('https://numba.pydata.org/numba-doc/latest/cuda/overview.html', '#cudatoolkit-lookup')
warnings.warn(errors.NumbaWarning(msg))
Traceback (most recent call last):
File "/home/mohsin/anaconda3/envs/pointpillars/lib/python3.6/site-packages/numba/cuda/cudadrv/driver.py", line 237, in initialize
self.cuInit(0)
File "/home/mohsin/anaconda3/envs/pointpillars/lib/python3.6/site-packages/numba/cuda/cudadrv/driver.py", line 300, in safe_cuda_api_call
self._check_error(fname, retcode)
File "/home/mohsin/anaconda3/envs/pointpillars/lib/python3.6/site-packages/numba/cuda/cudadrv/driver.py", line 335, in _check_error
raise CudaAPIError(retcode, msg)
numba.cuda.cudadrv.driver.CudaAPIError: [999] Call to cuInit results in CUDA_ERROR_UNKNOWN
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "create_data.py", line 9, in <module>
from second.core import box_np_ops
File "/mnt/d/Downloads/POINTPILLARS/nutonomy_pointpillars/second/core/box_np_ops.py", line 7, in <module>
from second.core.non_max_suppression.nms_gpu import rotate_iou_gpu_eval
File "/mnt/d/Downloads/POINTPILLARS/nutonomy_pointpillars/second/core/non_max_suppression/__init__.py", line 2, in <module>
from second.core.non_max_suppression.nms_gpu import (nms_gpu, rotate_iou_gpu,
File "/mnt/d/Downloads/POINTPILLARS/nutonomy_pointpillars/second/core/non_max_suppression/nms_gpu.py", line 36, in <module>
@cuda.jit('(int64, float32, float32[:, :], uint64[:])')
File "/home/mohsin/anaconda3/envs/pointpillars/lib/python3.6/site-packages/numba/cuda/decorators.py", line 95, in kernel_jit
return Dispatcher(func, [func_or_sig], targetoptions=targetoptions)
File "/home/mohsin/anaconda3/envs/pointpillars/lib/python3.6/site-packages/numba/cuda/compiler.py", line 899, in __init__
self.compile(sigs[0])
File "/home/mohsin/anaconda3/envs/pointpillars/lib/python3.6/site-packages/numba/cuda/compiler.py", line 1102, in compile
kernel.bind()
File "/home/mohsin/anaconda3/envs/pointpillars/lib/python3.6/site-packages/numba/cuda/compiler.py", line 590, in bind
self._func.get()
File "/home/mohsin/anaconda3/envs/pointpillars/lib/python3.6/site-packages/numba/cuda/compiler.py", line 433, in get
cuctx = get_context()
File "/home/mohsin/anaconda3/envs/pointpillars/lib/python3.6/site-packages/numba/cuda/cudadrv/devices.py", line 212, in get_context
return _runtime.get_or_create_context(devnum)
File "/home/mohsin/anaconda3/envs/pointpillars/lib/python3.6/site-packages/numba/cuda/cudadrv/devices.py", line 138, in get_or_create_context
return self._get_or_create_context_uncached(devnum)
File "/home/mohsin/anaconda3/envs/pointpillars/lib/python3.6/site-packages/numba/cuda/cudadrv/devices.py", line 151, in _get_or_create_context_uncached
with driver.get_active_context() as ac:
File "/home/mohsin/anaconda3/envs/pointpillars/lib/python3.6/site-packages/numba/cuda/cudadrv/driver.py", line 393, in __enter__
driver.cuCtxGetCurrent(byref(hctx))
File "/home/mohsin/anaconda3/envs/pointpillars/lib/python3.6/site-packages/numba/cuda/cudadrv/driver.py", line 280, in __getattr__
self.initialize()
File "/home/mohsin/anaconda3/envs/pointpillars/lib/python3.6/site-packages/numba/cuda/cudadrv/driver.py", line 240, in initialize
raise CudaSupportError("Error at driver init: \n%s:" % e)
numba.cuda.cudadrv.error.CudaSupportError: Error at driver init:
[999] Call to cuInit results in CUDA_ERROR_UNKNOWN:
I have completed the setup mentioned in the README.
I downloaded the mentioned KITTI datasets and placed them as per the directory structure specified:
└── KITTI_DATASET_ROOT
├── training <-- 7481 train data
| ├── image_2 <-- for visualization
| ├── calib
| ├── label_2
| ├── velodyne
| └── velodyne_reduced <-- empty directory
└── testing <-- 7518 test data
├── image_2 <-- for visualization
├── calib
├── velodyne
└── velodyne_reduced <-- empty directory
I placed this KITTI_DATASET_ROOT inside nutonomy_pointpillars/second/data/ImageSets/.
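One way to catch layout mistakes early is a quick check against the tree above (note that KITTI_DATASET_ROOT is normally the path passed via --data_path, not a folder under ImageSets/). A hypothetical helper, not part of the repo:

```python
from pathlib import Path

# Expected sub-directories from the tree above (label_2 exists only for
# training).
EXPECTED = [
    "training/image_2", "training/calib", "training/label_2",
    "training/velodyne", "training/velodyne_reduced",
    "testing/image_2", "testing/calib",
    "testing/velodyne", "testing/velodyne_reduced",
]

def missing_dirs(root):
    """Return the expected directories that are absent under root."""
    root = Path(root)
    return [d for d in EXPECTED if not (root / d).is_dir()]

print(missing_dirs("KITTI_DATASET_ROOT"))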
$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2021 NVIDIA Corporation
Built on Sun_Aug_15_21:14:11_PDT_2021
Cuda compilation tools, release 11.4, V11.4.120
Build cuda_11.4.r11.4/compiler.30300941_0
CUDA information when the $ numba -s command is run:
CUDA Information
CUDA Device Initialized : False
CUDA Driver Version : ?
CUDA Detect Output:
None
CUDA Libraries Test Output:
None
Traceback (most recent call last):
File "/media/buaa/MyPassport/Deeplidar/nutonomy_pointpillars/second/core/non_max_suppression/nms_cpu.py", line 11, in <module>
from second.core.non_max_suppression.nms import (
ImportError: /media/buaa/MyPassport/Deeplidar/nutonomy_pointpillars/second/core/non_max_suppression/nms.so: cannot open shared object file: No such file or directory
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "create_data.py", line 12, in <module>
from second.core import box_np_ops
File "/media/buaa/MyPassport/Deeplidar/nutonomy_pointpillars/second/core/box_np_ops.py", line 7, in <module>
from second.core.non_max_suppression.nms_gpu import rotate_iou_gpu_eval
File "/media/buaa/MyPassport/Deeplidar/nutonomy_pointpillars/second/core/non_max_suppression/__init__.py", line 1, in <module>
from second.core.non_max_suppression.nms_cpu import nms_jit, soft_nms_jit
File "/media/buaa/MyPassport/Deeplidar/nutonomy_pointpillars/second/core/non_max_suppression/nms_cpu.py", line 19, in <module>
cuda=True)
File "/media/buaa/MyPassport/Deeplidar/nutonomy_pointpillars/second/utils/buildtools/pybind11_build.py", line 96, in load_pb11
cmds.append(Nvcc(s, out(s), arch))
File "/media/buaa/MyPassport/Deeplidar/nutonomy_pointpillars/second/utils/buildtools/command.py", line 123, in __init__
raise ValueError("you must specify arch if use cuda.")
ValueError: you must specify arch if use cuda.
I met this problem when I ran create_data.py on a Xavier. Can you help me solve it?
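The buildtools wrapper refuses to guess the GPU architecture, so nvcc needs an explicit arch string. A sketch of how such a string is formed from a device's compute capability (Jetson Xavier is compute capability 7.2, hence sm_72); the helper name is mine, not the repo's:

```python
def nvcc_arch(capability):
    """Map a CUDA compute capability tuple to nvcc's -arch value."""
    major, minor = capability
    return "sm_%d%d" % (major, minor)

# Jetson Xavier has compute capability 7.2:
print(nvcc_arch((7, 2)))
```

Supplying that value wherever the Nvcc(...) command is constructed (or hard-coding the arch in command.py) is one way people have worked around this; adapt it to your own device.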
And when I tried training, the error below occurred.
middle_class_name PointPillarsScatter
num_trainable parameters: 74
{'Cyclist': 5, 'Pedestrian': 5, 'Car': 5}
[-1]
load 14357 Car database infos
load 2207 Pedestrian database infos
load 734 Cyclist database infos
load 1297 Van database infos
load 56 Person_sitting database infos
load 488 Truck database infos
load 224 Tram database infos
load 337 Misc database infos
After filter database:
load 10520 Car database infos
load 2066 Pedestrian database infos
load 580 Cyclist database infos
load 826 Van database infos
load 53 Person_sitting database infos
load 321 Truck database infos
load 199 Tram database infos
load 259 Misc database infos
remain number of infos: 3712
remain number of infos: 3769
WORKER 0 seed: 1592207469
WORKER 1 seed: 1592207470
/home/moon/nutonomy_pointpillars/second/core/geometry.py:97: NumbaWarning:
Compilation is falling back to object mode WITH looplifting enabled because Function "points_in_convex_polygon_3d_jit" failed type inference due to: Invalid use of type(CPUDispatcher(<function surface_equ_3d_jit at 0x7f6203c33158>)) with parameters (array(float64, 4d, A))
parameterized
[1] During: resolving callee type: type(CPUDispatcher(<function surface_equ_3d_jit at 0x7f6203c33158>))
[2] During: typing of call at /home/moon/nutonomy_pointpillars/second/core/geometry.py (118)
File "core/geometry.py", line 118:
def points_in_convex_polygon_3d_jit(points,
@numba.jit(nopython=False)
/home/moon/nutonomy_pointpillars/second/core/geometry.py:97: NumbaWarning:
Compilation is falling back to object mode WITH looplifting enabled because Function "points_in_convex_polygon_3d_jit" failed type inference due to: Invalid use of type(CPUDispatcher(<function surface_equ_3d_jit at 0x7f6203c33158>)) with parameters (array(float64, 4d, A))
parameterized
[1] During: resolving callee type: type(CPUDispatcher(<function surface_equ_3d_jit at 0x7f6203c33158>))
[2] During: typing of call at /home/moon/nutonomy_pointpillars/second/core/geometry.py (118)
File "core/geometry.py", line 118:
def points_in_convex_polygon_3d_jit(points,
@numba.jit(nopython=False)
/home/moon/nutonomy_pointpillars/second/core/geometry.py:97: NumbaWarning:
Compilation is falling back to object mode WITHOUT looplifting enabled because Function "points_in_convex_polygon_3d_jit" failed type inference due to: cannot determine Numba type of <class 'numba.dispatcher.LiftedLoop'>
File "core/geometry.py", line 123:
def points_in_convex_polygon_3d_jit(points,
@numba.jit(nopython=False)
/home/moon/nutonomy_pointpillars/second/core/geometry.py:97: NumbaWarning:
Compilation is falling back to object mode WITHOUT looplifting enabled because Function "points_in_convex_polygon_3d_jit" failed type inference due to: cannot determine Numba type of <class 'numba.dispatcher.LiftedLoop'>
File "core/geometry.py", line 123:
def points_in_convex_polygon_3d_jit(points,
@numba.jit(nopython=False)
/home/moon/anaconda3/envs/pointpillars/lib/python3.6/site-packages/numba/object_mode_passes.py:178: NumbaWarning: Function "points_in_convex_polygon_3d_jit" was compiled in object mode without forceobj=True, but has lifted loops.
File "core/geometry.py", line 113:
def points_in_convex_polygon_3d_jit(points,
state.func_ir.loc))
/home/moon/anaconda3/envs/pointpillars/lib/python3.6/site-packages/numba/object_mode_passes.py:188: NumbaDeprecationWarning:
Fall-back from the nopython compilation path to the object mode compilation path has been detected, this is deprecated behaviour.
File "core/geometry.py", line 113:
def points_in_convex_polygon_3d_jit(points,
state.func_ir.loc))
/home/moon/anaconda3/envs/pointpillars/lib/python3.6/site-packages/numba/object_mode_passes.py:178: NumbaWarning: Function "points_in_convex_polygon_3d_jit" was compiled in object mode without forceobj=True, but has lifted loops.
File "core/geometry.py", line 113:
def points_in_convex_polygon_3d_jit(points,
state.func_ir.loc))
/home/moon/anaconda3/envs/pointpillars/lib/python3.6/site-packages/numba/object_mode_passes.py:188: NumbaDeprecationWarning:
Fall-back from the nopython compilation path to the object mode compilation path has been detected, this is deprecated behaviour.
File "core/geometry.py", line 113:
def points_in_convex_polygon_3d_jit(points,
state.func_ir.loc))
/home/moon/nutonomy_pointpillars/second/core/preprocess.py:472: NumbaPerformanceWarning: '@' is faster on contiguous arrays, called on (array(float32, 2d, A), array(float32, 2d, C))
points[i:i + 1, :3] = points[i:i + 1, :3] @ rot_mat_T[j]
/home/moon/nutonomy_pointpillars/second/core/preprocess.py:472: NumbaPerformanceWarning: '@' is faster on contiguous arrays, called on (array(float32, 2d, A), array(float32, 2d, C))
points[i:i + 1, :3] = points[i:i + 1, :3] @ rot_mat_T[j]
/home/moon/anaconda3/envs/pointpillars/lib/python3.6/site-packages/numba/typing/npydecl.py:958: NumbaPerformanceWarning: '@' is faster on contiguous arrays, called on (array(float32, 2d, A), array(float32, 2d, C))
warnings.warn(NumbaPerformanceWarning(msg))
/home/moon/anaconda3/envs/pointpillars/lib/python3.6/site-packages/numba/typing/npydecl.py:958: NumbaPerformanceWarning: '@' is faster on contiguous arrays, called on (array(float32, 2d, A), array(float32, 2d, C))
warnings.warn(NumbaPerformanceWarning(msg))
/home/moon/nutonomy_pointpillars/second/core/geometry.py:137: NumbaWarning:
Compilation is falling back to object mode WITH looplifting enabled because Function "points_in_convex_polygon_jit" failed type inference due to: Invalid use of Function() with argument(s) of type(s): (array(float32, 3d, C), Tuple(slice<a:b>, list(int64), slice<a:b>))
parameterized
In definition 0:
All templates rejected with literals.
In definition 1:
All templates rejected without literals.
In definition 2:
All templates rejected with literals.
In definition 3:
All templates rejected without literals.
In definition 4:
All templates rejected with literals.
In definition 5:
All templates rejected without literals.
In definition 6:
All templates rejected with literals.
In definition 7:
All templates rejected without literals.
In definition 8:
All templates rejected with literals.
In definition 9:
All templates rejected without literals.
In definition 10:
All templates rejected with literals.
In definition 11:
All templates rejected without literals.
In definition 12:
TypeError: unsupported array index type list(int64) in Tuple(slice<a:b>, list(int64), slice<a:b>)
raised from /home/moon/anaconda3/envs/pointpillars/lib/python3.6/site-packages/numba/typing/arraydecl.py:71
In definition 13:
TypeError: unsupported array index type list(int64) in Tuple(slice<a:b>, list(int64), slice<a:b>)
raised from /home/moon/anaconda3/envs/pointpillars/lib/python3.6/site-packages/numba/typing/arraydecl.py:71
This error is usually caused by passing an argument of a type that is unsupported by the named function.
[1] During: typing of intrinsic-call at /home/moon/nutonomy_pointpillars/second/core/geometry.py (153)
File "core/geometry.py", line 153:
def points_in_convex_polygon_jit(points, polygon, clockwise=True):
@numba.jit
/home/moon/nutonomy_pointpillars/second/core/geometry.py:137: NumbaWarning:
Compilation is falling back to object mode WITH looplifting enabled because Function "points_in_convex_polygon_jit" failed type inference due to: Invalid use of Function() with argument(s) of type(s): (array(float32, 3d, C), Tuple(slice<a:b>, list(int64), slice<a:b>))
parameterized
In definition 0:
All templates rejected with literals.
In definition 1:
All templates rejected without literals.
In definition 2:
All templates rejected with literals.
In definition 3:
All templates rejected without literals.
In definition 4:
All templates rejected with literals.
In definition 5:
All templates rejected without literals.
In definition 6:
All templates rejected with literals.
In definition 7:
All templates rejected without literals.
In definition 8:
All templates rejected with literals.
In definition 9:
All templates rejected without literals.
In definition 10:
All templates rejected with literals.
In definition 11:
All templates rejected without literals.
In definition 12:
TypeError: unsupported array index type list(int64) in Tuple(slice<a:b>, list(int64), slice<a:b>)
raised from /home/moon/anaconda3/envs/pointpillars/lib/python3.6/site-packages/numba/typing/arraydecl.py:71
In definition 13:
TypeError: unsupported array index type list(int64) in Tuple(slice<a:b>, list(int64), slice<a:b>)
raised from /home/moon/anaconda3/envs/pointpillars/lib/python3.6/site-packages/numba/typing/arraydecl.py:71
This error is usually caused by passing an argument of a type that is unsupported by the named function.
[1] During: typing of intrinsic-call at /home/moon/nutonomy_pointpillars/second/core/geometry.py (153)
File "core/geometry.py", line 153:
def points_in_convex_polygon_jit(points, polygon, clockwise=True):
@numba.jit
/home/moon/nutonomy_pointpillars/second/core/geometry.py:137: NumbaWarning:
Compilation is falling back to object mode WITHOUT looplifting enabled because Function "points_in_convex_polygon_jit" failed type inference due to: cannot determine Numba type of <class 'numba.dispatcher.LiftedLoop'>
File "core/geometry.py", line 161:
def points_in_convex_polygon_jit(points, polygon, clockwise=True):
@numba.jit
/home/moon/nutonomy_pointpillars/second/core/geometry.py:137: NumbaWarning:
Compilation is falling back to object mode WITHOUT looplifting enabled because Function "points_in_convex_polygon_jit" failed type inference due to: cannot determine Numba type of <class 'numba.dispatcher.LiftedLoop'>
File "core/geometry.py", line 161:
def points_in_convex_polygon_jit(points, polygon, clockwise=True):
@numba.jit
/home/moon/anaconda3/envs/pointpillars/lib/python3.6/site-packages/numba/object_mode_passes.py:178: NumbaWarning: Function "points_in_convex_polygon_jit" was compiled in object mode without forceobj=True, but has lifted loops.
File "core/geometry.py", line 148:
def points_in_convex_polygon_jit(points, polygon, clockwise=True):
state.func_ir.loc))
/home/moon/anaconda3/envs/pointpillars/lib/python3.6/site-packages/numba/object_mode_passes.py:188: NumbaDeprecationWarning:
Fall-back from the nopython compilation path to the object mode compilation path has been detected, this is deprecated behaviour.
File "core/geometry.py", line 148:
def points_in_convex_polygon_jit(points, polygon, clockwise=True):
state.func_ir.loc))
/home/moon/anaconda3/envs/pointpillars/lib/python3.6/site-packages/numba/object_mode_passes.py:178: NumbaWarning: Function "points_in_convex_polygon_jit" was compiled in object mode without forceobj=True, but has lifted loops.
File "core/geometry.py", line 148:
def points_in_convex_polygon_jit(points, polygon, clockwise=True):
state.func_ir.loc))
/home/moon/anaconda3/envs/pointpillars/lib/python3.6/site-packages/numba/object_mode_passes.py:188: NumbaDeprecationWarning:
Fall-back from the nopython compilation path to the object mode compilation path has been detected, this is deprecated behaviour.
File "core/geometry.py", line 148:
def points_in_convex_polygon_jit(points, polygon, clockwise=True):
state.func_ir.loc))
Traceback (most recent call last):
File "pytorch/train.py", line 894, in <module>
fire.Fire()
File "/home/moon/anaconda3/envs/pointpillars/lib/python3.6/site-packages/fire/core.py", line 138, in Fire
component_trace = _Fire(component, args, parsed_flag_args, context, name)
File "/home/moon/anaconda3/envs/pointpillars/lib/python3.6/site-packages/fire/core.py", line 468, in _Fire
target=component.__name__)
File "/home/moon/anaconda3/envs/pointpillars/lib/python3.6/site-packages/fire/core.py", line 672, in _CallAndUpdateTrace
component = fn(*varargs, **kwargs)
File "pytorch/train.py", line 423, in train
raise e
File "pytorch/train.py", line 323, in train
ret_dict = net(input)
File "/home/moon/anaconda3/envs/pointpillars/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/home/moon/nutonomy_pointpillars/second/pytorch/models/voxelnet.py", line 701, in forward
preds_dict = self.rpn(spatial_features)
File "/home/moon/anaconda3/envs/pointpillars/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/home/moon/nutonomy_pointpillars/second/pytorch/models/voxelnet.py", line 472, in forward
x = torch.cat([up1, up2, up3], dim=1)
RuntimeError: invalid argument 0: Sizes of tensors must match except in dimension 1. Got 252 and 250 in dimension 2 at /opt/conda/conda-bld/pytorch_1579027003190/work/aten/src/THC/generic/THCTensorMath.cu:71
How should I fix the proto file for multi-class training?
Has anyone succeeded in training with multiple classes?
Best regards.
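A common cause of the "Got 252 and 250 in dimension 2" concat failure above is a BEV grid whose size is not divisible by the RPN's total stride, so the upsampled branches come back with slightly different spatial sizes. A sketch under assumed names and values (the standard KITTI config uses 0.16 m voxels over a 69.12 m range, which divides cleanly; the divisible-by-8 requirement is an assumption based on the usual three stride-2 stages):

```python
def bev_grid_ok(x_min, x_max, voxel_size, total_stride=8):
    """Return (grid_size, divisible_by_stride) for one BEV axis.

    If grid_size is not a multiple of the RPN's total downsample stride,
    the upsampled feature maps cannot be concatenated.
    """
    grid = round((x_max - x_min) / voxel_size)
    return grid, grid % total_stride == 0

print(bev_grid_ok(0.0, 69.12, 0.16))   # standard KITTI config
print(bev_grid_ok(0.0, 70.0, 0.16))    # a range that does not divide cleanly
```

So when editing the proto for multi-class training, check that any changed point_cloud_range / voxel_size pair still yields a grid that the RPN strides divide evenly.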
The following is what I met when training the model. Does anyone else have the same error?
/root/anaconda3/envs/pointpillars/lib/python3.6/site-packages/numba/cuda/envvars.py:17: NumbaWarning:
Environment variables with the 'NUMBAPRO' prefix are deprecated and consequently ignored, found use of NUMBAPRO_CUDA_DRIVER=/usr/lib/x86_64-linux-gnu/libcuda.so.
For more information about alternatives visit: ('https://numba.pydata.org/numba-doc/latest/cuda/overview.html', '#cudatoolkit-lookup')
warnings.warn(errors.NumbaWarning(msg))
/root/anaconda3/envs/pointpillars/lib/python3.6/site-packages/numba/cuda/envvars.py:17: NumbaWarning:
Environment variables with the 'NUMBAPRO' prefix are deprecated and consequently ignored, found use of NUMBAPRO_NVVM=/usr/local/cuda/nvvm/lib64/libnvvm.so.
For more information about alternatives visit: ('https://numba.pydata.org/numba-doc/latest/cuda/overview.html', '#cudatoolkit-lookup')
warnings.warn(errors.NumbaWarning(msg))
/root/anaconda3/envs/pointpillars/lib/python3.6/site-packages/numba/cuda/envvars.py:17: NumbaWarning:
Environment variables with the 'NUMBAPRO' prefix are deprecated and consequently ignored, found use of NUMBAPRO_LIBDEVICE=/usr/local/cuda/nvvm/libdevice.
For more information about alternatives visit: ('https://numba.pydata.org/numba-doc/latest/cuda/overview.html', '#cudatoolkit-lookup')
warnings.warn(errors.NumbaWarning(msg))
Traceback (most recent call last):
File "./pytorch/train.py", line 882, in <module>
fire.Fire()
File "/root/anaconda3/envs/pointpillars/lib/python3.6/site-packages/fire/core.py", line 141, in Fire
component_trace = _Fire(component, args, parsed_flag_args, context, name)
File "/root/anaconda3/envs/pointpillars/lib/python3.6/site-packages/fire/core.py", line 471, in _Fire
target=component.__name__)
File "/root/anaconda3/envs/pointpillars/lib/python3.6/site-packages/fire/core.py", line 681, in _CallAndUpdateTrace
component = fn(*varargs, **kwargs)
File "./pytorch/train.py", line 126, in train
text_format.Merge(proto_str, config)
File "/root/anaconda3/envs/pointpillars/lib/python3.6/site-packages/google/protobuf/text_format.py", line 735, in Merge
allow_unknown_field=allow_unknown_field)
File "/root/anaconda3/envs/pointpillars/lib/python3.6/site-packages/google/protobuf/text_format.py", line 803, in MergeLines
return parser.MergeLines(lines, message)
File "/root/anaconda3/envs/pointpillars/lib/python3.6/site-packages/google/protobuf/text_format.py", line 828, in MergeLines
self._ParseOrMerge(lines, message)
File "/root/anaconda3/envs/pointpillars/lib/python3.6/site-packages/google/protobuf/text_format.py", line 850, in _ParseOrMerge
self._MergeField(tokenizer, message)
File "/root/anaconda3/envs/pointpillars/lib/python3.6/site-packages/google/protobuf/text_format.py", line 980, in _MergeField
merger(tokenizer, message, field)
File "/root/anaconda3/envs/pointpillars/lib/python3.6/site-packages/google/protobuf/text_format.py", line 1055, in _MergeMessageField
self._MergeField(tokenizer, sub_message)
File "/root/anaconda3/envs/pointpillars/lib/python3.6/site-packages/google/protobuf/text_format.py", line 980, in _MergeField
merger(tokenizer, message, field)
File "/root/anaconda3/envs/pointpillars/lib/python3.6/site-packages/google/protobuf/text_format.py", line 1055, in _MergeMessageField
self._MergeField(tokenizer, sub_message)
File "/root/anaconda3/envs/pointpillars/lib/python3.6/site-packages/google/protobuf/text_format.py", line 980, in _MergeField
merger(tokenizer, message, field)
File "/root/anaconda3/envs/pointpillars/lib/python3.6/site-packages/google/protobuf/text_format.py", line 1055, in _MergeMessageField
self._MergeField(tokenizer, sub_message)
File "/root/anaconda3/envs/pointpillars/lib/python3.6/site-packages/google/protobuf/text_format.py", line 980, in _MergeField
merger(tokenizer, message, field)
File "/root/anaconda3/envs/pointpillars/lib/python3.6/site-packages/google/protobuf/text_format.py", line 1055, in _MergeMessageField
self._MergeField(tokenizer, sub_message)
File "/root/anaconda3/envs/pointpillars/lib/python3.6/site-packages/google/protobuf/text_format.py", line 947, in _MergeField
(message_descriptor.full_name, name))
google.protobuf.text_format.ParseError: 151:9 : Message type "second.protos.LearningRate" has no field named "exponential_decay_learning_rate".
When training the model, I use the PandaSet dataset, and I changed the point_cloud_range from X∈[-51.2, 51.2] to X∈[-100, 100]. The results show that training is hard to converge: even predicting on the training set, only a part of the objects are recalled. I also tried to train the model on a very small training set (only 10 samples), and there were no bbox predictions even on the training set. Why? Should the cls_loss (focal loss) function's parameters be adjusted?
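Hard to say without the full config, but one thing to check when stretching point_cloud_range: either the BEV grid grows (more memory, anchors spread over a larger area) or, if the grid is kept fixed, every voxel gets coarser, and the anchor-matching thresholds may no longer fit the targets. A toy illustration with assumed numbers (not from any particular config):

```python
# If the grid resolution is held fixed while the range roughly doubles,
# each voxel roughly doubles in size; the numbers below are illustrative.
def voxel_size_for(range_len, grid_cells):
    return range_len / grid_cells

print(round(voxel_size_for(102.4, 640), 4))   # X in [-51.2, 51.2]
print(round(voxel_size_for(200.0, 640), 4))   # X in [-100, 100]
```

Also, with only 10 samples, no predictions is not necessarily surprising: the focal-loss classifier can stay below the score threshold for a long time, so lowering the evaluation score threshold is a quicker sanity check than retuning the loss parameters.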
It seems second.core.non_max_suppression, point_cloud, and cc are compiled under Python 3.6 and cannot be imported in Python 3.7. Any tips on how to recompile them? Thanks.
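Compiled CPython extensions are tied to the interpreter's ABI, which is why .so files built under 3.6 fail to import under 3.7; rebuilding them by re-running the repo's build steps inside the 3.7 environment is the usual remedy. A snippet showing the tag your current interpreter expects:

```python
import sys
import sysconfig

# Extensions built for another interpreter were linked against a different
# ABI tag than the one this interpreter expects.
abi_tag = "cpython-%d%d" % sys.version_info[:2]
print(abi_tag, sysconfig.get_config_var("EXT_SUFFIX"))
```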
Hi, I followed your instructions step by step. After successfully creating the data, I started training with xyre_16.proto. However, there is an error as follows:
File "/root/train_pointpillars/nutonomy_pointpillars/second/pytorch/train.py", line 312, in train
assert 10 == len(ret_dict), "something write with training output size!"
AssertionError: something write with training output size!
When I run train.py, the errors show up like this:
voxel_features = voxel_features.permute(1, 0)
RuntimeError: number of dims don't match in permute
and
self.padding, self.dilation, self.groups)
RuntimeError: Calculated padded input size per channel: (0 x 100). Kernel size: (1 x 1). Kernel size can't be greater than actual input size
Is there anything wrong with the input or the network?
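Both errors point at the tensor handed to the pillar feature network rather than at the network itself: an empty voxel batch yields the "(0 x 100)" convolution input, and a tensor of unexpected rank breaks the permute. A hedged sketch of a pre-flight check (function and messages are mine, not the repo's):

```python
import numpy as np

# Hypothetical sanity check for the two errors above: the pillar feature
# network expects a non-empty (P, N, D) voxel tensor. Zero pillars produces
# the "(0 x 100)" conv input; a wrong rank breaks the permute.
def validate_pillars(voxel_features):
    if voxel_features.ndim != 3:
        return "unexpected rank %d" % voxel_features.ndim
    if voxel_features.shape[0] == 0:
        return "no pillars: check point_cloud_range and the input cloud"
    return "ok"

print(validate_pillars(np.zeros((0, 100, 9), dtype=np.float32)))
print(validate_pillars(np.zeros((12000, 100, 9), dtype=np.float32)))
```

In practice this usually means the loaded point cloud is empty or lies entirely outside the configured point_cloud_range.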
I want to train PointPillars on an RGB-D dataset (for example, data collected by a RealSense) to do 3D detection of small objects like cups or boxes. Do you think this will work? If you have any other tips or suggestions, thanks.
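It can work in principle once the RGB-D frames are turned into point clouds and the pillar and range parameters are shrunk to tabletop scale. A minimal pinhole back-projection sketch (fx, fy, cx, cy are assumed camera intrinsics; this is not RealSense API code):

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project an (H, W) depth image into an (M, 3) x/y/z point cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    pts = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]        # keep only pixels with valid depth

pts = depth_to_points(np.ones((4, 4), dtype=np.float32), 1.0, 1.0, 2.0, 2.0)
print(pts.shape)
```

For cup-sized objects you would also want much smaller pillars (centimetres rather than the 0.16 m used for cars) and a correspondingly reduced point_cloud_range, so expect to retune the voxelizer and anchor sizes.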
Hi,
I trained a network using the original repository. I'm getting the following error when I use this repository to convert the model to an ONNX model. Do you have any idea what the reason could be? I can use my model for inference without any problem, but the conversion is giving me trouble.
TIA,
RuntimeError: Error(s) in loading state_dict for VoxelNet: Missing key(s) in state_dict: "voxel_feature_extractor.pfn_layers.0.conv1.weight", "voxel_feature_extractor.pfn_layers.0.conv1.bias", "voxel_feature_extractor.pfn_layers.0.conv2.weight", "voxel_feature_extractor.pfn_layers.0.conv2.bias", "voxel_feature_extractor.pfn_layers.0.t_conv.weight", "voxel_feature_extractor.pfn_layers.0.t_conv.bias", "voxel_feature_extractor.pfn_layers.0.conv3.weight", "voxel_feature_extractor.pfn_layers.0.conv3.bias".
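The missing keys suggest the two repos build the PFN layer differently (the conv1/conv2/t_conv/conv3 names indicate this repo expects 1x1-conv weights for ONNX export where the original checkpoint stored Linear weights), so the state_dict keys and shapes no longer line up. As a rough illustration, a (out, in) linear weight holds the same numbers as a (out, in, 1, 1) 1x1-conv weight; the mapping below is illustrative only and does not reproduce the repos' actual key names:

```python
import numpy as np

# A fully-connected weight of shape (out, in) and a 1x1 convolution weight
# of shape (out, in, 1, 1) contain identical numbers, so in principle a
# Linear-based checkpoint can be reshaped for a conv-based model. The real
# fix still requires matching the repos' key names, which this does not do.
def linear_to_conv1x1(weight):
    out_c, in_c = weight.shape
    return weight.reshape(out_c, in_c, 1, 1)

w = np.arange(12.0).reshape(3, 4)
print(linear_to_conv1x1(w).shape)
```

The simpler route, if possible, is to retrain (or re-export) with this repository's model definition so the checkpoint and the export-time model agree.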