
nutonomy_pointpillars's Introduction

Convert the PointPillars PyTorch Model to ONNX, and Use TensorRT to Load the ONNX IR for Fast Inference

Welcome to PointPillars (this README originates from the nuTonomy/second.pytorch README).

This repo demonstrates how to reproduce the results from PointPillars: Fast Encoders for Object Detection from Point Clouds (CVPR 2019) on the KITTI dataset by making the minimum required changes to the preexisting open-source codebase SECOND.

Parts of this code also draw on the open-source repository k0suke-murakami/train_point_pillars (https://github.com/k0suke-murakami/train_point_pillars).

This is not an official nuTonomy codebase, but it can be used to match the published PointPillars results.

WARNING: This code is not being actively maintained. This code can be used to reproduce the results in the first version of the paper, https://arxiv.org/abs/1812.05784v1. For an actively maintained repository that can also reproduce PointPillars results on nuScenes, we recommend using SECOND. We are not the owners of the repository, but we have worked with the author and endorse his code.

Example Results

Getting Started

This is a fork of SECOND for KITTI object detection and the relevant subset of the original README is reproduced here.

Docker Environments

If you do not want to spend time setting up the PointPillars environment, pull the prebuilt Docker image:

docker pull smallmunich/suke_pointpillars:v1 

Note: after launching this Docker environment, run this command:

conda activate pointpillars 

Then you can run the training, evaluation, or ONNX model generation commands.

Install

1. Clone code

git clone https://github.com/SmallMunich/nutonomy_pointpillars.git

2. Install Python packages

It is recommended to use the Anaconda package manager.

First, use Anaconda to configure as many packages as possible.

conda create -n pointpillars python=3.6 anaconda
source activate pointpillars
conda install shapely pybind11 protobuf scikit-image numba pillow
conda install pytorch torchvision -c pytorch
conda install google-sparsehash -c bioconda

Then use pip for the packages missing from Anaconda.

pip install --upgrade pip
pip install fire tensorboardX

Finally, install SparseConvNet. This is not required for PointPillars, but the general SECOND code base expects it to be correctly configured. However, I suggest installing spconv instead of SparseConvNet.

git clone [email protected]:facebookresearch/SparseConvNet.git
cd SparseConvNet/
bash build.sh
# NOTE: if bash build.sh fails, try bash develop.sh instead

Additionally, you may need to install Boost geometry:

sudo apt-get install libboost-all-dev

3. Setup cuda for numba

Add the following environment variables for numba to your ~/.bashrc:

export NUMBAPRO_CUDA_DRIVER=/usr/lib/x86_64-linux-gnu/libcuda.so
export NUMBAPRO_NVVM=/usr/local/cuda/nvvm/lib64/libnvvm.so
export NUMBAPRO_LIBDEVICE=/usr/local/cuda/nvvm/libdevice
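
To quickly check that numba can actually see your GPU after setting these variables (a minimal sanity check, not part of the original instructions), run this in a fresh shell:

from numba import cuda

cuda.detect()                 # prints the detected devices and toolkit libraries
print(cuda.is_available())    # True if a usable CUDA context can be created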

4. PYTHONPATH

Add nutonomy_pointpillars/ to your PYTHONPATH.

export PYTHONPATH=$PYTHONPATH:/your_root_path/nutonomy_pointpillars/

Prepare dataset

1. Dataset preparation

Download KITTI dataset and create some directories first:

└── KITTI_DATASET_ROOT
       ├── training    <-- 7481 train data
       |   ├── image_2 <-- for visualization
       |   ├── calib
       |   ├── label_2
       |   ├── velodyne
       |   └── velodyne_reduced <-- empty directory
       └── testing     <-- 7580 test data
           ├── image_2 <-- for visualization
           ├── calib
           ├── velodyne
           └── velodyne_reduced <-- empty directory

Note: PointPillar's protos use KITTI_DATASET_ROOT=/data/sets/kitti_second/.
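
The empty velodyne_reduced directories must exist before the data scripts run; a minimal sketch for creating them (assuming KITTI_DATASET_ROOT is /data/sets/kitti_second as in the protos; adjust to your layout):

import os

root = "/data/sets/kitti_second"  # KITTI_DATASET_ROOT; adjust to your path
for split in ("training", "testing"):
    os.makedirs(os.path.join(root, split, "velodyne_reduced"), exist_ok=True)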

2. Create kitti infos:

python create_data.py create_kitti_info_file --data_path=KITTI_DATASET_ROOT

3. Create reduced point cloud:

python create_data.py create_reduced_point_cloud --data_path=KITTI_DATASET_ROOT

4. Create groundtruth-database infos:

python create_data.py create_groundtruth_database --data_path=KITTI_DATASET_ROOT

5. Modify config file

The config file needs to be edited to point to the above datasets:

train_input_reader: {
  ...
  database_sampler {
    database_info_path: "/path/to/kitti_dbinfos_train.pkl"
    ...
  }
  kitti_info_path: "/path/to/kitti_infos_train.pkl"
  kitti_root_path: "KITTI_DATASET_ROOT"
}
...
eval_input_reader: {
  ...
  kitti_info_path: "/path/to/kitti_infos_val.pkl"
  kitti_root_path: "KITTI_DATASET_ROOT"
}
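
To verify that your edited config still parses before launching a long training run, you can load it the same way train.py does (a sketch; TrainEvalPipelineConfig is assumed to be the pipeline message defined in second/protos):

from google.protobuf import text_format
from second.protos import pipeline_pb2

config = pipeline_pb2.TrainEvalPipelineConfig()
with open("configs/pointpillars/car/xyres_16.proto", "r") as f:
    text_format.Merge(f.read(), config)
print(config.train_input_reader.kitti_info_path)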

Train

cd ~/second.pytorch/second
python ./pytorch/train.py train --config_path=./configs/pointpillars/car/xyres_16.proto --model_dir=/path/to/model_dir
  • If you want to train a new model, make sure "/path/to/model_dir" doesn't exist.
  • If "/path/to/model_dir" does exist, training will be resumed from the last checkpoint.
  • Training only supports a single GPU.
  • Training uses a batchsize=2 which should fit in memory on most standard GPUs.
  • On a single 1080Ti, training xyres_16 requires approximately 20 hours for 160 epochs.

Evaluate

cd ~/second.pytorch/second/
python pytorch/train.py evaluate --config_path=configs/pointpillars/car/xyres_16.proto --model_dir=/path/to/model_dir
  • Detection results will be saved in model_dir/eval_results/step_xxx.
  • By default, results are stored as a result.pkl file. To save in the official KITTI label format, use --pickle_result=False.
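
A result.pkl file can be inspected with plain pickle (a minimal sketch; the exact structure of the stored detections is whatever the evaluation code writes, so explore the loaded object interactively):

import pickle

with open("/path/to/model_dir/eval_results/step_xxx/result.pkl", "rb") as f:
    detections = pickle.load(f)
print(type(detections), len(detections))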

ONNX IR Generate

To convert the PointPillars PyTorch model to ONNX IR, you need to modify some code as follows.

The file to edit is second/pytorch/models/voxelnet.py:

        voxel_features = self.voxel_feature_extractor(pillar_x, pillar_y, pillar_z, pillar_i,
                                                      num_points, x_sub_shaped, y_sub_shaped, mask)

        ###################################################################################
        # return voxel_features ### onnx voxel_features export
        # middle_feature_extractor for trim shape
        voxel_features = voxel_features.squeeze()
        voxel_features = voxel_features.permute(1, 0)

UNCOMMENT this line: return voxel_features

Then you can run the IR conversion command:

cd ~/second.pytorch/second/
python pytorch/train.py onnx_model_generate --config_path=configs/pointpillars/car/xyres_16.proto --model_dir=/path/to/model_dir
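
After the command finishes, you should have pfe.onnx and rpn.onnx. A quick structural check with the onnx package (a sketch, assuming onnx is installed in the environment):

import onnx

model = onnx.load("pfe.onnx")
onnx.checker.check_model(model)                   # raises if the graph is malformed
print(onnx.helper.printable_graph(model.graph))   # prints inputs, outputs, and ops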

Compare ONNX model With Pytorch Origin model predicts

  • If you want to check the converted pfe.onnx and rpn.onnx models, please refer to this py-file: check_onnx_valid.py

  • Now we can compare the ONNX results with the original PyTorch model's predictions as follows:

  • The pfe.onnx and rpn.onnx prediction files are located in "second/pytorch/onnx_predict_outputs"; inspect them carefully:

    eval_voxel_features.txt 
    eval_voxel_features_onnx.txt 
    eval_rpn_features.txt 
    eval_rpn_onnx_features.txt 
  • pfe.onnx compared with the original pfe layer: Example Results

  • rpn.onnx compared with the original rpn layer: Example Results
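
A minimal sketch for diffing the dumped text files above (assuming they are plain whitespace-separated arrays; adjust the np.loadtxt arguments if the layout differs):

import numpy as np

ref = np.loadtxt("second/pytorch/onnx_predict_outputs/eval_voxel_features.txt")
onnx_out = np.loadtxt("second/pytorch/onnx_predict_outputs/eval_voxel_features_onnx.txt")
print("max abs diff:", np.abs(ref - onnx_out).max())
print("allclose:", np.allclose(ref, onnx_out, atol=1e-4))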

Compare ONNX with TensorRT Fast Speed Inference

  • First, you need this environment (onnx_tensorrt):
      docker pull smallmunich/onnx_tensorrt:latest
  • If you want to use the pfe.onnx and rpn.onnx models for TensorRT inference, please refer to this py-file: tensorrt_onnx_infer.py

  • Now we can compare the ONNX results with the TensorRT predictions as follows:

  • The pfe.onnx and rpn.onnx prediction files are located in "second/pytorch/onnx_predict_outputs"; inspect them carefully:

    pfe_rpn_onnx_outputs.txt 
    pfe_tensorrt_outputs.txt 
    rpn_onnx_outputs.txt 
    rpn_tensorrt_outputs.txt 
  • pfe.onnx compared with the TensorRT pfe layer: Example Results

  • rpn.onnx compared with the TensorRT rpn layer: Example Results
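
For reference, the onnx_tensorrt image above ships the onnx-tensorrt Python backend; a minimal sketch of pushing an exported model through it (input shapes are read from the graph itself; filtering initializers matters for older PyTorch exports that list weights among the graph inputs):

import numpy as np
import onnx
import onnx_tensorrt.backend as backend

model = onnx.load("pfe.onnx")
engine = backend.prepare(model, device="CUDA:0")

# Build dummy data for every true graph input (skip weight initializers).
init_names = {t.name for t in model.graph.initializer}
feeds = [np.random.rand(*[d.dim_value for d in i.type.tensor_type.shape.dim]).astype(np.float32)
         for i in model.graph.input if i.name not in init_names]

outputs = engine.run(feeds)
print([o.shape for o in outputs])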

Blog Address


nutonomy_pointpillars's Issues

I don't get the input and output shapes of pfe.onnx and rpn.onnx

Hi, I have a stupid question.
Can anyone explain the input shapes to pfe.onnx and rpn.onnx? I have difficulty understanding how to apply them to my point clouds. What I see is that I somehow need to create pillars and shape x, y, z individually to [1, 1, 12000, 100], and I don't see how that corresponds to the described tensor (D, P, N) with 9 features D, pillars P, and points per pillar N. Furthermore, what is the output of pfe.onnx, and most importantly, what are the input and output of rpn.onnx? I'm just not managing to relate the given network shapes to the expected bounding boxes.

Thanks

docker pull failed

sudo docker pull smallmunich/suke_pointpillars:v0

Error response from daemon: manifest for smallmunich/suke_pointpillars:v0 not found: manifest unknown: manifest unknown

environment:
ubuntu 16.04
Docker version 19.03.7

PointPillars NuScenes Model Conversion

Thank you for your code!

I am currently working on converting a trained PointPillars NuScenes model (latest SECOND code) to ONNX. However, there is an error during onnx_model_generate:

size mismatch for rpn.conv_cls.weight: copying a param with shape torch.Size([200, 384, 1, 1]) from checkpoint, the shape in current model is torch.Size([2, 384, 1, 1]).
size mismatch for rpn.conv_cls.bias: copying a param with shape torch.Size([200]) from checkpoint, the shape in current model is torch.Size([2]).
size mismatch for rpn.conv_box.weight: copying a param with shape torch.Size([140, 384, 1, 1]) from checkpoint, the shape in current model is torch.Size([14, 384, 1, 1]).
size mismatch for rpn.conv_box.bias: copying a param with shape torch.Size([140]) from checkpoint, the shape in current model is torch.Size([14]).
size mismatch for rpn.conv_dir_cls.weight: copying a param with shape torch.Size([40, 384, 1, 1]) from checkpoint, the shape in current model is torch.Size([4, 384, 1, 1]).
size mismatch for rpn.conv_dir_cls.bias: copying a param with shape torch.Size([40]) from checkpoint, the shape in current model is torch.Size([4]).

Any suggestions or overall guidelines would be appreciated!

Converting to TensorRT made inference five times slower

I converted the model to TensorRT following your method; inference became nearly five times slower, although the first three decimal places of the predictions are identical.
Here is my environment configuration:

Collecting environment information...
PyTorch version: 1.2.0
Is debug build: No
CUDA used to build PyTorch: 10.0.130
OS: Ubuntu 16.04.5 LTS
GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.11) 5.4.0 20160609
CMake version: version 3.14.6
Python version: 3.6
Is CUDA available: Yes
CUDA runtime version: 10.0
GPU models and configuration: GPU 0: GeForce RTX 2060
Nvidia driver version: 430.50
cuDNN version: 7.6.3
tensorrt version: TensorRT-6.0.1.5
onnxruntime version: 0.5.0
onnx version:1.5.0
protobuf version: 3.9.1
Versions of relevant libraries:
[pip] numpy==1.16.5
[pip] torch==1.2.0
[pip] torchvision==0.4.0a0+6b959ee
[conda] Could not collect

[999] Call to cuInit results in CUDA_ERROR_UNKNOWN: when create_data.py is run

Facing the error below:

$ python create_data.py create_kitti_info_file --data_path=KITTI_DATASET_ROOT
/home/mohsin/anaconda3/envs/pointpillars/lib/python3.6/site-packages/numba/cuda/envvars.py:17: NumbaWarning:
Environment variables with the 'NUMBAPRO' prefix are deprecated and consequently ignored, found use of NUMBAPRO_CUDA_DRIVER=/usr/lib/x86_64-linux-gnu/libcuda.so.

For more information about alternatives visit: ('https://numba.pydata.org/numba-doc/latest/cuda/overview.html', '#cudatoolkit-lookup')
warnings.warn(errors.NumbaWarning(msg))
/home/mohsin/anaconda3/envs/pointpillars/lib/python3.6/site-packages/numba/cuda/envvars.py:17: NumbaWarning:
Environment variables with the 'NUMBAPRO' prefix are deprecated and consequently ignored, found use of NUMBAPRO_NVVM=/usr/local/cuda/nvvm/lib64/libnvvm.so.

For more information about alternatives visit: ('https://numba.pydata.org/numba-doc/latest/cuda/overview.html', '#cudatoolkit-lookup')
warnings.warn(errors.NumbaWarning(msg))
/home/mohsin/anaconda3/envs/pointpillars/lib/python3.6/site-packages/numba/cuda/envvars.py:17: NumbaWarning:
Environment variables with the 'NUMBAPRO' prefix are deprecated and consequently ignored, found use of NUMBAPRO_LIBDEVICE=/usr/local/cuda/nvvm/libdevice.

For more information about alternatives visit: ('https://numba.pydata.org/numba-doc/latest/cuda/overview.html', '#cudatoolkit-lookup')
warnings.warn(errors.NumbaWarning(msg))
Traceback (most recent call last):
  File "/home/mohsin/anaconda3/envs/pointpillars/lib/python3.6/site-packages/numba/cuda/cudadrv/driver.py", line 237, in initialize
    self.cuInit(0)
  File "/home/mohsin/anaconda3/envs/pointpillars/lib/python3.6/site-packages/numba/cuda/cudadrv/driver.py", line 300, in safe_cuda_api_call
    self._check_error(fname, retcode)
  File "/home/mohsin/anaconda3/envs/pointpillars/lib/python3.6/site-packages/numba/cuda/cudadrv/driver.py", line 335, in _check_error
    raise CudaAPIError(retcode, msg)
numba.cuda.cudadrv.driver.CudaAPIError: [999] Call to cuInit results in CUDA_ERROR_UNKNOWN

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "create_data.py", line 9, in <module>
    from second.core import box_np_ops
  File "/mnt/d/Downloads/POINTPILLARS/nutonomy_pointpillars/second/core/box_np_ops.py", line 7, in <module>
    from second.core.non_max_suppression.nms_gpu import rotate_iou_gpu_eval
  File "/mnt/d/Downloads/POINTPILLARS/nutonomy_pointpillars/second/core/non_max_suppression/__init__.py", line 2, in <module>
    from second.core.non_max_suppression.nms_gpu import (nms_gpu, rotate_iou_gpu,
  File "/mnt/d/Downloads/POINTPILLARS/nutonomy_pointpillars/second/core/non_max_suppression/nms_gpu.py", line 36, in <module>
    @cuda.jit('(int64, float32, float32[:, :], uint64[:])')
  File "/home/mohsin/anaconda3/envs/pointpillars/lib/python3.6/site-packages/numba/cuda/decorators.py", line 95, in kernel_jit
    return Dispatcher(func, [func_or_sig], targetoptions=targetoptions)
  File "/home/mohsin/anaconda3/envs/pointpillars/lib/python3.6/site-packages/numba/cuda/compiler.py", line 899, in __init__
    self.compile(sigs[0])
  File "/home/mohsin/anaconda3/envs/pointpillars/lib/python3.6/site-packages/numba/cuda/compiler.py", line 1102, in compile
    kernel.bind()
  File "/home/mohsin/anaconda3/envs/pointpillars/lib/python3.6/site-packages/numba/cuda/compiler.py", line 590, in bind
    self._func.get()
  File "/home/mohsin/anaconda3/envs/pointpillars/lib/python3.6/site-packages/numba/cuda/compiler.py", line 433, in get
    cuctx = get_context()
  File "/home/mohsin/anaconda3/envs/pointpillars/lib/python3.6/site-packages/numba/cuda/cudadrv/devices.py", line 212, in get_context
    return _runtime.get_or_create_context(devnum)
  File "/home/mohsin/anaconda3/envs/pointpillars/lib/python3.6/site-packages/numba/cuda/cudadrv/devices.py", line 138, in get_or_create_context
    return self._get_or_create_context_uncached(devnum)
  File "/home/mohsin/anaconda3/envs/pointpillars/lib/python3.6/site-packages/numba/cuda/cudadrv/devices.py", line 151, in _get_or_create_context_uncached
    with driver.get_active_context() as ac:
  File "/home/mohsin/anaconda3/envs/pointpillars/lib/python3.6/site-packages/numba/cuda/cudadrv/driver.py", line 393, in __enter__
    driver.cuCtxGetCurrent(byref(hctx))
  File "/home/mohsin/anaconda3/envs/pointpillars/lib/python3.6/site-packages/numba/cuda/cudadrv/driver.py", line 280, in __getattr__
    self.initialize()
  File "/home/mohsin/anaconda3/envs/pointpillars/lib/python3.6/site-packages/numba/cuda/cudadrv/driver.py", line 240, in initialize
    raise CudaSupportError("Error at driver init: \n%s:" % e)
numba.cuda.cudadrv.error.CudaSupportError: Error at driver init:
[999] Call to cuInit results in CUDA_ERROR_UNKNOWN:

I have completed the setup mentioned in the README, downloaded the mentioned KITTI datasets, and placed them as per the specified directory structure:

└── KITTI_DATASET_ROOT
       ├── training    <-- 7481 train data
       |   ├── image_2 <-- for visualization
       |   ├── calib
       |   ├── label_2
       |   ├── velodyne
       |   └── velodyne_reduced <-- empty directory
       └── testing     <-- 7580 test data
           ├── image_2 <-- for visualization
           ├── calib
           ├── velodyne
           └── velodyne_reduced <-- empty directory

I placed this KITTI_DATASET_ROOT inside nutonomy_pointpillars/second/data/ImageSets/.

$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2021 NVIDIA Corporation
Built on Sun_Aug_15_21:14:11_PDT_2021
Cuda compilation tools, release 11.4, V11.4.120
Build cuda_11.4.r11.4/compiler.30300941_0

CUDA Information when $ numba -s command is run:
CUDA Information
CUDA Device Initialized : False
CUDA Driver Version : ?
CUDA Detect Output:
None
CUDA Libraries Test Output:
None

I am trying this in WSL.

run create_data.py error

Traceback (most recent call last):
  File "/media/buaa/MyPassport/Deeplidar/nutonomy_pointpillars/second/core/non_max_suppression/nms_cpu.py", line 11, in <module>
    from second.core.non_max_suppression.nms import (
ImportError: /media/buaa/MyPassport/Deeplidar/nutonomy_pointpillars/second/core/non_max_suppression/nms.so: cannot open shared object file: No such file or directory

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "create_data.py", line 12, in <module>
    from second.core import box_np_ops
  File "/media/buaa/MyPassport/Deeplidar/nutonomy_pointpillars/second/core/box_np_ops.py", line 7, in <module>
    from second.core.non_max_suppression.nms_gpu import rotate_iou_gpu_eval
  File "/media/buaa/MyPassport/Deeplidar/nutonomy_pointpillars/second/core/non_max_suppression/__init__.py", line 1, in <module>
    from second.core.non_max_suppression.nms_cpu import nms_jit, soft_nms_jit
  File "/media/buaa/MyPassport/Deeplidar/nutonomy_pointpillars/second/core/non_max_suppression/nms_cpu.py", line 19, in <module>
    cuda=True)
  File "/media/buaa/MyPassport/Deeplidar/nutonomy_pointpillars/second/utils/buildtools/pybind11_build.py", line 96, in load_pb11
    cmds.append(Nvcc(s, out(s), arch))
  File "/media/buaa/MyPassport/Deeplidar/nutonomy_pointpillars/second/utils/buildtools/command.py", line 123, in __init__
    raise ValueError("you must specify arch if use cuda.")
ValueError: you must specify arch if use cuda.

I met this problem when I ran create_data.py on a Xavier. Can you help me solve it?

Confused about dilated convolution replacing max pooling

Hello, I'm confused about this change in class PillarFeatureNet:

self.t_conv = nn.ConvTranspose2d(100, 1, (1, 8), stride=(1, 7))
self.conv3 = nn.Conv2d(64, 64, kernel_size=(1, 34), stride=(1, 1), dilation=(1, 3))

def forward(self, input):
    x = self.conv1(input)
    x = self.norm(x)
    x = F.relu(x)
    x = self.conv3(x)
    return x
    # x = self.linear(input)
    # x = self.norm(x.permute(0, 2, 1).contiguous()).permute(0, 2, 1).contiguous()
    # x = F.relu(x)
    #
    # x_max = torch.max(x, dim=1, keepdim=True)[0]
    #
    # if self.last_vfe:
    #     return x_max
    # else:
    #     x_repeat = x_max.repeat(1, inputs.shape[1], 1)
    #     x_concatenated = torch.cat([x, x_repeat], dim=2)
    #     return x_concatenated

This uses a convolution to replace the max-pooling op.
Although the output shapes are the same, is this change equivalent? Do you know of any reference material about this?

I have trouble with multi-class training

Hello, I want to train multiple classes (car, pedestrian, cyclist),
so I've modified the proto file in configs as below.

model: {
  second: {
    voxel_generator {
      point_cloud_range : [0, -40, -3, 70, 40, 1]
      voxel_size : [0.16, 0.16, 4]
      max_number_of_points_per_voxel : 100
    }
    num_class: 3
    voxel_feature_extractor: {
      module_class_name: "PillarFeatureNet"
      num_filters: [64]
      with_distance: false
    }
    middle_feature_extractor: {
      module_class_name: "PointPillarsScatter"
    }
    rpn: {
      module_class_name: "RPN"
      layer_nums: [3, 5, 5]
      layer_strides: [2, 2, 2]
      num_filters: [64, 128, 256]
      upsample_strides: [1, 2, 4]
      num_upsample_filters: [128, 128, 128]
      use_groupnorm: false
      num_groups: 32
    }
    loss: {
      classification_loss: {
        weighted_sigmoid_focal: {
          alpha: 0.25
          gamma: 2.0
          anchorwise_output: true
        }
      }
      localization_loss: {
        weighted_smooth_l1: {
          sigma: 3.0
          code_weight: [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
        }
      }
      classification_weight: 1.0
      localization_weight: 2.0
    }
    # Outputs
    use_sigmoid_score: true
    encode_background_as_zeros: true
    encode_rad_error_by_sin: true
    use_direction_classifier: true
    direction_loss_weight: 0.2
    use_aux_classifier: false
    # Loss
    pos_class_weight: 1.0
    neg_class_weight: 1.0
    loss_norm_type: NormByNumPositives
    # Postprocess
    post_center_limit_range: [0, -40, -5, 70, 40, 5]
    use_rotate_nms: false
    use_multi_class_nms: false
    nms_pre_max_size: 1000
    nms_post_max_size: 300
    nms_score_threshold: 0.05
    nms_iou_threshold: 0.5
    use_bev: false
    num_point_features: 4
    without_reflectivity: false
    box_coder: {
      ground_box3d_coder: {
        linear_dim: false
        encode_angle_vector: false
      }
    }
    target_assigner: {
      anchor_generators: {
        anchor_generator_stride: {
          sizes: [1.6, 3.9, 1.56] # wlh
          strides: [0.16, 0.16, 0.0] # if generate only 1 z_center, z_stride will be ignored
          offsets: [0.8, -39.2, -1.78] # origin_offset + strides / 2
          rotations: [0, 1.57] # 0, pi/2
          matched_threshold : 0.6
          unmatched_threshold : 0.45
        }
      }
      anchor_generators: {
        anchor_generator_stride: {
          sizes: [0.6, 1.76, 1.73] # wlh
          strides: [0.16, 0.16, 0.0] # if generate only 1 z_center, z_stride will be ignored
          offsets: [0.08, -39.2, -1.78] # origin_offset + strides / 2
          rotations: [0, 1.57] # 0, pi/2
          matched_threshold : 0.5
          unmatched_threshold : 0.35
        }
      }
      anchor_generators: {
        anchor_generator_stride: {
          sizes: [0.6, 0.8, 1.73] # wlh
          strides: [0.16, 0.16, 0.0] # if generate only 1 z_center, z_stride will be ignored
          offsets: [0.08, -39.2, -1.78] # origin_offset + strides / 2
          rotations: [0, 1.57] # 0, pi/2
          matched_threshold : 0.5
          unmatched_threshold : 0.35
        }
      }
      sample_positive_fraction : -1
      sample_size : 512
      region_similarity_calculator: {
        nearest_iou_similarity: {
        }
      }
    }
  }
}

train_input_reader: {
  #record_file_path: "/data/sets/kitti_second/kitti_train.tfrecord"
  class_names: ["Car", "Cyclist", "Pedestrian"]
  max_num_epochs : 160
  batch_size: 2
  prefetch_size : 25
  max_number_of_voxels: 12000
  shuffle_points: true
  num_workers: 2
  groundtruth_localization_noise_std: [0.25, 0.25, 0.25]
  groundtruth_rotation_uniform_noise: [-0.15707963267, 0.15707963267]
  global_rotation_uniform_noise: [-0.78539816, 0.78539816]
  global_scaling_uniform_noise: [0.95, 1.05]
  global_random_rotation_range_per_object: [0, 0]
  anchor_area_threshold: 1
  remove_points_after_sample: false
  groundtruth_points_drop_percentage: 0.0
  groundtruth_drop_max_keep_points: 15
  database_sampler {
    database_info_path: "/home/moon/nutonomy_pointpillars/second/data/sets/kitti_second/kitti_dbinfos_train.pkl"
    sample_groups {
      name_to_max_num {
        key: "Car"
        value: 15
      }
    }
    sample_groups {
      name_to_max_num {
        key: "Pedestrian"
        value: 8
      }
    }
    sample_groups {
      name_to_max_num {
        key: "Cyclist"
        value: 8
      }
    }
    database_prep_steps {
      filter_by_min_num_points {
        min_num_point_pairs {
          key: "Car"
          value: 5
        }
        min_num_point_pairs {
          key: "Pedestrian"
          value: 5
        }
        min_num_point_pairs {
          key: "Cyclist"
          value: 5
        }
      }
    }
    database_prep_steps {
      filter_by_difficulty {
        removed_difficulties: [-1]
      }
    }
    global_random_rotation_range_per_object: [0, 0]
    rate: 1.0
  }

  remove_unknown_examples: false
  remove_environment: false
  kitti_info_path: "/home/moon/nutonomy_pointpillars/second/data/sets/kitti_second/kitti_infos_train.pkl"
  kitti_root_path: "/home/moon/nutonomy_pointpillars/second/data/sets/kitti_second"
}

train_config: {
  optimizer: {
    adam_optimizer: {
      learning_rate: {
        exponential_decay_learning_rate: {
          initial_learning_rate: 0.0002
          decay_steps: 27840 # 1856 steps per epoch * 15 epochs
          decay_factor: 0.8
          staircase: true
        }
      }
      weight_decay: 0.0001
    }
    use_moving_average: false
  }
  inter_op_parallelism_threads: 4
  intra_op_parallelism_threads: 4
  steps: 296960 # 1856 steps per epoch * 160 epochs
  steps_per_eval: 9280 # 1856 steps per epoch * 5 epochs
  save_checkpoints_secs : 1800 # half hour
  save_summary_steps : 10
  enable_mixed_precision: false
  loss_scale_factor : 512.0
  clear_metrics_every_epoch: false
}

eval_input_reader: {
  #record_file_path: "/data/sets/kitti_second/kitti_val.tfrecord"
  class_names: ["Car", "Cyclist", "Pedestrian"]
  batch_size: 2
  max_num_epochs : 160
  prefetch_size : 25
  max_number_of_voxels: 12000
  shuffle_points: false
  num_workers: 3
  anchor_area_threshold: 1
  remove_environment: false
  kitti_info_path: "/home/moon/nutonomy_pointpillars/second/data/sets/kitti_second/kitti_infos_val.pkl"
  kitti_root_path: "/home/moon/nutonomy_pointpillars/second/data/sets/kitti_second"
}

When I tried training, the following error occurred.

middle_class_name PointPillarsScatter
num_trainable parameters: 74
{'Cyclist': 5, 'Pedestrian': 5, 'Car': 5}
[-1]
load 14357 Car database infos
load 2207 Pedestrian database infos
load 734 Cyclist database infos
load 1297 Van database infos
load 56 Person_sitting database infos
load 488 Truck database infos
load 224 Tram database infos
load 337 Misc database infos
After filter database:
load 10520 Car database infos
load 2066 Pedestrian database infos
load 580 Cyclist database infos
load 826 Van database infos
load 53 Person_sitting database infos
load 321 Truck database infos
load 199 Tram database infos
load 259 Misc database infos
remain number of infos: 3712
remain number of infos: 3769
WORKER 0 seed: 1592207469
WORKER 1 seed: 1592207470
/home/moon/nutonomy_pointpillars/second/core/geometry.py:97: NumbaWarning:
Compilation is falling back to object mode WITH looplifting enabled because Function "points_in_convex_polygon_3d_jit" failed type inference due to: Invalid use of type(CPUDispatcher(<function surface_equ_3d_jit at 0x7f6203c33158>)) with parameters (array(float64, 4d, A))

  • parameterized
    [1] During: resolving callee type: type(CPUDispatcher(<function surface_equ_3d_jit at 0x7f6203c33158>))
    [2] During: typing of call at /home/moon/nutonomy_pointpillars/second/core/geometry.py (118)

File "core/geometry.py", line 118:
def points_in_convex_polygon_3d_jit(points,

num_surfaces = np.full((num_polygons,), 9999999, dtype=np.int64)
normal_vec, d = surface_equ_3d_jit(polygon_surfaces[:, :, :3, :])
^

@numba.jit(nopython=False)
/home/moon/nutonomy_pointpillars/second/core/geometry.py:97: NumbaWarning:
Compilation is falling back to object mode WITH looplifting enabled because Function "points_in_convex_polygon_3d_jit" failed type inference due to: Invalid use of type(CPUDispatcher(<function surface_equ_3d_jit at 0x7f6203c33158>)) with parameters (array(float64, 4d, A))

  • parameterized
    [1] During: resolving callee type: type(CPUDispatcher(<function surface_equ_3d_jit at 0x7f6203c33158>))
    [2] During: typing of call at /home/moon/nutonomy_pointpillars/second/core/geometry.py (118)

File "core/geometry.py", line 118:
def points_in_convex_polygon_3d_jit(points,

num_surfaces = np.full((num_polygons,), 9999999, dtype=np.int64)
normal_vec, d = surface_equ_3d_jit(polygon_surfaces[:, :, :3, :])
^

@numba.jit(nopython=False)
/home/moon/nutonomy_pointpillars/second/core/geometry.py:97: NumbaWarning:
Compilation is falling back to object mode WITHOUT looplifting enabled because Function "points_in_convex_polygon_3d_jit" failed type inference due to: cannot determine Numba type of <class 'numba.dispatcher.LiftedLoop'>

File "core/geometry.py", line 123:
def points_in_convex_polygon_3d_jit(points,

sign = 0.0
for i in range(num_points):
^

@numba.jit(nopython=False)
/home/moon/nutonomy_pointpillars/second/core/geometry.py:97: NumbaWarning:
Compilation is falling back to object mode WITHOUT looplifting enabled because Function "points_in_convex_polygon_3d_jit" failed type inference due to: cannot determine Numba type of <class 'numba.dispatcher.LiftedLoop'>

File "core/geometry.py", line 123:
def points_in_convex_polygon_3d_jit(points,

sign = 0.0
for i in range(num_points):
^

@numba.jit(nopython=False)
/home/moon/anaconda3/envs/pointpillars/lib/python3.6/site-packages/numba/object_mode_passes.py:178: NumbaWarning: Function "points_in_convex_polygon_3d_jit" was compiled in object mode without forceobj=True, but has lifted loops.

File "core/geometry.py", line 113:
def points_in_convex_polygon_3d_jit(points,

"""
max_num_surfaces, max_num_points_of_surface = polygon_surfaces.shape[1:3]
^

state.func_ir.loc))
/home/moon/anaconda3/envs/pointpillars/lib/python3.6/site-packages/numba/object_mode_passes.py:188: NumbaDeprecationWarning:
Fall-back from the nopython compilation path to the object mode compilation path has been detected, this is deprecated behaviour.

For more information visit http://numba.pydata.org/numba-doc/latest/reference/deprecation.html#deprecation-of-object-mode-fall-back-behaviour-when-using-jit

File "core/geometry.py", line 113:
def points_in_convex_polygon_3d_jit(points,

"""
max_num_surfaces, max_num_points_of_surface = polygon_surfaces.shape[1:3]
^

state.func_ir.loc))
/home/moon/anaconda3/envs/pointpillars/lib/python3.6/site-packages/numba/object_mode_passes.py:178: NumbaWarning: Function "points_in_convex_polygon_3d_jit" was compiled in object mode without forceobj=True, but has lifted loops.

File "core/geometry.py", line 113:
def points_in_convex_polygon_3d_jit(points,

"""
max_num_surfaces, max_num_points_of_surface = polygon_surfaces.shape[1:3]
^

state.func_ir.loc))
/home/moon/anaconda3/envs/pointpillars/lib/python3.6/site-packages/numba/object_mode_passes.py:188: NumbaDeprecationWarning:
Fall-back from the nopython compilation path to the object mode compilation path has been detected, this is deprecated behaviour.

For more information visit http://numba.pydata.org/numba-doc/latest/reference/deprecation.html#deprecation-of-object-mode-fall-back-behaviour-when-using-jit

File "core/geometry.py", line 113:
def points_in_convex_polygon_3d_jit(points,

"""
max_num_surfaces, max_num_points_of_surface = polygon_surfaces.shape[1:3]
^

state.func_ir.loc))
/home/moon/nutonomy_pointpillars/second/core/preprocess.py:472: NumbaPerformanceWarning: '@' is faster on contiguous arrays, called on (array(float32, 2d, A), array(float32, 2d, C))
points[i:i + 1, :3] = points[i:i + 1, :3] @ rot_mat_T[j]
/home/moon/nutonomy_pointpillars/second/core/preprocess.py:472: NumbaPerformanceWarning: '@' is faster on contiguous arrays, called on (array(float32, 2d, A), array(float32, 2d, C))
points[i:i + 1, :3] = points[i:i + 1, :3] @ rot_mat_T[j]
/home/moon/anaconda3/envs/pointpillars/lib/python3.6/site-packages/numba/typing/npydecl.py:958: NumbaPerformanceWarning: '@' is faster on contiguous arrays, called on (array(float32, 2d, A), array(float32, 2d, C))
warnings.warn(NumbaPerformanceWarning(msg))
/home/moon/anaconda3/envs/pointpillars/lib/python3.6/site-packages/numba/typing/npydecl.py:958: NumbaPerformanceWarning: '@' is faster on contiguous arrays, called on (array(float32, 2d, A), array(float32, 2d, C))
warnings.warn(NumbaPerformanceWarning(msg))
/home/moon/nutonomy_pointpillars/second/core/geometry.py:137: NumbaWarning:
Compilation is falling back to object mode WITH looplifting enabled because Function "points_in_convex_polygon_jit" failed type inference due to: Invalid use of Function() with argument(s) of type(s): (array(float32, 3d, C), Tuple(slice<a:b>, list(int64), slice<a:b>))

  • parameterized
    In definition 0:
    All templates rejected with literals.
    In definition 1:
    All templates rejected without literals.
    In definition 2:
    All templates rejected with literals.
    In definition 3:
    All templates rejected without literals.
    In definition 4:
    All templates rejected with literals.
    In definition 5:
    All templates rejected without literals.
    In definition 6:
    All templates rejected with literals.
    In definition 7:
    All templates rejected without literals.
    In definition 8:
    All templates rejected with literals.
    In definition 9:
    All templates rejected without literals.
    In definition 10:
    All templates rejected with literals.
    In definition 11:
    All templates rejected without literals.
    In definition 12:
    TypeError: unsupported array index type list(int64) in Tuple(slice<a:b>, list(int64), slice<a:b>)
    raised from /home/moon/anaconda3/envs/pointpillars/lib/python3.6/site-packages/numba/typing/arraydecl.py:71
    In definition 13:
    TypeError: unsupported array index type list(int64) in Tuple(slice<a:b>, list(int64), slice<a:b>)
    raised from /home/moon/anaconda3/envs/pointpillars/lib/python3.6/site-packages/numba/typing/arraydecl.py:71
    This error is usually caused by passing an argument of a type that is unsupported by the named function.
    [1] During: typing of intrinsic-call at /home/moon/nutonomy_pointpillars/second/core/geometry.py (153)

File "core/geometry.py", line 153:
def points_in_convex_polygon_jit(points, polygon, clockwise=True):

vec1 = polygon - polygon[:, [num_points_of_polygon - 1] +
list(range(num_points_of_polygon - 1)), :]
^

@numba.jit
/home/moon/nutonomy_pointpillars/second/core/geometry.py:137: NumbaWarning:
Compilation is falling back to object mode WITH looplifting enabled because Function "points_in_convex_polygon_jit" failed type inference due to: Invalid use of Function() with argument(s) of type(s): (array(float32, 3d, C), Tuple(slice<a:b>, list(int64), slice<a:b>))

  • parameterized
    In definition 0:
    All templates rejected with literals.
    In definition 1:
    All templates rejected without literals.
    In definition 2:
    All templates rejected with literals.
    In definition 3:
    All templates rejected without literals.
    In definition 4:
    All templates rejected with literals.
    In definition 5:
    All templates rejected without literals.
    In definition 6:
    All templates rejected with literals.
    In definition 7:
    All templates rejected without literals.
    In definition 8:
    All templates rejected with literals.
    In definition 9:
    All templates rejected without literals.
    In definition 10:
    All templates rejected with literals.
    In definition 11:
    All templates rejected without literals.
    In definition 12:
    TypeError: unsupported array index type list(int64) in Tuple(slice<a:b>, list(int64), slice<a:b>)
    raised from /home/moon/anaconda3/envs/pointpillars/lib/python3.6/site-packages/numba/typing/arraydecl.py:71
    In definition 13:
    TypeError: unsupported array index type list(int64) in Tuple(slice<a:b>, list(int64), slice<a:b>)
    raised from /home/moon/anaconda3/envs/pointpillars/lib/python3.6/site-packages/numba/typing/arraydecl.py:71
    This error is usually caused by passing an argument of a type that is unsupported by the named function.
    [1] During: typing of intrinsic-call at /home/moon/nutonomy_pointpillars/second/core/geometry.py (153)

File "core/geometry.py", line 153:
def points_in_convex_polygon_jit(points, polygon, clockwise=True):

vec1 = polygon - polygon[:, [num_points_of_polygon - 1] +
list(range(num_points_of_polygon - 1)), :]
^

@numba.jit
/home/moon/nutonomy_pointpillars/second/core/geometry.py:137: NumbaWarning:
Compilation is falling back to object mode WITHOUT looplifting enabled because Function "points_in_convex_polygon_jit" failed type inference due to: cannot determine Numba type of <class 'numba.dispatcher.LiftedLoop'>

File "core/geometry.py", line 161:
def points_in_convex_polygon_jit(points, polygon, clockwise=True):

cross = 0.0
for i in range(num_points):
^

@numba.jit
/home/moon/nutonomy_pointpillars/second/core/geometry.py:137: NumbaWarning:
Compilation is falling back to object mode WITHOUT looplifting enabled because Function "points_in_convex_polygon_jit" failed type inference due to: cannot determine Numba type of <class 'numba.dispatcher.LiftedLoop'>

File "core/geometry.py", line 161:
def points_in_convex_polygon_jit(points, polygon, clockwise=True):

cross = 0.0
for i in range(num_points):
^

@numba.jit
/home/moon/anaconda3/envs/pointpillars/lib/python3.6/site-packages/numba/object_mode_passes.py:178: NumbaWarning: Function "points_in_convex_polygon_jit" was compiled in object mode without forceobj=True, but has lifted loops.

File "core/geometry.py", line 148:
def points_in_convex_polygon_jit(points, polygon, clockwise=True):

# first convert polygon to directed lines
num_points_of_polygon = polygon.shape[1]
^

state.func_ir.loc))
/home/moon/anaconda3/envs/pointpillars/lib/python3.6/site-packages/numba/object_mode_passes.py:188: NumbaDeprecationWarning:
Fall-back from the nopython compilation path to the object mode compilation path has been detected, this is deprecated behaviour.

For more information visit http://numba.pydata.org/numba-doc/latest/reference/deprecation.html#deprecation-of-object-mode-fall-back-behaviour-when-using-jit

File "core/geometry.py", line 148:
def points_in_convex_polygon_jit(points, polygon, clockwise=True):

# first convert polygon to directed lines
num_points_of_polygon = polygon.shape[1]
^

state.func_ir.loc))
/home/moon/anaconda3/envs/pointpillars/lib/python3.6/site-packages/numba/object_mode_passes.py:178: NumbaWarning: Function "points_in_convex_polygon_jit" was compiled in object mode without forceobj=True, but has lifted loops.

File "core/geometry.py", line 148:
def points_in_convex_polygon_jit(points, polygon, clockwise=True):

# first convert polygon to directed lines
num_points_of_polygon = polygon.shape[1]
^

state.func_ir.loc))
/home/moon/anaconda3/envs/pointpillars/lib/python3.6/site-packages/numba/object_mode_passes.py:188: NumbaDeprecationWarning:
Fall-back from the nopython compilation path to the object mode compilation path has been detected, this is deprecated behaviour.

For more information visit http://numba.pydata.org/numba-doc/latest/reference/deprecation.html#deprecation-of-object-mode-fall-back-behaviour-when-using-jit

File "core/geometry.py", line 148:
def points_in_convex_polygon_jit(points, polygon, clockwise=True):

# first convert polygon to directed lines
num_points_of_polygon = polygon.shape[1]
^

state.func_ir.loc))
Traceback (most recent call last):
  File "pytorch/train.py", line 894, in <module>
    fire.Fire()
  File "/home/moon/anaconda3/envs/pointpillars/lib/python3.6/site-packages/fire/core.py", line 138, in Fire
    component_trace = _Fire(component, args, parsed_flag_args, context, name)
  File "/home/moon/anaconda3/envs/pointpillars/lib/python3.6/site-packages/fire/core.py", line 468, in _Fire
    target=component.__name__)
  File "/home/moon/anaconda3/envs/pointpillars/lib/python3.6/site-packages/fire/core.py", line 672, in _CallAndUpdateTrace
    component = fn(*varargs, **kwargs)
  File "pytorch/train.py", line 423, in train
    raise e
  File "pytorch/train.py", line 323, in train
    ret_dict = net(input)
  File "/home/moon/anaconda3/envs/pointpillars/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/moon/nutonomy_pointpillars/second/pytorch/models/voxelnet.py", line 701, in forward
    preds_dict = self.rpn(spatial_features)
  File "/home/moon/anaconda3/envs/pointpillars/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/moon/nutonomy_pointpillars/second/pytorch/models/voxelnet.py", line 472, in forward
    x = torch.cat([up1, up2, up3], dim=1)
RuntimeError: invalid argument 0: Sizes of tensors must match except in dimension 1. Got 252 and 250 in dimension 2 at /opt/conda/conda-bld/pytorch_1579027003190/work/aten/src/THC/generic/THCTensorMath.cu:71

How should I fix the proto file for multi-class training?
Has anyone succeeded in training with multiple classes?
Best regards.

train error

The following is what I encountered when training the model; does anyone else have the same error?

/root/anaconda3/envs/pointpillars/lib/python3.6/site-packages/numba/cuda/envvars.py:17: NumbaWarning:
Environment variables with the 'NUMBAPRO' prefix are deprecated and consequently ignored, found use of NUMBAPRO_CUDA_DRIVER=/usr/lib/x86_64-linux-gnu/libcuda.so.

For more information about alternatives visit: ('https://numba.pydata.org/numba-doc/latest/cuda/overview.html', '#cudatoolkit-lookup')
warnings.warn(errors.NumbaWarning(msg))
/root/anaconda3/envs/pointpillars/lib/python3.6/site-packages/numba/cuda/envvars.py:17: NumbaWarning:
Environment variables with the 'NUMBAPRO' prefix are deprecated and consequently ignored, found use of NUMBAPRO_NVVM=/usr/local/cuda/nvvm/lib64/libnvvm.so.

For more information about alternatives visit: ('https://numba.pydata.org/numba-doc/latest/cuda/overview.html', '#cudatoolkit-lookup')
warnings.warn(errors.NumbaWarning(msg))
/root/anaconda3/envs/pointpillars/lib/python3.6/site-packages/numba/cuda/envvars.py:17: NumbaWarning:
Environment variables with the 'NUMBAPRO' prefix are deprecated and consequently ignored, found use of NUMBAPRO_LIBDEVICE=/usr/local/cuda/nvvm/libdevice.

For more information about alternatives visit: ('https://numba.pydata.org/numba-doc/latest/cuda/overview.html', '#cudatoolkit-lookup')
warnings.warn(errors.NumbaWarning(msg))
Traceback (most recent call last):
  File "./pytorch/train.py", line 882, in <module>
    fire.Fire()
  File "/root/anaconda3/envs/pointpillars/lib/python3.6/site-packages/fire/core.py", line 141, in Fire
    component_trace = _Fire(component, args, parsed_flag_args, context, name)
  File "/root/anaconda3/envs/pointpillars/lib/python3.6/site-packages/fire/core.py", line 471, in _Fire
    target=component.__name__)
  File "/root/anaconda3/envs/pointpillars/lib/python3.6/site-packages/fire/core.py", line 681, in _CallAndUpdateTrace
    component = fn(*varargs, **kwargs)
  File "./pytorch/train.py", line 126, in train
    text_format.Merge(proto_str, config)
  File "/root/anaconda3/envs/pointpillars/lib/python3.6/site-packages/google/protobuf/text_format.py", line 735, in Merge
    allow_unknown_field=allow_unknown_field)
  File "/root/anaconda3/envs/pointpillars/lib/python3.6/site-packages/google/protobuf/text_format.py", line 803, in MergeLines
    return parser.MergeLines(lines, message)
  File "/root/anaconda3/envs/pointpillars/lib/python3.6/site-packages/google/protobuf/text_format.py", line 828, in MergeLines
    self._ParseOrMerge(lines, message)
  File "/root/anaconda3/envs/pointpillars/lib/python3.6/site-packages/google/protobuf/text_format.py", line 850, in _ParseOrMerge
    self._MergeField(tokenizer, message)
  File "/root/anaconda3/envs/pointpillars/lib/python3.6/site-packages/google/protobuf/text_format.py", line 980, in _MergeField
    merger(tokenizer, message, field)
  File "/root/anaconda3/envs/pointpillars/lib/python3.6/site-packages/google/protobuf/text_format.py", line 1055, in _MergeMessageField
    self._MergeField(tokenizer, sub_message)
  File "/root/anaconda3/envs/pointpillars/lib/python3.6/site-packages/google/protobuf/text_format.py", line 980, in _MergeField
    merger(tokenizer, message, field)
  File "/root/anaconda3/envs/pointpillars/lib/python3.6/site-packages/google/protobuf/text_format.py", line 1055, in _MergeMessageField
    self._MergeField(tokenizer, sub_message)
  File "/root/anaconda3/envs/pointpillars/lib/python3.6/site-packages/google/protobuf/text_format.py", line 980, in _MergeField
    merger(tokenizer, message, field)
  File "/root/anaconda3/envs/pointpillars/lib/python3.6/site-packages/google/protobuf/text_format.py", line 1055, in _MergeMessageField
    self._MergeField(tokenizer, sub_message)
  File "/root/anaconda3/envs/pointpillars/lib/python3.6/site-packages/google/protobuf/text_format.py", line 980, in _MergeField
    merger(tokenizer, message, field)
  File "/root/anaconda3/envs/pointpillars/lib/python3.6/site-packages/google/protobuf/text_format.py", line 1055, in _MergeMessageField
    self._MergeField(tokenizer, sub_message)
  File "/root/anaconda3/envs/pointpillars/lib/python3.6/site-packages/google/protobuf/text_format.py", line 947, in _MergeField
    (message_descriptor.full_name, name))
google.protobuf.text_format.ParseError: 151:9 : Message type "second.protos.LearningRate" has no field named "exponential_decay_learning_rate".

Changed the point_cloud_range; the model cannot converge

When training the model I use the Pandaset dataset, and I changed the point_cloud_range from X ∈ [-51.2, 51.2] to X ∈ [-100, 100]. The results show that it is hard to converge; even when predicting on the training set, only some of the objects are recalled. I also tried training the model on a very small training set (only 10 samples), and there were no bbox predictions on the training set. Why? Maybe the cls_loss (focal loss) function's parameters should be adjusted?

AssertionError: something write with training output size!

Hi, I followed your instructions step by step. After successfully creating the data, I started training xyres_16.proto. However, there is an error as follows:
  File "/root/train_pointpillars/nutonomy_pointpillars/second/pytorch/train.py", line 312, in train
    assert 10 == len(ret_dict), "something write with training output size!"
AssertionError: something write with training output size!

Any suggestion would be helpful! Thank you!

Error in training on my own datasets and kitti datasets

When I run train.py, the error shows up like this:

    voxel_features = voxel_features.permute(1, 0)
RuntimeError: number of dims don't match in permute

and

    self.padding, self.dilation, self.groups)
RuntimeError: Calculated padded input size per channel: (0 x 100). Kernel size: (1 x 1). Kernel size can't be greater than actual input size

Is there anything wrong with the input or the network?

ONNX format

Hello, and thanks a lot for your work, it was really helpful!

I can't manage to export the ONNX file in a version that is suitable for my use.
I need ONNX v3 and your program is exporting v4.

Do you know how I could change this? Thanks! :)

train on my own RGB-D dataset

I want to train PointPillars on an RGB-D dataset (for example, data collected by a RealSense camera) to do 3D detection of small objects like cups or boxes. Do you think this will work? Any other tips or suggestions are welcome, thanks.

VoxelNet state_dict error

Hi,
I trained a network using the original repository. I'm getting the following error when I use this repository to convert the model to ONNX. Do you have any idea what could be the reason? I can use my model for inference without any problem, but the conversion is giving me trouble.
TIA,

RuntimeError: Error(s) in loading state_dict for VoxelNet: Missing key(s) in state_dict: "voxel_feature_extractor.pfn_layers.0.conv1.weight", "voxel_feature_extractor.pfn_layers.0.conv1.bias", "voxel_feature_extractor.pfn_layers.0.conv2.weight", "voxel_feature_extractor.pfn_layers.0.conv2.bias", "voxel_feature_extractor.pfn_layers.0.t_conv.weight", "voxel_feature_extractor.pfn_layers.0.t_conv.bias", "voxel_feature_extractor.pfn_layers.0.conv3.weight", "voxel_feature_extractor.pfn_layers.0.conv3.bias".

Where and which model is used

The models I used in SECOND seem not to work. Can you tell me which model and related configuration are used in the example? Thanks.
