open-mmlab / mmdetection3d
OpenMMLab's next-generation platform for general 3D object detection.
Home Page: https://mmdetection3d.readthedocs.io/en/latest/
License: Apache License 2.0
For example, I want to use this library to detect objects in the Toronto-3D dataset, which has point-wise labels but is not ready for object detection. The data comes as PLY files. What do I need to do to get object detections from this data using this library?
Thanks in advance.
When I try to generate the test text files using `--format-only --options 'pklfile_prefix=./second_kitti_results' 'submission_prefix=./second_kitti_results'`, some files are always missing, so the total number of files is less than the official number, 7518. Are text files without prediction results not generated by default?
ObjectSample in the data pipeline has the option sample_2d
for pasting 2D image patches onto the current image. However, DataBaseSampler does not implement this feature.
Do you plan to support it?
Thanks!
First of all, thank you for providing a great framework.
I have a problem during inference through ROS using "hv_pointpillars_secfpn_6x8_160e_kitti-3d-car.py", so I would like to ask a question.
My point cloud range is currently as follows, but detection does not work properly when I modify the value.
How do I write a config to detect vehicles in 360 degrees?
If I use the default range config it works fine, as shown below, but I can only see 180 degrees.
As far as I can tell, the PointPillars model trained on nuScenes can cover 360 degrees.
Any help would be greatly appreciated.
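As a hedged sketch of the difference, not an official recipe: the KITTI PointPillars configs use a forward-facing point cloud range (x starts at 0), while the nuScenes-style configs use a range symmetric around the sensor, which is what gives 360-degree coverage. The numbers below are illustrative of those two styles and should be checked against your own config; also note that a KITTI-trained checkpoint would need retraining after such a change.

```python
# Sketch only: KITTI PointPillars uses a forward-facing range, so only
# the front ~180 degrees is covered.
kitti_range = [0, -39.68, -3, 69.12, 39.68, 1]      # x starts at 0

# nuScenes-style configs use a range symmetric in x and y, which is
# what enables full 360-degree detection:
surround_range = [-50.0, -50.0, -5.0, 50.0, 50.0, 3.0]
voxel_size = [0.25, 0.25, 8.0]  # pillar size; z spans the whole range

# Sanity check: the range must divide evenly into voxels.
grid_x = (surround_range[3] - surround_range[0]) / voxel_size[0]
grid_y = (surround_range[4] - surround_range[1]) / voxel_size[1]
assert grid_x.is_integer() and grid_y.is_integer()
print(int(grid_x), int(grid_y))  # 400 400
```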
I need a Docker image to package all the code.
Is there a way to define the batch size, in order to run on GPUs with less RAM?
I didn't find anything related in the config files.
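A possible answer, sketched as an assumption about the MMDetection-style config system this project inherits: the per-GPU batch size is `samples_per_gpu` inside the `data` dict, and lowering it reduces GPU memory usage.

```python
# Sketch of the relevant fragment of an MMDetection-style config
# (field names as used across the OpenMMLab config system):
data = dict(
    samples_per_gpu=2,   # batch size per GPU; lower this for GPUs with less RAM
    workers_per_gpu=2,   # dataloader worker processes per GPU
)

# The effective (total) batch size is samples_per_gpu * number_of_gpus.
effective_batch = data['samples_per_gpu'] * 8  # e.g. on 8 GPUs
```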
I also hit the same bug as #21:
File "/home/sl/.conda/envs/open-mmlab/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py", line 187, in _check_default_pg
"Default process group is not initialized"
AssertionError: Default process group is not initialized
python tools/train.py ./configs/pointpillars/hv_pointpillars_fpn_sbn-all_2x8_2x_lyft-3d.py
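One hedged guess at the cause, not a confirmed fix: configs with `sbn` (SyncBN) in their name assume distributed training, and `tools/train.py` launched directly never initializes the default process group. Launching through the distributed script (with 1 GPU as an example) does:

```shell
# dist_train.sh wraps torch.distributed.launch, which initializes the
# default process group that the assertion above complains about.
bash tools/dist_train.sh \
    configs/pointpillars/hv_pointpillars_fpn_sbn-all_2x8_2x_lyft-3d.py 1
```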
Dear All,
I am one of the main contributors to TorchPoints3d: https://github.com/nicolas-chaulet/torch-points3d
Your framework has a lot of synergies with ours, and I was wondering whether you would have any interest in collaborating.
We also have a Slack channel with a lot of experts in the point cloud field and would love for you to join.
My email address is thomas . chaton . ai @ gmail . com, without the spaces.
Best regards,
Thomas Chaton
Hi,
I noticed that the provided SECOND model reaches 79.07 Car mAP on the KITTI validation set, while the official number in its paper is 76.48. May I ask what the differences between the implementations are, e.g., the training schedule or the architecture?
Best,
Jianyuan
Thanks for your error report and we appreciate it a lot.
Checklist
Describe the bug
3DSSD is not working.
Reproduction
bash tools/dist_train.sh configs/3dssd/3dssd_kitti-3d-car.py 1
Did you make any modifications on the code or config? No.
What dataset did you use? KITTI.
Environment
sys.platform: linux
Python: 3.7.7 (default, Mar 23 2020, 22:36:06) [GCC 7.3.0]
CUDA available: True
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 10.1, V10.1.243
GPU 0: GeForce RTX 2080
GCC: gcc (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0
PyTorch: 1.5.0
PyTorch compiling details: PyTorch built with:
- GCC 7.3
- C++ Version: 201402
- Intel(R) Math Kernel Library Version 2020.0.0 Product Build 20191122 for Intel(R) 64 architecture applications
- Intel(R) MKL-DNN v0.21.1 (Git Hash 7d2fd500bc78936d1d648ca713b901012f470dbc)
- OpenMP 201511 (a.k.a. OpenMP 4.5)
- NNPACK is enabled
- CPU capability usage: AVX2
- CUDA Runtime 10.1
- NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_37,code=compute_37
- CuDNN 7.6.3
- Magma 2.5.2
- Build settings: BLAS=MKL, BUILD_TYPE=Release, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -fopenmp -DNDEBUG -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DUSE_INTERNAL_THREADPOOL_IMPL -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, USE_CUDA=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, USE_STATIC_DISPATCH=OFF,
TorchVision: 0.6.0a0+82fd1c8
OpenCV: 4.2.0
MMCV: 1.1.2
MMDetection: 2.4.0
MMDetection3D: 0.5.0+84efe00
MMDetection3D Compiler: GCC 7.4
MMDetection3D CUDA Compiler: 10.1
Error traceback
If applicable, paste the error traceback here.
2020-09-15 07:19:30,524 - mmdet - INFO - Start running, host: root@06957c40f73a, work_dir: /opt/project/work_dirs/3dssd_kitti-3d-car
2020-09-15 07:19:30,524 - mmdet - INFO - workflow: [('train', 1)], max: 150 epochs
Traceback (most recent call last):
File "tools/train.py", line 166, in <module>
main()
File "tools/train.py", line 162, in main
meta=meta)
File "/opt/conda/lib/python3.7/site-packages/mmdet/apis/train.py", line 143, in train_detector
runner.run(data_loaders, cfg.workflow, cfg.total_epochs)
File "/opt/conda/lib/python3.7/site-packages/mmcv/runner/epoch_based_runner.py", line 122, in run
epoch_runner(data_loaders[i], **kwargs)
File "/opt/conda/lib/python3.7/site-packages/mmcv/runner/epoch_based_runner.py", line 32, in train
**kwargs)
File "/opt/conda/lib/python3.7/site-packages/mmcv/parallel/distributed.py", line 36, in train_step
output = self.module.train_step(*inputs[0], **kwargs[0])
File "/opt/conda/lib/python3.7/site-packages/mmdet/models/detectors/base.py", line 234, in train_step
losses = self(**data)
File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/opt/project/mmdet3d/models/detectors/base.py", line 57, in forward
return self.forward_train(**kwargs)
File "/opt/project/mmdet3d/models/detectors/votenet.py", line 55, in forward_train
x = self.extract_feat(points_cat)
File "/opt/project/mmdet3d/models/detectors/single_stage.py", line 61, in extract_feat
x = self.backbone(points)
File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/opt/project/mmdet3d/models/backbones/pointnet2_sa_msg.py", line 146, in forward
sa_xyz[i], sa_features[i])
File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/opt/project/mmdet3d/ops/pointnet_modules/point_sa_module.py", line 162, in forward
new_features = self.groupers[i](points_xyz, new_xyz, features)
File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/opt/project/mmdet3d/ops/group_points/group_points.py", line 65, in forward
points_xyz, center_xyz)
File "/opt/project/mmdet3d/ops/ball_query/ball_query.py", line 38, in forward
sample_num, center_xyz, xyz, idx)
TypeError: ball_query_wrapper(): incompatible function arguments. The following argument types are supported:
1. (arg0: int, arg1: int, arg2: int, arg3: float, arg4: int, arg5: at::Tensor, arg6: at::Tensor, arg7: at::Tensor) -> int
Invoked with: 4, 16384, 4096, 0, 0.2, 32, tensor([...], device='cuda:0'), tensor([...], device='cuda:0'), tensor([...], device='cuda:0', dtype=torch.int32)
Traceback (most recent call last):
File "/opt/conda/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/opt/conda/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/opt/conda/lib/python3.7/site-packages/torch/distributed/launch.py", line 263, in <module>
main()
File "/opt/conda/lib/python3.7/site-packages/torch/distributed/launch.py", line 259, in main
cmd=cmd)
subprocess.CalledProcessError: Command '['/opt/conda/bin/python', '-u', 'tools/train.py', '--local_rank=0', 'configs/3dssd/3dssd_kitti-3d-car.py', '--launcher', 'pytorch']' returned non-zero exit status 1.
Bug fix
If you have already identified the reason, you can provide the information here. If you are willing to create a PR to fix it, please also leave a comment here and that would be much appreciated!
Dear friends, thanks for the nice codebase. Are there scripts for the above questions? Thanks.
Hi, I would like to know where to download mvx_faster_rcnn_detectron2-caffe_20e_coco-pretrain_gt-sample_kitti-3-class_moderate-79.3_20200207-a4a6a3c7.pth, which is referenced in the MVX-Net config.
Hi,
Thanks for your work. I completed the earlier installation steps successfully (e.g., installing mmcv). However, when I try to compile mmdet3d with 'pip install -v -e .', I encounter an error related to spconv. It looks like nvcc is not correctly picked up, although I have checked that nvcc works using 'nvcc -V'.
My environment: CUDA 10.1, PyTorch 1.5.0, Python 3.7, GCC 5.3. Any suggestion is welcome.
Best,
Jianyuan
Hello, when I train PointPillars, everything is OK.
However, when I train models that use spconv, such as SECOND, MVX-Net, or Part-A2, I get an error:
......
File "/work/mmdet3d/ops/spconv/modules.py", line 130, in forward
input = module(input)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 532, in call
result = self.forward(*input, **kwargs)
File "/work/mmdet3d/ops/spconv/conv.py", line 168, in forward
grid=input.grid)
File "/work/mmdet3d/ops/spconv/ops.py", line 94, in get_indice_pairs
int(transpose))
RuntimeError: mmdet3d/ops/spconv/src/indice_cuda.cu 124
cuda execution failed with error 2
Could you help me with this problem? Thanks very much!
Environment
Ubuntu 16.04, TITAN V, driver 418.56, CUDA 10.1, cuDNN 7
Python 3.6, torch 1.4.0, torchvision 0.5.0, mmcv-full 1.0.5
Describe the bug
I followed install.md and ran pip install -v -e .
without errors.
However, when I train models that use spconv, I get an error. It seems something is wrong with my spconv build.
Reproduction
Environment
python mmdet/utils/collect_env.py
to collect necessary environment information and paste it here.
TorchVision: 0.6.0a0+82fd1c8
OpenCV: 4.3.0
MMCV: 1.0.2
MMDetection: 2.3.0rc0+af33f11
MMDetection Compiler: GCC 7.3
MMDetection CUDA Compiler: 10.2`
Error traceback (identical tracebacks from several processes, deduplicated):
File "/home/xx/anaconda3/envs/jio2pt1.5/lib/python3.6/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/home/xx/fei2_workspace/mmdetection3d/tools/../mmdet3d/ops/spconv/conv.py", line 168, in forward
grid=input.grid)
File "/home/xx/fei2_workspace/mmdetection3d/tools/../mmdet3d/ops/spconv/ops.py", line 94, in get_indice_pairs
int(transpose))
RuntimeError: /home/xx/fei2_workspace/mmdetection3d/mmdet3d/ops/spconv/src/indice_cuda.cu 124
cuda execution failed with error 2
Hi, author.
I am confused about how, when I use a multi-modal 3D detector, mmdetection3d allocates the data to the corresponding architecture, i.e., RGB images to the 2D backbone and points to the 3D backbone. What part of the code can I refer to?
BTW, could you consider introducing an implementation of pseudo signals, i.e., pseudo-LiDAR? I think the idea of fusing multiple modalities would be great, and the data modalities could be varied as well.
Thanks in advance for your help.
Hi,
I would like to fine-tune a model on my own dataset. The tutorials for doing so seem to cover only 2D images.
Are you planning to include tutorials for 3D as well?
Thanks
MVX-Net fuses multi-modal data based on VoxelNet, and the 3D CNN in VoxelNet is an ordinary 3D convolution, not sparse convolution. Your implementation of MVX-Net uses the sparse convolution proposed by SECOND; could this be inconsistent with the meaning of the original paper? Thank you for your reply.
Hi, nice work!
I am wondering what the differences are between mmdetection3d and OpenPCDet. They are both about point cloud detection and both under open-mmlab.
Thanks for your work. I'm trying to run pcd_demo.py
exactly as described in Getting Started, using the KITTI config and the latest SECOND KITTI-3D-Car checkpoint that I got here.
I run it using the following shell command:
python demo/pcd_demo.py demo/kitti_000008.bin \
configs/second/hv_second_secfpn_6x8_80e_kitti-3d-car.py \
checkpoints/hv_second_secfpn_6x8_80e_kitti-3d-car_20200620_230238-393f000c.pth
Result:
Traceback (most recent call last):
File "demo/pcd_demo.py", line 28, in <module>
main()
File "demo/pcd_demo.py", line 24, in main
show_result_meshlab(data, result, args.out_dir)
File "/home/msa/Documents/repos/mmdetection3d/mmdetection3d/mmdet3d/apis/inference.py", line 102, in show_result_meshlab
pred_bboxes = result['boxes_3d'].tensor.numpy()
TypeError: list indices must be integers or slices, not str
I also get exactly the same result if I run the demo with the PointPillars-3D-Car config/checkpoint.
Environment
python mmdet3d/utils/collect_env.py
output:
sys.platform: linux
Python: 3.7.9 (default, Aug 31 2020, 12:42:55) [GCC 7.3.0]
CUDA available: True
GPU 0: Quadro M2200
CUDA_HOME: /usr
NVCC: Cuda compilation tools, release 10.1, V10.1.243
GCC: gcc (Ubuntu 9.3.0-10ubuntu2) 9.3.0
PyTorch: 1.6.0
PyTorch compiling details: PyTorch built with:
- GCC 7.3
- C++ Version: 201402
- Intel(R) Math Kernel Library Version 2020.0.2 Product Build 20200624 for Intel(R) 64 architecture applications
- Intel(R) MKL-DNN v1.5.0 (Git Hash e2ac1fac44c5078ca927cb9b90e1b3066a0b2ed0)
- OpenMP 201511 (a.k.a. OpenMP 4.5)
- NNPACK is enabled
- CPU capability usage: AVX2
- CUDA Runtime 10.2
- NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_37,code=compute_37
- CuDNN 7.6.5
- Magma 2.5.2
- Build settings: BLAS=MKL, BUILD_TYPE=Release, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DUSE_VULKAN_WRAPPER -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, USE_CUDA=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, USE_STATIC_DISPATCH=OFF,
TorchVision: 0.7.0
OpenCV: 4.4.0
MMCV: 1.1.2
MMCV Compiler: GCC 9.3
MMCV CUDA Compiler: 10.1
MMDetection: 2.4.0
MMDetection3D: 0.5.0+1312430
pip list | grep mmdet
outputs
mmdet 2.4.0
mmdet3d 0.5.0
I followed get_started.md, but when I run the last command, pip install -v -e .
, I get the following error.
I have already run
git clone https://github.com/open-mmlab/mmdetection3d.git
cd mmdetection3d
pip install -r requirements/build.txt
Error:
ERROR: Cannot uninstall 'llvmlite'. It is a distutils installed project and thus we cannot accurately determine which files belong to it which would lead to only a partial uninstall.
Hi, I want to know whether there is an implementation of PointNet in this project.
Does PyTorch 1.4 support TensorBoard?
Can TensorBoard show the network structure?
I want to get the network structure of SECOND or PointPillars.
Can you help me?
Hi! I only have PCD files of outdoor scenes (buildings). Can you give me some directions on how to start with mmdetection3d? Please.
For the config file: hv_pointpillars_fpn_sbn-all_free-anchor_4x8_2x_nus-3d.py
The reported result is mAP 43.7 / NDS 55.3.
However, our reproduced result is only mAP 14.9 / NDS 27.6.
The only change (because we don't have sufficient memory) is:
samples_per_gpu=4
->
samples_per_gpu=2
So, does the smaller batch size decrease the accuracy?
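A smaller batch size alone usually costs some accuracy, but a drop this large often comes from not rescaling the learning rate. As a hedged sketch (the linear scaling rule, a common convention rather than something this config documents), the LR should shrink in proportion to the effective batch size:

```python
# Linear scaling rule: lr is proportional to the effective batch size
# (samples_per_gpu * num_gpus). Numbers here are illustrative only.
base_lr = 1e-3            # assumed lr tuned for samples_per_gpu=4 on 8 GPUs
base_batch = 4 * 8        # original effective batch size
new_batch = 2 * 8         # after lowering samples_per_gpu to 2
scaled_lr = base_lr * new_batch / base_batch
print(scaled_lr)  # 0.0005
```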
Cannot use the CosineAnealing policy in lr_config. Hope for your help!
2020-08-06 15:21:45,303 - mmdet - INFO - Environment info:
------------------------------------------------------------
sys.platform: linux
Python: 3.7.8 | packaged by conda-forge | (default, Jul 31 2020, 02:25:08) [GCC 7.5.0]
CUDA available: True
CUDA_HOME: /usr/local/cuda-10.0
NVCC: Cuda compilation tools, release 10.0, V10.0.130
GPU 0,1: TITAN X (Pascal)
GCC: gcc (Ubuntu 5.4.0-6ubuntu1~16.04.12) 5.4.0 20160609
PyTorch: 1.4.0
PyTorch compiling details: PyTorch built with:
- GCC 7.3
- Intel(R) Math Kernel Library Version 2020.0.1 Product Build 20200208 for Intel(R) 64 architecture applications
- Intel(R) MKL-DNN v0.21.1 (Git Hash 7d2fd500bc78936d1d648ca713b901012f470dbc)
- OpenMP 201511 (a.k.a. OpenMP 4.5)
- NNPACK is enabled
- CUDA Runtime 10.0
- NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_37,code=compute_37
- CuDNN 7.6.3
- Magma 2.5.1
- Build settings: BLAS=MKL, BUILD_NAMEDTENSOR=OFF, BUILD_TYPE=Release, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -fopenmp -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -O2 -fPIC -Wno-narrowing -Wall -Wextra -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Wno-stringop-overflow, DISABLE_NUMA=1, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, USE_CUDA=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, USE_STATIC_DISPATCH=OFF,
TorchVision: 0.5.0
OpenCV: 4.3.0
MMCV: 1.0.5
MMDetection: 2.3.0
MMDetection3D: 0.5.0+6526459
MMDetection3D Compiler: GCC 5.4
MMDetection3D CUDA Compiler: 10.0
CUDA_VISIBLE_DEVICES=2,3 PORT=29510 tools/dist_train.sh configs/mvxnet/dv_mvx-fpn_second_secfpn_adamw_2x8_80e_kitti-3d-3class.py 2
Traceback (most recent call last):
File "tools/train.py", line 166, in <module>
main()
File "tools/train.py", line 162, in main
meta=meta)
File "~/proj/3det/mmdetection/mmdet/apis/train.py", line 108, in train_detector
cfg.get('momentum_config', None))
File "~/proj/3det/mmcv/mmcv/runner/base_runner.py", line 412, in register_training_hooks
self.register_lr_hook(lr_config)
File "~/proj/3det/mmcv/mmcv/runner/base_runner.py", line 342, in register_lr_hook
hook = mmcv.build_from_cfg(lr_config, HOOKS)
File "~/proj/3det/mmcv/mmcv/utils/registry.py", line 157, in build_from_cfg
f'{obj_type} is not in the {registry.name} registry')
KeyError: 'CosineAnealingLrUpdaterHook is not in the hook registry'
Traceback (most recent call last):
File "~/anaconda3/envs/mm3det/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "~/anaconda3/envs/mm3det/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "~/anaconda3/envs/mm3det/lib/python3.7/site-packages/torch/distributed/launch.py", line 263, in <module>
main()
File "~/anaconda3/envs/mm3det/lib/python3.7/site-packages/torch/distributed/launch.py", line 259, in main
cmd=cmd)
subprocess.CalledProcessError: Command '['~/anaconda3/envs/mm3det/bin/python', '-u', 'tools/train.py', '--local_rank=1', 'configs/mvxnet/dv_mvx-fpn_second_secfpn_adamw_2x8_80e_kitti-3d-3class.py', '--launcher', 'pytorch']' returned non-zero exit status 1.
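For what it's worth, the KeyError above is consistent with a misspelled policy name: mmcv registers the hook as CosineAnnealingLrUpdaterHook (double "n"), so policy='CosineAnealing' cannot be found in the registry. A sketch of the corrected fragment (the warmup values are illustrative, not taken from this config):

```python
lr_config = dict(
    policy='CosineAnnealing',  # note the spelling: "Annealing"
    warmup='linear',           # illustrative warmup settings
    warmup_iters=1000,
    warmup_ratio=0.1,
    min_lr_ratio=1e-5,
)
```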
Describe the feature
The script or code is required to visualize the validation and test result on image and LiDAR, like mayavi, open3d, etc.
Motivation
A clear and concise description of the motivation of the feature.
Ex1. It is inconvenient when [....].
Ex2. There is a recent paper [....], which is very helpful for [....].
It is intuitive that we want to observe the detection performance when testing the model, I think.
Related resources
If there is an official code release or third-party implementations, please also provide the information here, which would be very helpful.
Sorry, I don't have a good idea. When reading other references, I envy the beautiful figures attached in the papers but don't know how to produce them.
Additional context
Add any other context or screenshots about the feature request here.
If you would like to implement the feature and create a PR, please leave a comment here and that would be much appreciated.
No, thanks! The aim of this issue is that I want to ask about when you will release the code.
Thanks again for your great works!
Thanks for your error report and we appreciate it a lot.
Checklist
Describe the bug
Training on a single GPU: when using the default GPU (gpu:0), everything is OK.
After switching to gpu:1, "an illegal memory access was encountered" is reported at mmdet3d/ops/iou3d/src/iou3d.cpp:121
during inference; training, however, is OK.
Reproduction
python tools/train.py CONFIG_PATH --gpu-ids 1
Environment
python mmdet3d/utils/collect_env.py
to collect necessary environment information and paste it here (plus any related environment variables, such as $PATH, $LD_LIBRARY_PATH, $PYTHONPATH, etc.).
Error traceback
If applicable, paste the error traceback here.
A placeholder for the traceback.
Bug fix
If you have already identified the reason, you can provide the information here. If you are willing to create a PR to fix it, please also leave a comment here and that would be much appreciated!
Hi, thanks for the work. I am trying to use mmdet3d to detect objects in point clouds produced by vSLAM, without camera images. How could I train the model? Are there any suggestions? I find that tools/create_data.py has many functions that rely on the camera 2D boxes.
Thanks
We keep this issue open to collect feature requests from users and hear your voice. Our monthly release plan is also available here.
You can either:
Hi,
Following the instructions, I am trying to download the SECOND pretrained model to test on the KITTI dataset. However, I cannot download it by simply replacing the AWS URL with the Aliyun URL.
Are there any suggestions?
I modified it to:
https://open-mmlab.oss-cn-beijing.aliyuncs.com/mmdetection3d/v0.1.0_models/second/hv_second_secfpn_6x8_80e_kitti-3d-car/hv_second_secfpn_6x8_80e_kitti-3d-car_20200620_230238-393f000c.pth
(failed to download)
Thanks
Thanks for the great project.
While it says that the project is released under the Apache license, the LICENSE file is missing.
Hi,
The documentation has a problem.
Under "Point cloud demo" in the documentation, the correct command line is:
python demo/pcd_demo.py demo/kitti_000008.bin configs/second/hv_second_secfpn_6x8_80e_kitti-3d-car.py checkpoints/hv_second_secfpn_6x8_80e_kitti-3d-car_20200620_230238-393f000c.pth
(currently it says to use hv_second_secfpn_6x8_80e_kitti-3d-3class_20200620_230238-9208083a.pth, which is not the right one).
Then I also tried a point cloud PLY file (SUN3D dataset):
python demo/pcd_demo.py demo/cloud_bin_109.ply configs/votenet/votenet_16x8_sunrgbd-3d-10class.py checkpoints/votenet_16x8_sunrgbd-3d-10class_20200620_230238-4483c0c0.pth
I got this error:
warnings.warn('ConvModule has norm and bias at the same time')
Traceback (most recent call last):
File "demo/pcd_demo.py", line 28, in <module>
main()
File "demo/pcd_demo.py", line 22, in main
result, data = inference_detector(model, args.pcd)
File "/mnt/disk_1tb/research/mmdetection3d/mmdet3d/apis/inference.py", line 73, in inference_detector
data = test_pipeline(data)
File "/mnt/disk_1tb/research/mmdetection3d/venv/lib/python3.6/site-packages/mmdet/datasets/pipelines/compose.py", line 40, in __call__
data = t(data)
File "/mnt/disk_1tb/research/mmdetection3d/mmdet3d/datasets/pipelines/loading.py", line 306, in __call__
points = self._load_points(pts_filename)
File "/mnt/disk_1tb/research/mmdetection3d/mmdet3d/datasets/pipelines/loading.py", line 284, in _load_points
points = np.frombuffer(pts_bytes, dtype=np.float32)
ValueError: buffer size must be a multiple of element size
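The ValueError is consistent with the loader treating the file as a raw float32 dump: the KITTI-style `.bin` format is just N x 4 float32 values (x, y, z, intensity), which a PLY header breaks. A minimal sketch of that contract (pure NumPy; converting a real PLY would additionally need a PLY reader, which is not shown here):

```python
import os
import tempfile

import numpy as np

# Write 100 points in the raw N x 4 float32 layout the loader expects.
pts = np.random.rand(100, 4).astype(np.float32)
path = os.path.join(tempfile.mkdtemp(), "000000.bin")
pts.tofile(path)

# The loader essentially does np.frombuffer / np.fromfile on raw bytes;
# any extra header bytes make the size a non-multiple of 4 floats.
loaded = np.fromfile(path, dtype=np.float32).reshape(-1, 4)
assert np.array_equal(loaded, pts)
```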
Thank you
mmdetection3d/configs/_base_/datasets/nus-3d.py
Lines 35 to 38 in a5daf20
As titled: from the README it seems that OpenPCDet is, or will be, a subset of mmdetection3d. Will you eventually merge the two repos, or will they support different features? Thanks!
Hi,
Where does create_data.py
expect the KITTI dataset to be stored? Does it need to be arranged in a specific folder structure? Is it necessary to download all of KITTI? The docs are not very clear. From what I understand, the data needs to be split into training and testing folders before running the prepare script, but I thought that is what the script was supposed to do for me.
Running python tools/create_data.py kitti --root-path ./data/kitti --out-dir ./data/kitti --extra-tag kitti
with the dataset in data/kitti/2011_09_26/
results in
ValueError: file not exist: training/velodyne/000000.bin
but the KITTI raw dataset never includes a training
folder.
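For reference, and hedged as my reading of the data-preparation docs rather than a guarantee: create_data.py expects the KITTI 3D object detection benchmark download (not the raw 2011_09_26 drives), already arranged under --root-path like this:

```shell
# Expected layout under data/kitti (object benchmark, not raw drives):
mkdir -p data/kitti/ImageSets
mkdir -p data/kitti/training/calib data/kitti/training/image_2
mkdir -p data/kitti/training/label_2 data/kitti/training/velodyne
mkdir -p data/kitti/testing/calib data/kitti/testing/image_2
mkdir -p data/kitti/testing/velodyne
```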
I have installed mmdet and mmdet3d according to the instructions, but I encountered the following problem when preparing the KITTI data.
I run this line:
$ python tools/create_data.py kitti --root-path ./data/kitti --out-dir ./data/kitti --extra-tag kitti
And I got this error:
assert pycocotools.__version__ >= '12.0.2'
AttributeError: module 'pycocotools' has no attribute '__version__'
I can't find a solution on the Internet, I hope that you could give me a hint.
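One workaround that was commonly suggested at the time, offered as a hint rather than an official fix: mmdet of this era checked for the OpenMMLab fork of pycocotools, which does expose `__version__`, so replacing the stock package with the fork clears the assertion:

```shell
pip uninstall -y pycocotools
pip install mmpycocotools   # the OpenMMLab fork that defines __version__
```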
Hi, authors, thanks for your great work.
I want to evaluate my model on the KITTI test data; how can I get the txt-format files to submit to the official KITTI website for evaluation?
Also, during training and validation I cannot intuitively see how well my model performs; it just generates the pkl result file. Is there any code provided for visualization, other than MeshLab?
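A hedged sketch for the first question, mirroring the `--format-only --options 'pklfile_prefix=...' 'submission_prefix=...'` usage quoted earlier on this page: tools/test.py with those flags writes KITTI-format txt files under submission_prefix. Config and checkpoint paths below are placeholders:

```shell
python tools/test.py \
    configs/second/hv_second_secfpn_6x8_80e_kitti-3d-car.py \
    work_dirs/second/latest.pth \
    --format-only \
    --options 'pklfile_prefix=./kitti_results' 'submission_prefix=./kitti_results'
```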
When I try to train with this command:
python tools/train.py configs/pointpillars/hv_pointpillars_secfpn_6x8_160e_kitti-3d-3class.py
I get the error below:
{'image_idx': '006740', 'image_shape': array([ 375, 1242], dtype=int32)}
Traceback (most recent call last):
  File "tools/train.py", line 166, in <module>
    main()
  File "tools/train.py", line 155, in main
    train_detector(
  File "/home/john/mmdetection/mmdet/apis/train.py", line 143, in train_detector
    runner.run(data_loaders, cfg.workflow, cfg.total_epochs)
  File "/home/john/anaconda3/lib/python3.8/site-packages/mmcv/runner/epoch_based_runner.py", line 122, in run
    epoch_runner(data_loaders[i], **kwargs)
  File "/home/john/anaconda3/lib/python3.8/site-packages/mmcv/runner/epoch_based_runner.py", line 27, in train
    for i, data_batch in enumerate(data_loader):
  File "/home/john/anaconda3/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 363, in __next__
    data = self._next_data()
  File "/home/john/anaconda3/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 989, in _next_data
    return self._process_data(data)
  File "/home/john/anaconda3/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1014, in _process_data
    data.reraise()
  File "/home/john/anaconda3/lib/python3.8/site-packages/torch/_utils.py", line 395, in reraise
    raise self.exc_type(msg)
KeyError: Caught KeyError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/home/john/anaconda3/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 185, in _worker_loop
    data = fetcher.fetch(index)
  File "/home/john/anaconda3/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/john/anaconda3/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/john/mmdetection/mmdet/datasets/dataset_wrappers.py", line 78, in __getitem__
    return self.dataset[idx % self._ori_len]
  File "/home/john/mmdetection3d/mmdet3d/datasets/custom_3d.py", line 293, in __getitem__
    data = self.prepare_train_data(idx)
  File "/home/john/mmdetection3d/mmdet3d/datasets/custom_3d.py", line 146, in prepare_train_data
    input_dict = self.get_data_info(index)
  File "/home/john/mmdetection3d/mmdet3d/datasets/kitti_dataset.py", line 111, in get_data_info
    info['image']['image_path'])
KeyError: 'image_path'
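A KeyError on 'image_path' at kitti_dataset.py line 111 means the entries in the generated kitti_infos_*.pkl lack the ['image']['image_path'] key that get_data_info reads, so the info files are worth inspecting. A minimal sketch, where the pkl path is a placeholder and a synthetic list stands in when it is absent:

```python
import os
import pickle

INFO_PKL = "data/kitti/kitti_infos_train.pkl"  # placeholder path

def entries_missing_image_path(infos):
    """Indices of info dicts lacking the ['image']['image_path'] key
    that kitti_dataset.get_data_info reads."""
    return [i for i, info in enumerate(infos)
            if "image_path" not in info.get("image", {})]

if os.path.exists(INFO_PKL):
    with open(INFO_PKL, "rb") as f:
        infos = pickle.load(f)
else:  # synthetic stand-in so the sketch runs anywhere
    infos = [{"image": {"image_path": "training/image_2/000000.png"}},
             {"image": {}}]

print(entries_missing_image_path(infos))
```

A non-empty result suggests regenerating the infos with create_data.py after fixing the dataset layout.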
Hello, this code can visualize the 3D detection boxes in the point cloud; can the 3D detection boxes also be visualized on the 2D image?
Hi, I have got exactly the same error as described and solved in #115, but now for test.py.
Running this command:
python tools/test.py \
configs/second/hv_second_secfpn_6x8_80e_kitti-3d-car.py \
checkpoints/hv_second_secfpn_6x8_80e_kitti-3d-car_20200620_230238-393f000c.pth \
--show --show-dir data/kitti
results in:
File "tools/test.py", line 152, in <module>
main()
File "tools/test.py", line 130, in main
outputs = single_gpu_test(model, data_loader, args.show, args.show_dir)
File "/home/msa/Documents/repos/mmdetection3d/mmdetection3d/mmdet3d/apis/test.py", line 32, in single_gpu_test
model.module.show_results(data, result, out_dir)
File "/home/msa/Documents/repos/mmdetection3d/mmdetection3d/mmdet3d/models/detectors/base.py", line 89, in show_results
pred_bboxes = copy.deepcopy(result['boxes_3d'].tensor.numpy())
TypeError: list indices must be integers or slices, not str
Environment and setup are exactly as in #115, but with the fixes committed in #116 pulled.
Applying the same fix as in pull request #116 solves the issue:
changed to:
pred_bboxes = copy.deepcopy(result[0]['boxes_3d'].tensor.numpy())
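For illustration, the cause mirrors #115: single_gpu_test collects results as a list of per-sample dicts, so indexing the list with a string raises the TypeError, while indexing the first element, as #116 does, works. A self-contained stand-in:

```python
# single_gpu_test returns a list with one result dict per sample;
# the string below is a placeholder for a LiDARInstance3DBoxes object.
result = [{"boxes_3d": "placeholder boxes"}]

try:
    result["boxes_3d"]          # the pre-#116 access in show_results
except TypeError as err:
    print(err)                  # list indices must be integers or slices, not str

print(result[0]["boxes_3d"])    # the fixed access from #116
```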
A pre-trained model for the lyft dataset would be much appreciated. Thank you!
Hi, you have provided dv_mvx-fpn_second_secfpn_adamw_2x8_80e_kitti-3d-3class_20200621_003904-10140f2d.pth, but do you have the pretrained model MVXFasterRCNN with hard voxelization? Could you please provide a copy?