
PDV's Issues

RuntimeError: The expanded size of the tensor (8) must match the existing size (10) at non-singleton dimension 1. Target sizes: [128, 8]. Tensor sizes: [128, 10]

When I train PDV on the Waymo dataset, I hit this error.
The config file is "cfgs/waymo_models/pdv.yaml". I have already spent a lot of time on this.

error details:
Traceback (most recent call last):
File "train.py", line 232, in
main()
File "train.py", line 176, in main
train_model(
File "/home/rcvlab/lly/PDV/tools/train_utils/train_utils.py", line 86, in train_model
accumulated_iter = train_one_epoch(
File "/home/rcvlab/lly/PDV/tools/train_utils/train_utils.py", line 38, in train_one_epoch
loss, tb_dict, disp_dict = model_func(model, batch)
File "/home/rcvlab/lly/PDV/pcdet/models/init.py", line 42, in model_func
ret_dict, tb_dict, disp_dict = model(batch_dict)
File "/home/rcvlab/anaconda3/envs/lly_torch17/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/home/rcvlab/lly/PDV/pcdet/models/detectors/pdv.py", line 11, in forward
batch_dict = cur_module(batch_dict)
File "/home/rcvlab/anaconda3/envs/lly_torch17/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/home/rcvlab/lly/PDV/pcdet/models/roi_heads/pdv_head.py", line 239, in forward
targets_dict = self.assign_targets(batch_dict)
File "/home/rcvlab/lly/PDV/pcdet/models/roi_heads/roi_head_template.py", line 110, in assign_targets
targets_dict = self.proposal_target_layer.forward(batch_dict)
File "/home/rcvlab/lly/PDV/pcdet/models/roi_heads/target_assigner/proposal_target_layer.py", line 32, in forward
batch_rois, batch_gt_of_rois, batch_roi_ious, batch_roi_scores, batch_roi_labels = self.sample_rois_for_rcnn(
File "/home/rcvlab/lly/PDV/pcdet/models/roi_heads/target_assigner/proposal_target_layer.py", line 113, in sample_rois_for_rcnn
batch_gt_of_rois[index] = cur_gt[gt_assignment[sampled_inds]]
RuntimeError: The expanded size of the tensor (8) must match the existing size (10) at non-singleton dimension 1. Target sizes: [128, 8]. Tensor sizes: [128, 10]
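The traceback shows that batch_gt_of_rois is pre-allocated with 8 columns (7 box parameters plus a class label) while the ground-truth boxes loaded for this batch carry 10, so the per-index assignment cannot broadcast. A minimal NumPy sketch of the clash and one possible workaround; the column layout assumed in the comments is a guess that should be checked against your preprocessed Waymo infos:

```python
import numpy as np

# Sketch (NumPy standing in for the torch tensors) of the shape clash in
# sample_rois_for_rcnn: the target buffer expects code_size + 1 = 8 columns
# (7 box parameters + class label), but the GT boxes here carry 10, so
# batch_gt_of_rois[index] = cur_gt[gt_assignment[sampled_inds]]
# cannot broadcast [128, 10] into [128, 8].
batch_gt_of_rois = np.zeros((128, 8))
cur_gt = np.random.randn(40, 10)            # GT boxes with 2 unexpected extra columns
gt_assignment = np.random.randint(0, 40, size=128)

try:
    batch_gt_of_rois[:] = cur_gt[gt_assignment]   # reproduces the broadcast failure
except ValueError as e:
    print("shape mismatch:", e)

# ASSUMPTION (verify against your infos): the first 7 columns are the box
# parameters and the last is the class label; if so, trimming the extra
# middle columns makes the assignment succeed.
cur_gt_trimmed = np.concatenate([cur_gt[:, :7], cur_gt[:, -1:]], axis=-1)
batch_gt_of_rois[:] = cur_gt_trimmed[gt_assignment]
print(batch_gt_of_rois.shape)  # (128, 8)
```

If the trimmed columns matter for your setup, the cleaner fix is in the dataset pipeline, so that the GT boxes reach the ROI head with the expected 8 columns in the first place.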

About full validation results on the uploaded PDV weights

Hi, nice work, and thank you for sharing the code and a trained checkpoint.

Despite your kind walkthrough, I'm having difficulty running your code with the provided model-147M weights.
I ran into some version issues with spconv, PyTorch, and pcdet, and then some parameter-loading failures. (I was able to train the model from scratch.)

Could you tell me the full evaluation results I should expect when running your model?
Sharing your evaluation log (as below) would be more than sufficient.

INFO  Generate label finished(sec_per_example: 0.0109 second).
INFO  recall_roi_0.3: xxxxxxx
INFO  recall_rcnn_0.3: xxxxxxx
INFO  recall_roi_0.5: xxxxxxx
INFO  recall_rcnn_0.5: xxxxxxx
INFO  recall_roi_0.7: xxxxxxx
INFO  recall_rcnn_0.7: xxxxxxx
INFO  Average predicted number of objects(3769 samples): xxxxxxx
INFO  Car AP@0.70, 0.70, 0.70:
bbox AP:xxxxxxx, xxxxxxx, xxxxxxx
bev  AP:xxxxxxx, xxxxxxx, xxxxxxx
3d   AP:xxxxxxx, xxxxxxx, xxxxxxx
aos  AP:xxxxxxx, xxxxxxx, xxxxxxx
Car AP_R40@0.70, 0.70, 0.70:
bbox AP:xxxxxxx, xxxxxxx, xxxxxxx
bev  AP:xxxxxxx, xxxxxxx, xxxxxxx
3d   AP:xxxxxxx, 85.05, xxxxxxx
aos  AP:xxxxxxx, xxxxxxx, xxxxxxx
...

Thank you!

codes running issues

Hi, I tried to run your source code, but the following error occurred: "AttributeError: 'Tensor' object has no attribute 'isnan'".
The failing line is "points_out_of_range = ((xyz_local_grid < 0) | (xyz_local_grid >= grid_size) | (xyz_local_grid.isnan())).any(-1).flatten()" in "pcdet/utils/density_utils.py".

What is the cause, and how can I fix it?
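For what it's worth, the method form Tensor.isnan() was only added in relatively recent PyTorch releases, while the functional form torch.isnan() has existed for much longer. A minimal sketch of the workaround for the failing line, assuming only the method form is missing in your build:

```python
import torch

# Small stand-in grid: one point with a NaN coordinate, one point out of range.
xyz_local_grid = torch.tensor([[0.5, float('nan')],
                               [2.0, 1.0]])
grid_size = 2

# Same logic as the failing line in pcdet/utils/density_utils.py, but using
# the functional torch.isnan() instead of the Tensor.isnan() method.
points_out_of_range = (
    (xyz_local_grid < 0)
    | (xyz_local_grid >= grid_size)
    | torch.isnan(xyz_local_grid)      # was: xyz_local_grid.isnan()
).any(-1).flatten()
print(points_out_of_range)  # tensor([True, True])
```

Alternatively, upgrading PyTorch to a version that provides the method would avoid touching the source at all.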

The referenced results of PVRCNN(++)

Hi,
Nice work!
May I ask where the results of PVRCNN and PVRCNN++ come from? The numbers look a little strange to me, since I cannot find them in the original PVRCNN(++) papers.

About the cyclist on KITTI test set

Dear author, could you tell me how to reproduce the cyclist and pedestrian results in your paper? Did you train each class separately, or all 3 classes together in a single training run?

The results on the KITTI val.

Hi author!
I have a question about PDV's results on the KITTI val set. The 3D detection results at moderate difficulty for the three classes (car, pedestrian, cyclist) are listed as 85.05, 57.41, 75.95 on GitHub, but the paper reports 85.29, 60.80, 74.23.
They are different! Could you explain the difference between these two sets of results?
Thanks!

ImportError: cannot import name 'VoxelGenerator' from 'spconv.utils'

When I run "python -m pcdet.datasets.kitti.kitti_dataset create_kitti_infos tools/cfgs/dataset_configs/kitti_dataset.yaml", I get the following error:
/mnt/Anaconda3/envs/PDV/lib/python3.8/runpy.py:127: RuntimeWarning: 'pcdet.datasets.kitti.kitti_dataset' found in sys.modules after import of package 'pcdet.datasets.kitti', but prior to execution of 'pcdet.datasets.kitti.kitti_dataset'; this may result in unpredictable behaviour
warn(RuntimeWarning(msg))
Traceback (most recent call last):
File "/mnt/wg/PDV/pcdet/datasets/processor/data_processor.py", line 50, in transform_points_to_voxels
from spconv.utils import VoxelGeneratorV2 as VoxelGenerator
ImportError: cannot import name 'VoxelGeneratorV2' from 'spconv.utils' (/mnt/Anaconda3/envs/PDV/lib/python3.8/site-packages/spconv/utils/__init__.py)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/mnt/Anaconda3/envs/PDV/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/mnt/Anaconda3/envs/PDV/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/mnt/wg/PDV/pcdet/datasets/kitti/kitti_dataset.py", line 501, in
create_kitti_infos(
File "/mnt/wg/PDV/pcdet/datasets/kitti/kitti_dataset.py", line 454, in create_kitti_infos
dataset = KittiDataset(dataset_cfg=dataset_cfg, class_names=class_names, root_path=data_path, training=False)
File "/mnt/wg/PDV/pcdet/datasets/kitti/kitti_dataset.py", line 23, in init
super().init(
File "/mnt/wg/PDV/pcdet/datasets/dataset.py", line 33, in init
self.data_processor = DataProcessor(
File "/mnt/wg/PDV/pcdet/datasets/processor/data_processor.py", line 17, in init
cur_processor = getattr(self, cur_cfg.NAME)(config=cur_cfg)
File "/mnt/wg/PDV/pcdet/datasets/processor/data_processor.py", line 52, in transform_points_to_voxels
from spconv.utils import VoxelGenerator
ImportError: cannot import name 'VoxelGenerator' from 'spconv.utils' (/mnt/Anaconda3/envs/PDV/lib/python3.8/site-packages/spconv/utils/__init__.py)
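This usually comes down to the installed spconv version: the names under spconv.utils changed across 1.x releases, and spconv 2.x removed them entirely in favor of PointToVoxel. A defensive lookup sketch; the module paths in the comments are assumptions to verify against your installed version, and note that PointToVoxel has a different constructor signature than the 1.x generators:

```python
import importlib

def load_voxel_generator():
    """Return (cls, source) for the first voxel-generator class that can be
    imported, or (None, None) if no spconv variant is installed.

    Assumed locations per spconv release (verify against your install):
      spconv 1.2.x -> spconv.utils.VoxelGeneratorV2
      spconv 1.0/1.1 -> spconv.utils.VoxelGenerator
      spconv 2.x -> spconv.pytorch.utils.PointToVoxel (different signature)
    """
    candidates = [
        ("spconv.utils", "VoxelGeneratorV2"),
        ("spconv.utils", "VoxelGenerator"),
        ("spconv.pytorch.utils", "PointToVoxel"),
    ]
    for module_name, class_name in candidates:
        try:
            module = importlib.import_module(module_name)
            return getattr(module, class_name), f"{module_name}.{class_name}"
        except (ImportError, AttributeError):
            continue
    return None, None

cls, source = load_voxel_generator()
print("found:", source)
```

If you hit the spconv 2.x branch, the call sites in data_processor.py also need adapting; the simpler route is often to pin the spconv 1.2.x version the repo's README specifies.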

About training pdv on Waymo Open Dataset

Hello, authors. This is the first time I've trained PDV on the Waymo dataset, but the loss curves look strange: the rcnn_rpn_cls loss keeps increasing. Is this normal? If not, what might be the cause? Could you share your TensorBoard training log for reference?

question about kde implementation.

I find the following code in this repo:

return self.kde_func.log_prob(input).sum(-1).exp()

However, I think exp should be applied before the sum operation.
Is there a problem with my understanding?
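One way to see why sum-then-exp can be correct here: if the last axis of log_prob indexes independent dimensions, then exp(sum of log-densities) is the joint density (the product of the marginals), whereas exp-before-sum would add marginal densities, which is not a density. If that axis instead indexed kernel components of the mixture, the intuition above would apply and a logsumexp would be needed. A small sketch of the algebra with a standard normal per dimension, under the independent-dimensions reading (the repo's kde_func is a torch.distributions object, but the algebra is the same):

```python
import numpy as np

def log_normal_pdf(x):
    """Log density of a standard normal, elementwise."""
    return -0.5 * (x ** 2) - 0.5 * np.log(2 * np.pi)

x = np.array([0.3, -1.2, 0.7])                 # one 3-D sample

# exp after sum: exp(sum_i log p_i) == prod_i p_i, the joint density
joint = np.exp(log_normal_pdf(x).sum(-1))

# exp before sum: sum_i p_i, a sum of marginal densities (not a density)
wrong = np.exp(log_normal_pdf(x)).sum(-1)

print(joint)   # equals np.prod(np.exp(log_normal_pdf(x)))
print(wrong)   # a different quantity entirely
```

So whether the repo's line is correct hinges on what the last axis of `log_prob(input)` indexes, which is worth checking in how `kde_func` is constructed.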

undefined symbol: _ZNK3c104Type14isSubtypeOfExtESt10shared_ptrIS0_EPSo

My environment is as follows:
CUDA 11.3
PyTorch 1.7.1
The error is:
OSError: /mnt/Anaconda3/envs/PDVT/lib/python3.8/site-packages/spconv/libspconv.so: undefined symbol: _ZNK3c104Type14isSubtypeOfExtESt10shared_ptrIS0_EPSo
Does this mean my spconv is incompatible with my PyTorch or CUDA version, and that I have to install CUDA 10.2?
Thanks!
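For debugging: that mangled symbol demangles to a method on c10::Type, which lives in libtorch, so the failure typically means libspconv.so was built against a different PyTorch ABI than the one installed, i.e. a torch/spconv version mismatch rather than a CUDA-toolkit problem (so installing CUDA 10.2 should not be necessary by itself; note also that the official PyTorch 1.7.1 wheels were built against CUDA 11.0 at the newest, not 11.3). A minimal introspection snippet to see what your environment actually provides before rebuilding spconv:

```python
import torch

# Pure introspection, no spconv import needed: the torch version and the
# CUDA version torch was *built* with are what the spconv build must match.
print("torch version:", torch.__version__)          # e.g. '1.7.1'
print("built for CUDA:", torch.version.cuda)        # e.g. '11.0'
print("CUDA available:", torch.cuda.is_available())
```

With those two values in hand, rebuild spconv from source against that exact torch, or install a spconv build published for it.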

MeanOrientationVFE

I found MeanOrientationVFE in mean_vfe.py. I wonder what it does and how I can use it.

FPS

May I ask how to calculate the FPS (inference speed) of the model?
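The quickest answer is that the evaluation log already reports sec_per_example (0.0109 s in the log quoted earlier on this page, i.e. roughly 92 FPS), so FPS is approximately 1 / sec_per_example. For a standalone measurement, a rough sketch; measure_fps is a hypothetical helper, not part of the repo:

```python
import time
import torch

@torch.no_grad()
def measure_fps(model, batch, warmup=10, iters=100):
    """Rough throughput estimate: warm up first (JIT/cuDNN autotuning), then
    time `iters` forward passes. Synchronize around the timed region so
    asynchronous GPU kernels are actually counted, not just kernel launches."""
    model.eval()
    for _ in range(warmup):
        model(batch)
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(iters):
        model(batch)
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    return iters / (time.perf_counter() - start)

# Usage with a stand-in model; for PDV you would pass the built detector
# and a prepared batch_dict from the dataloader instead.
print(measure_fps(torch.nn.Identity(), torch.randn(2, 3)))
```

Keep in mind this measures the forward pass only; end-to-end FPS also includes data loading, voxelization, and post-processing, which is what sec_per_example captures.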

GLEW could not be initialized: Missing GL version

Hello, I've run into a problem running demo.py:
2022-04-04 14:55:26.887 ( 15.100s) [ F0A55E80]vtkOpenGLRenderWindow.c:493 ERR| vtkEGLRenderWindow (0x561980223bb0): GLEW could not be initialized: Missing GL version
run-demo.sh: line 19: 7579 Segmentation fault (core dumped)
I've tried several approaches, but all of them failed. Do you have any idea about this bug?

what is the role of MAX_NUM_BOXES?

Thanks for your work. I would like to know what the role of MAX_NUM_BOXES is. Is it because RoIs may overlap, causing a point to fall into multiple different RoIs?

Train question

Hi, thanks for your wonderful work. Could you publish a training log so we can verify our training results?
Thank you again!
