
kitti_object_vis's Introduction

KITTI Object data transformation and visualization

Dataset

Download the data (calib, image_2, label_2, velodyne) from the KITTI Object Detection Dataset and place it in your data folder at kitti/object

The folder structure is as follows:

kitti
    object
        testing
            calib
               000000.txt
            image_2
               000000.png
            label_2
               000000.txt
            velodyne
               000000.bin
            pred
               000000.txt
        training
            calib
               000000.txt
            image_2
               000000.png
            label_2
               000000.txt
            velodyne
               000000.bin
            pred
               000000.txt
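A short script can sanity-check this layout before running the visualizer (the root path below is the default; adjust it or pass your own root via -d/--dir):

```python
import os

root = "kitti/object"   # default; or pass your own root via -d/--dir
expected = [os.path.join(root, split, sub)
            for split in ("training", "testing")
            for sub in ("calib", "image_2", "label_2", "velodyne", "pred")]
missing = [p for p in expected if not os.path.isdir(p)]
for p in missing:
    print("missing:", p)                 # pred is only needed with -p/--pred
print("ok" if not missing else f"{len(missing)} of {len(expected)} dirs missing")
```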

Install locally on an Ubuntu 16.04 PC with GUI

  • start from a new conda environment:
(base)$ conda create -n kitti_vis python=3.7 # vtk does not support python 3.8
(base)$ conda activate kitti_vis
  • install opencv-python, pillow, scipy, matplotlib:
(kitti_vis)$ pip install opencv-python pillow scipy matplotlib
  • install mayavi from conda-forge; this installs vtk and pyqt5 automatically:
(kitti_vis)$ conda install mayavi -c conda-forge
  • test the installation:
(kitti_vis)$ python kitti_object.py --show_lidar_with_depth --img_fov --const_box --vis

Note: the above installation is known not to work on macOS.

Install remotely

Please refer to the jupyter folder for installing on a remote server and visualizing in a Jupyter Notebook.

Visualization

  1. 3D boxes on LiDAR point cloud in volumetric mode
  2. 2D and 3D boxes on Camera image
  3. 2D boxes on LiDAR Birdview
  4. LiDAR data on Camera image
$ python kitti_object.py --help
usage: kitti_object.py [-h] [-d N] [-i N] [-p] [-s] [-l N] [-e N] [-r N]
                       [--gen_depth] [--vis] [--depth] [--img_fov]
                       [--const_box] [--save_depth] [--pc_label]
                       [--show_lidar_on_image] [--show_lidar_with_depth]
                       [--show_image_with_boxes]
                       [--show_lidar_topview_with_boxes]

KITTI Object Visualization

optional arguments:
  -h, --help            show this help message and exit
  -d N, --dir N         input (default: data/object)
  -i N, --ind N         data index
  -p, --pred            show predict results
  -s, --stat            stat the w/h/l of point cloud in gt bbox
  -l N, --lidar N       velodyne dir (default: velodyne)
  -e N, --depthdir N    depth dir (default: depth)
  -r N, --preddir N     predicted boxes (default: pred)
  --gen_depth           generate depth
  --vis                 show images
  --depth               load depth
  --img_fov             front view mapping
  --const_box           constraint box
  --save_depth          save depth into file
  --pc_label            5-vector lidar, pc with label
  --show_lidar_on_image
                        project lidar on image
  --show_lidar_with_depth
                        --show_lidar, depth is supported
  --show_image_with_boxes
                        show lidar
  --show_lidar_topview_with_boxes
                        show lidar topview
  --split               use training split or testing split (default: training)
$ python kitti_object.py

Specify your own folder:

$ python kitti_object.py -d /path/to/kitti/object

Show LiDAR only

$ python kitti_object.py --show_lidar_with_depth --img_fov --const_box --vis

Show LiDAR and image

$ python kitti_object.py --show_lidar_with_depth --img_fov --const_box --vis --show_image_with_boxes

Show LiDAR and image with specific index

$ python kitti_object.py --show_lidar_with_depth --img_fov --const_box --vis --show_image_with_boxes --ind 1 

Show LiDAR from a modified LiDAR file that carries an additional point cloud label/marker as the 5th dimension (5-vector: x, y, z, intensity, pc_label). This option is for a very specific case; if you don't have this type of data, don't use it.

$ python kitti_object.py --show_lidar_with_depth --img_fov --const_box --vis --pc_label
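If you need to produce such a 5-vector file from a standard 4-vector scan, one hypothetical way is to append a per-point label column (synthetic data below; with real data, load your velodyne/*.bin instead):

```python
import numpy as np, os, tempfile

# Append a per-point label column to a standard 4-vector scan, producing the
# (x, y, z, intensity, pc_label) layout that --pc_label expects.
# Synthetic input here; with real data, load your velodyne/*.bin instead.
src = np.random.rand(50, 4).astype(np.float32)     # x, y, z, intensity
labels = np.zeros((50, 1), dtype=np.float32)       # hypothetical per-point label
out = os.path.join(tempfile.mkdtemp(), "000000.bin")
np.hstack([src, labels]).tofile(out)

scan = np.fromfile(out, dtype=np.float32).reshape(-1, 5)
print(scan.shape)   # (50, 5)
```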

Demo

2D, 3D boxes and LiDAR data on Camera image

2D, 3D boxes, LiDAR data on Camera image

boxes with class label

Credit: @yuanzhenxun

LiDAR birdview and point cloud (3D)

LiDAR point cloud and birdview

Show Predicted Results

First, map the KITTI official formatted results into the data directory:

./map_pred.sh /path/to/results
python kitti_object.py -p --vis

Show Predicted Results

Acknowledgement

Code is mainly adapted from f-pointnet and MV3D.

kitti_object_vis's People

Contributors

arxidinakbar, dashidhy, fukatani, haditab, kuixu


kitti_object_vis's Issues

Moving image format

Is it possible to iterate from index 1 to another index to form a moving image?

The problem of the 8 points in "show_image_with_boxes"

Hello. In the "show_image_with_boxes" function, the 3D coordinates yield eight corner points, but their ordering seems problematic. For a rotation angle of 0.1, the x coordinates of the first four points are 2.44, 2.43, 1.23, 1.24, which should mean the first point is the lower-right corner in the x-z plane. However, the z coordinate of the first point is 8.64, while the other three are 8.16, 8.17, and 8.655, so the first point is clearly not that corner. After matching up the x and z coordinates of each point, they do not seem to correspond to each other. May I ask why?

question about transformation from image to camera coord

In the code below, b_x is the baseline from camera #i to camera #0. It is not related to this transformation, so why do you add it to the back-projected x coordinate?

x = ((uv_depth[:, 0] - self.c_u) * uv_depth[:, 2]) / self.f_u + self.b_x

Or I guess b_x or b_y is the distance between the LiDAR and cam 0? So b_x needs to be added to match the similar triangles?
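For what it's worth, b_x is the baseline of camera i relative to the rectified camera-0 frame (in kitti_util.py it is presumably derived as b_x = P[0,3] / (-f_u)), so adding it places the back-projected point in the camera-0 rectified frame rather than extending the focal length. A minimal numpy round-trip sketch with toy calibration values (not a real KITTI calib):

```python
import numpy as np

# Toy rectified projection matrix for camera i (assumed values, not real calib):
# P = [[f_u, 0, c_u, -f_u*b_x], [0, f_v, c_v, -f_v*b_y], [0, 0, 1, 0]]
P = np.array([[721.5,   0.0, 609.6, -721.5 * 0.06],
              [  0.0, 721.5, 172.9,   0.0],
              [  0.0,   0.0,   1.0,   0.0]])
f_u, f_v = P[0, 0], P[1, 1]
c_u, c_v = P[0, 2], P[1, 2]
b_x = P[0, 3] / (-f_u)   # baseline of camera i w.r.t. rectified camera 0
b_y = P[1, 3] / (-f_v)

def project_rect_to_image(pts):        # pts: (N, 3) in the rectified cam-0 frame
    uvw = P @ np.hstack([pts, np.ones((len(pts), 1))]).T
    return (uvw[:2] / uvw[2]).T

def project_image_to_rect(uv_depth):   # (u, v, depth) back to the cam-0 frame
    x = (uv_depth[:, 0] - c_u) * uv_depth[:, 2] / f_u + b_x
    y = (uv_depth[:, 1] - c_v) * uv_depth[:, 2] / f_v + b_y
    return np.stack([x, y, uv_depth[:, 2]], axis=1)

# Round trip: the projection subtracts f_u*b_x, so the back-projection must
# add b_x to land back in the cam-0 frame; the focal length is untouched.
pts = np.array([[1.0, 0.5, 10.0]])
uvd = np.hstack([project_rect_to_image(pts), pts[:, 2:3]])
assert np.allclose(project_image_to_rect(uvd), pts)
```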


Could not import backend for traitsui. Make sure you have a suitable UI toolkit like PyQt/PySide or wxPython

I installed the required libraries and am getting the following error when I try to run 'python kitti_object.py --show_lidar_with_depth --img_fov --const_box --vis' command:

Traceback (most recent call last):
  File "kitti_object.py", line 943, in <module>
    dataset_viz(args.dir, args)
  File "kitti_object.py", line 706, in dataset_viz
    figure=None, bgcolor=(0, 0, 0), fgcolor=None, engine=None, size=(1000, 500)
  File "/home/rajat/Desktop/Radspot/kitti_object_vis/venv/lib/python3.6/site-packages/mayavi/tools/figure.py", line 64, in figure
    engine = get_engine()
  File "/home/rajat/Desktop/Radspot/kitti_object_vis/venv/lib/python3.6/site-packages/mayavi/tools/engine_manager.py", line 92, in get_engine
    return self.new_engine()
  File "/home/rajat/Desktop/Radspot/kitti_object_vis/venv/lib/python3.6/site-packages/mayavi/tools/engine_manager.py", line 137, in new_engine
    check_backend()
  File "/home/rajat/Desktop/Radspot/kitti_object_vis/venv/lib/python3.6/site-packages/mayavi/tools/engine_manager.py", line 40, in check_backend
    raise ImportError(msg)
ImportError: Could not import backend for traitsui.  Make sure you
        have a suitable UI toolkit like PyQt/PySide or wxPython

Could you please help me with this? I have installed the packages mentioned in your readme; is there any other package needed to run this code?

What is the meaning of '0'?

Hello, I wonder what the label '0' above the 3D bbox means; both cars and people seem to have the same label.

Could you choose any license?

Hi, thank you for sharing this wonderful library!

By the way, how about licensing this repository?

On GitHub, the default (no license) is too strict: no one can reproduce (including git clone) or create derivative works from your work.
(See https://help.github.com/en/articles/licensing-a-repository.)
So GitHub strongly encourages you to include an open source license such as MIT, BSD, and so on.

I think this repository would become more useful if it were licensed.

Running "Show LiDAR with label (5 vector)" fails

Hi kuixu:
I ran the command "$ python kitti_object.py --show_lidar_with_depth --img_fov --const_box --vis --pc_label", and then the following fault occurred:
data/object training
no pred file
data/object/training/velodyne/000000.bin
Traceback (most recent call last):
File "kitti_object.py", line 734, in
dataset_viz(args.dir, args)
File "kitti_object.py", line 605, in dataset_viz
pc_velo = dataset.get_lidar(data_idx, dtype, n_vec)[:,0:n_vec]
File "kitti_object.py", line 74, in get_lidar
return utils.load_velo_scan(lidar_filename, dtype, n_vec)
File "/home/wushengyu/kitti_object_vis/kitti_util.py", line 389, in load_velo_scan
scan = scan.reshape((-1, n_vec))
ValueError: cannot reshape array of size 461536 into shape (5)
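The error above is arithmetic: 461536 float32 values leave a remainder when divided by 5, because a standard KITTI scan stores 4 values per point (461536 / 4 = 115384 points). --pc_label is only for files that really contain 5 values per point. A sketch of a guarded loader (synthetic data, not the repo's exact code):

```python
import numpy as np, os, tempfile

def load_velo_scan(velo_filename, dtype=np.float32, n_vec=4):
    scan = np.fromfile(velo_filename, dtype=dtype)
    if scan.size % n_vec != 0:
        raise ValueError(
            f"{scan.size} values not divisible by n_vec={n_vec}: standard KITTI "
            "scans store 4 float32 values per point; --pc_label expects 5")
    return scan.reshape((-1, n_vec))

# Synthetic standard scan: 101 points of (x, y, z, intensity)
path = os.path.join(tempfile.mkdtemp(), "000000.bin")
np.random.rand(101, 4).astype(np.float32).tofile(path)

print(load_velo_scan(path, n_vec=4).shape)   # (101, 4)
try:
    load_velo_scan(path, n_vec=5)            # what --pc_label tries to do
except ValueError as e:
    print(e)
```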

Error in drawing 3D boxes on the image

Hello there,
I have trouble running the code in some cases when it tries to draw 3D boxes on the image.
line 655, in draw_projected_box3d:
qs = qs.astype(np.int32)
AttributeError: 'NoneType' object has no attribute 'astype'

Thank you!

show lidar points in img (only in label box)

Hi, I have checked your code.

I want to extract the points that fall inside the labeled LiDAR 3D boxes.

With --show_lidar_on_image I can project points onto the image, but I want the image to show only the 2D points on labeled objects.

Can you give me some advice or a hint?
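One way to get only the labelled points is to test each point against the ground-truth boxes, sketched below with numpy (the box-centre convention and the toy values are assumptions; the repo's own utilities compute box corners from the label and calib files):

```python
import numpy as np

def points_in_box(pts, center, dims, yaw):
    """Boolean mask of points inside an oriented 3D box in LiDAR coordinates.
    pts: (N, 3); dims: (l, w, h); yaw: rotation about z.
    Assumes the centre is the geometric centre of the box."""
    local = pts - center
    c, s = np.cos(-yaw), np.sin(-yaw)        # rotate points into the box frame
    x = c * local[:, 0] - s * local[:, 1]
    y = s * local[:, 0] + c * local[:, 1]
    l, w, h = dims
    return (np.abs(x) <= l / 2) & (np.abs(y) <= w / 2) & (np.abs(local[:, 2]) <= h / 2)

# Toy example: one point near the box centre, one far away
pts = np.array([[8.8, 12.5, -1.0],
                [0.0, 0.0, 0.0]])
mask = points_in_box(pts, center=np.array([8.76, 12.55, -1.02]),
                     dims=(4.0, 1.5, 1.4), yaw=0.1)
print(mask)   # [ True False]
```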

support for 3d bbox preds in images

Hello,
First, thanks for the amazing library. I noticed I can get predictions with LiDAR, but on images I can only get ground truths and no 3D predictions.

I noticed the function def show_image_with_boxes_3type(img, objects, calib, objects2d, name, objects_pred): was commented out, and if I added it as an arg, it would get me the predictions in 2D as well.

Any particular reason it got commented out? I think by adding some modifications from the function show_image_with_boxes I can get it to work, but I was just curious about it.

Thanks.

How to disable gui for mlab.show()?

Hi again

Here is more of a question than an issue. I was wondering how to run the code without having to press keyboard keys each time a new point cloud view shows up in mayavi: mlab.show() waits for user interaction before continuing to the next frame. Is there any way to disable this?
For example, in matplotlib I do plt.show(block=False).
I was wondering if there is anything similar to that here for mlab.show(), so I can run the code over all images without user interaction?
Thank you!

My own KITTI-format dataset

Dear author, thank you very much for your contribution. I have a question: my KITTI-format dataset is made by myself, and its indices and size differ from the original KITTI dataset. How can I visualize my dataset and set the index? I would appreciate your reply.

"Show LiDAR only" mode problem

I have trouble running your code in "Show LiDAR only" mode. The Mayavi scene shows nothing but a black blank window.
The error is:

ERROR: In /work/standalone-x64-build/VTK-source/Rendering/OpenGL2/vtkOpenGLRenderWindow.cxx, line 797
vtkXOpenGLRenderWindow (0x3b208d0): GL version 2.1 with the gpu_shader4 extension is not supported by your graphics driver but is required for the new OpenGL rendering backend. Please update your OpenGL driver. If you are using Mesa please make sure you have version 10.6.5 or later and make sure your driver in Mesa supports OpenGL 3.2.

My environment is Xeon CPU + P5000 GPU + Ubuntu 16.04 + Python 3.6.

How to get results with doubleflip?

Hello, thanks for your work. I cannot train models with doubleflip. Here is the error message: "No such file or directory: '/datasets/nuscenes_dbinfos_10sweeps_withvelo_global.pkl.npy'". But I cannot find this file after the nuScenes dataset generation finishes. How can I get 'nuscenes_dbinfos_10sweeps_withvelo_global.pkl.npy'? Thanks!

How to deal with the error: qt.qpa.plugin: Could not load the Qt platform plugin "xcb"

When I run kitti_object.py I get the error:
qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "/home/zsder/anaconda3/envs/tensorflow/lib/python3.7/site-packages/cv2/qt/plugins" even though it was found.
In the debug log I get the following messages:

Got keys from plugin meta data ("xcb")
QFactoryLoader::QFactoryLoader() checking directory path "/home/zsder/anaconda3/envs/tensorflow/bin/platforms" ...
loaded library "/home/zsder/anaconda3/envs/tensorflow/lib/python3.7/site-packages/cv2/qt/plugins/platforms/libqxcb.so"
QObject::moveToThread: Current thread (0x55c738b00180) is not the object's thread (0x55c738fe29f0).
Cannot move to target thread (0x55c738b00180)

My workspace:
ubuntu 20.04
python 3.7
opencv 4.4.0
pyqt 5.9.2
pyqt5 5.15.1

label_2 under testing

The KITTI testing dataset doesn't have label info, right? Why is there a label_2 folder under testing?

suggestion on mayavi installation

The latest version of vtk (9.0.1) makes the mayavi installation fail.
Here is my suggestion:

pip install vtk==8.1.2
conda install pyqt -c conda-forge
pip install mayavi

libnetcdf.so.19: undefined symbol: H5Pset_fapl_ros3

An error occurred when I executed the command:
python kitti_object.py --show_lidar_with_depth --img_fov --const_box --vis
The traceback:
Traceback (most recent call last):
File "kitti_object.py", line 887, in
import mayavi.mlab as mlab
...
ImportError: /anaconda3/envs/kitti_vis/lib/python3.7/site-packages/vtkmodules/../../.././libnetcdf.so.19: undefined symbol: H5Pset_fapl_ros3

I tried to solve it by:
pip install h5py
but it didn't work

My environment :
Ubuntu 20.04
python 3.7
mayavi 4.7.2

Could you give me some advice? thanks!

Notebook demo mayavi display issue

Hello,

I was looking into your notebook demo and found that if mayavi was initialized (in the 2nd cell) with
mlab.init_notebook(backend='ipy')
the lidar display (3rd cell) does not show up and appears as a line of text instead.

Using 'png' as the backend works and displays it as an image. (Is it intended to be displayed as an image or an interactive figure?)
Using 'x3d' displays nothing.

I am using mayavi 4.7.1 and python 3.5.6.

DontCare issue

Hi there, I used the KITTI dataset and wanted to run the demo, but it fails with could not convert string to float: "DontCare". Has anyone met this problem?

3D bbox in LiDAR TO 3D bbox in RGB

Hello,
Thanks for the good work. I want to know how to generate 3D bbox coordinates in 2D images.
I have the 3D bbox coordinates in LiDAR coordinates ([[ 8.7616, 12.5453, -1.0155, 4.0097, 1.5478, 1.3678, 6.3180], ...]) (N, 7), and I don't know how to project them to 2D images.
Hoping for a reply,
thanks.
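For reference, the usual KITTI chain is: compute the 8 box corners in LiDAR coordinates, transform them to the rectified camera frame with Tr_velo_to_cam and R0_rect, then project with P2. A numpy sketch with toy calibration matrices (the matrices and the box-centre convention are assumptions; use your real calib files):

```python
import numpy as np

def box3d_corners_velo(cx, cy, cz, l, w, h, yaw):
    """8 corners of a 3D box in LiDAR coordinates (z up, yaw about z).
    Assumes (cx, cy, cz) is the geometric centre of the box."""
    x = np.array([ 1,  1, -1, -1,  1,  1, -1, -1]) * l / 2.0
    y = np.array([ 1, -1, -1,  1,  1, -1, -1,  1]) * w / 2.0
    z = np.array([-1, -1, -1, -1,  1,  1,  1,  1]) * h / 2.0
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return (R @ np.vstack([x, y, z])).T + np.array([cx, cy, cz])

def project_velo_to_image(pts_velo, Tr_velo_to_cam, R0_rect, P2):
    """Project (N, 3) LiDAR points to pixels via the KITTI calib chain."""
    n = len(pts_velo)
    hom = np.hstack([pts_velo, np.ones((n, 1))])       # (N, 4)
    pts_rect = R0_rect @ (Tr_velo_to_cam @ hom.T)      # (3, N) rectified cam coords
    uvw = P2 @ np.vstack([pts_rect, np.ones((1, n))])  # (3, N)
    return (uvw[:2] / uvw[2]).T                        # (N, 2) pixel coords

# Toy calibration (assumed values; read the real ones from the calib file):
Tr = np.array([[0., -1., 0., 0.], [0., 0., -1., 0.], [1., 0., 0., 0.]])
R0 = np.eye(3)
P2 = np.array([[700., 0., 600., 0.], [0., 700., 180., 0.], [0., 0., 1., 0.]])

corners = box3d_corners_velo(8.7616, 12.5453, -1.0155, 4.0097, 1.5478, 1.3678, 6.3180)
uv = project_velo_to_image(corners, Tr, R0, P2)   # (8, 2): draw lines between these
```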

Fontconfig error: failed reading config file

Type, truncation, occlusion, alpha: Pedestrian, 0, 0, -0.200000
2d bbox (x0,y0,x1,y1): 712.400000, 143.000000, 810.730000, 307.920000
3d bbox h,w,l: 1.890000, 0.480000, 1.200000
3d bbox location, ry: (1.840000, 1.470000, 8.410000), 0.010000
Difficulty of estimation: Easy
Fontconfig error: failed reading config file

(python:8848): GLib-GObject-WARNING **: cannot register existing type 'GdkDisplayManager'

(python:8848): GLib-CRITICAL **: g_once_init_leave: assertion 'result != 0' failed

(python:8848): GLib-GObject-CRITICAL **: g_object_new: assertion 'G_TYPE_IS_OBJECT (object_type)' failed

(python:8848): GLib-GObject-WARNING **: invalid (NULL) pointer instance

(python:8848): GLib-GObject-CRITICAL **: g_signal_connect_data: assertion 'G_TYPE_CHECK_INSTANCE (instance)' failed

(python:8848): GLib-GObject-WARNING **: invalid (NULL) pointer instance

(python:8848): GLib-GObject-CRITICAL **: g_signal_connect_data: assertion 'G_TYPE_CHECK_INSTANCE (instance)' failed

(python:8848): GLib-GObject-WARNING **: cannot register existing type 'GdkDisplay'

(python:8848): GLib-CRITICAL **: g_once_init_leave: assertion 'result != 0' failed

(python:8848): GLib-GObject-CRITICAL **: g_type_register_static: assertion 'parent_type > 0' failed

(python:8848): GLib-CRITICAL **: g_once_init_leave: assertion 'result != 0' failed

(python:8848): GLib-GObject-CRITICAL **: g_object_new: assertion 'G_TYPE_IS_OBJECT (object_type)' failed
Segmentation fault (core dumped)

How to generate boxes using the KITTI testing split?

Hi author:

The folder structure of my KITTI testing set is the same as given on GitHub. After uploading my own test-set results into pred and running python kitti_object.py --show_lidar_with_depth --img_fov --const_box --vis --show_image_with_boxes --ind 1 --pred, I get an error: Traceback (most recent call last): File "kitti_object.py", line 979, in assert os.path.exists(args.dir + "/" + args.split + "/pred") AssertionError. How should I fix this?

libGL error: MESA-LOADER: failed to open crocus/swrast

When I run this command below:
$ python kitti_object.py --show_lidar_with_depth --img_fov --const_box --vis
I have received this error:

data/object training
libGL error: MESA-LOADER: failed to open crocus: /usr/lib/dri/crocus_dri.so: cannot open shared object file: No such file or directory (search paths /usr/lib/x86_64-linux-gnu/dri:$${ORIGIN}/dri:/usr/lib/dri, suffix _dri)
libGL error: failed to load driver: crocus
libGL error: MESA-LOADER: failed to open crocus: /usr/lib/dri/crocus_dri.so: cannot open shared object file: No such file or directory (search paths /usr/lib/x86_64-linux-gnu/dri:$${ORIGIN}/dri:/usr/lib/dri, suffix _dri)
libGL error: failed to load driver: crocus
libGL error: MESA-LOADER: failed to open swrast: /usr/lib/dri/swrast_dri.so: cannot open shared object file: No such file or directory (search paths /usr/lib/x86_64-linux-gnu/dri:$${ORIGIN}/dri:/usr/lib/dri, suffix _dri)
libGL error: failed to load driver: swrast
no pred file
data/object/training/velodyne/000000.bin
0 image shape: (370, 1224, 3)
0 velo shape: (115384, 4)
======== Objects in Ground Truth ========
=== 1 object ===
Type, truncation, occlusion, alpha: Pedestrian, 0, 0, -0.200000
2d bbox (x0,y0,x1,y1): 712.400000, 143.000000, 810.730000, 307.920000
3d bbox h,w,l: 1.890000, 0.480000, 1.200000
3d bbox location, ry: (1.840000, 1.470000, 8.410000), 0.010000
Difficulty of estimation: Easy
('All point num: ', 115384)
('FOV point num: ', (20285, 4))
pc_velo (20285, 4)
==================== (20285, 4)
box3d_pts_3d_velo:
[[ 8.96440459 -2.45859462 -1.60867186]
[ 8.48444353 -2.45306157 -1.60607081]
[ 8.49835837 -1.25324031 -1.59072653]
[ 8.97831943 -1.25877336 -1.59332758]
[ 8.97436611 -2.48287864 0.28114573]
[ 8.49440505 -2.47734559 0.28374677]
[ 8.50831989 -1.27752433 0.29909105]
[ 8.98828095 -1.28305738 0.29649 ]]
2022-11-16 15:40:04.753 ( 3,183s) [ CE9DB440]vtkOpenGLRenderWindow.c:493 ERR| vtkEGLRenderWindow (0x2b22f20): GLEW could not be initialized: Missing GL version

How can I fix this issue? Thanks for your help.

Adding -p shows no image

python kitti_object.py --show_lidar_with_depth --img_fov --const_box --vis --show_image_with_boxes --ind 100 -p
then no image is shown.

Can't get 3D box in image

$ python kitti_object.py --show_lidar_with_depth --img_fov --const_box --vis --show_image_with_boxes
I just get the point cloud image; the 3D box and 2D box windows display just white.
How can I get the 3D boxes in the image?

Own testing data --pred

What a great work, thanks!

I have my own test data and created a testing folder; it contains calib, image_2, pred and velodyne.

Screenshot from 2021-03-23 14-45-33

Screenshot from 2021-03-23 14-46-08

And I run python kitti_object.py --split testing --show_lidar_with_depth --img_fov --const_box --vis --show_image_with_boxes

The terminal show : QObject::moveToThread: Current thread (0x564e35f500f0) is not the object's thread (0x564e345b11f0).
Cannot move to target thread (0x564e35f500f0)

And the result in the image and point cloud didn't show the label and box.

Is it a must to have a label file (GT), or did I do something wrong?

Overall, how do I visualize the pred in the testing split?

Can you give me some advice? Grateful!

data format error

In the file "kitti_util.py", the function:

def load_velo_scan(velo_filename, n_vec):
    scan = np.fromfile(velo_filename, dtype=np.float64)
    scan = scan.reshape((-1, n_vec))
    return scan

"np.float64" is wrong; it should be "np.float32".
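The symptom is easy to reproduce: reading float32 data as float64 fuses every two 4-byte values into one 8-byte garbage value, halving the count. A quick numpy demonstration (synthetic data):

```python
import numpy as np, os, tempfile

pts = np.random.rand(10, 4).astype(np.float32)   # KITTI scans are float32
path = os.path.join(tempfile.mkdtemp(), "scan.bin")
pts.tofile(path)

good = np.fromfile(path, dtype=np.float32)   # 40 values, as written
bad = np.fromfile(path, dtype=np.float64)    # 20 garbage values: pairs of
                                             # float32 bytes misread as float64
print(good.size, bad.size)                   # 40 20
```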

Assertion Error

When I try to run
python kitti_object.py --pred
I get following error:
Traceback (most recent call last):
File "kitti_object.py", line 945, in
assert os.path.exists(args.dir + "/" + args.split + "/pred")
AssertionError

I also mapped to my results:
./map_pred.sh /data

The mapping might be wrong, but I don't know which format the above code needs as input.

Can't visualize the predicted result

I am trying to visualize the predicted result by running this command, but it blocks at the end:

python kitti_object.py -p --vis
data/object/training/pred
data/object training
data/object/training/velodyne/000000.bin
0 image shape: (370, 1224, 3)
0 velo shape: (115384, 4)
======== Objects in Ground Truth ========
=== 1 object ===
Type, truncation, occlusion, alpha: Pedestrian, 0, 0, -0.200000
2d bbox (x0,y0,x1,y1): 712.400000, 143.000000, 810.730000, 307.920000
3d bbox h,w,l: 1.890000, 0.480000, 1.200000
3d bbox location, ry: (1.840000, 1.470000, 8.410000), 0.010000
Difficulty of estimation: Easy

ImportError: Could not import backend for traitsui

Environments:
OS: ubuntu20.04
conda: 4.13.0
mayavi: 4.7.4
vtk: 9.1.0
python:3.9.6

Error messages:

(py1121) hitbuyi@hitbuyi-Dell-G15-5511:~/PycharmProjects/pytorch_project/3D_vis$ python kitti_object.py --show_lidar_with_depth --img_fov --const_box --vis
data/object training
Traceback (most recent call last):
  File "/home/hitbuyi/PycharmProjects/pytorch_project/3D_vis/kitti_object.py", line 982, in <module>
    dataset_viz(args.dir, args)
  File "/home/hitbuyi/PycharmProjects/pytorch_project/3D_vis/kitti_object.py", line 744, in dataset_viz
    fig = mlab.figure(
  File "/home/hitbuyi/.conda/envs/py1121/lib/python3.9/site-packages/mayavi/tools/figure.py", line 64, in figure
    engine = get_engine()
  File "/home/hitbuyi/.conda/envs/py1121/lib/python3.9/site-packages/mayavi/tools/engine_manager.py", line 92, in get_engine
    return self.new_engine()
  File "/home/hitbuyi/.conda/envs/py1121/lib/python3.9/site-packages/mayavi/tools/engine_manager.py", line 137, in new_engine
    check_backend()
  File "/home/hitbuyi/.conda/envs/py1121/lib/python3.9/site-packages/mayavi/tools/engine_manager.py", line 40, in check_backend
    raise ImportError(msg)
ImportError: Could not import backend for traitsui.  Make sure you
        have a suitable UI toolkit like PyQt/PySide or wxPython
        installed

what's the problem?

segmentation fault(core dumped)

Hi, I encountered a core dump when I used the '--vis' parameter to show the result of velodyne projected to image. What causes the problem? Please help me; thanks in advance.
Screenshot from 2020-09-15 15-51-17

IndexError: list index out of range

When I run:
python kitti_object.py --show_lidar_with_depth --img_fov --const_box --vis --show_image_with_boxes --ind 1 -p

the error occurred:
Traceback (most recent call last):
File "kitti_object.py", line 982, in
dataset_viz(args.dir, args)
File "kitti_object.py", line 761, in dataset_viz
objects_pred = dataset.get_pred_objects(data_idx)
File "kitti_object.py", line 91, in get_pred_objects
return utils.read_label(pred_filename)
File "/kitti_object_vis-master/kitti_util.py", line 376, in read_label
objects = [Object3d(line) for line in lines]
File "/kitti_object_vis-master/kitti_util.py", line 376, in
objects = [Object3d(line) for line in lines]
File "/kitti_object_vis-master/kitti_util.py", line 64, in __init__
self.alpha = data[3] # object observation angle [-pi..pi]
IndexError: list index out of range

Could someone give me some advice,thanks!

Error displaying cv2.imshow (2D box) and cv2.imshow (3D box)

Command: python kitti_object.py -d /home/whut-4/Desktop/HXB/dataset/2017_KITTI_DATASET_ROOT --show_lidar_with_depth --img_fov --const_box --show_image_with_boxes -p --vis
Screenshot from 2023-05-12 11-21-02
You can see that the 2D box and 3D box windows from the first round do not close, while the newly shown 2D box and 3D box windows are too small and have no content.

Traits error

Why do I get this error? I did every step perfectly :(
Traceback (most recent call last):
  File "kitti_object.py", line 734, in <module>
    dataset_viz(args.dir, args)
  File "kitti_object.py", line 646, in dataset_viz
    objects_pred, depth, img, constraint_box=args.const_box, save=args.save_depth, pc_label=args.pc_label)
  File "kitti_object.py", line 323, in show_lidar_with_depth
    draw_lidar(pc_velo, fig=fig, pc_label=pc_label)
  File "/home/azri/obj_det/viz_util.py", line 139, in draw_lidar
    mlab.points3d(pc[:,0], pc[:,1], pc[:,2], color, color=pts_color, mode=pts_mode, colormap = 'gnuplot', scale_factor=pts_scale, figure=fig)
  File "/usr/lib/python2.7/dist-packages/mayavi/tools/helper_functions.py", line 34, in the_function
    return pipeline(*args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/mayavi/tools/helper_functions.py", line 79, in __call__
    output = self.__call_internal__(*args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/mayavi/tools/helper_functions.py", line 175, in __call_internal__
    g = Pipeline.__call_internal__(self, *args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/mayavi/tools/helper_functions.py", line 93, in __call_internal__
    return self.build_pipeline()
  File "/usr/lib/python2.7/dist-packages/mayavi/tools/helper_functions.py", line 121, in build_pipeline
    object = pipe(object, **this_kwargs)._target
  File "/usr/lib/python2.7/dist-packages/mayavi/tools/modules.py", line 154, in __init__
    super(DataModuleFactory, self).__init__(*args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/mayavi/tools/pipe_base.py", line 161, in __init__
    self.set(**traits)
  File "/usr/lib/python2.7/dist-packages/mayavi/tools/pipe_base.py", line 169, in set
    **traits)
  File "/usr/lib/python2.7/dist-packages/traits/has_traits.py", line 1952, in trait_set
    setattr( self, name, value )
  File "/usr/lib/python2.7/dist-packages/traits/trait_handlers.py", line 170, in error
    value )
traits.trait_errors.TraitError: The 'colormap' trait of a GlyphFactory instance must be 'Accent' or 'Blues' or 'BrBG' or 'BuGn' or 'BuPu' or 'Dark2' or 'GnBu' or 'Greens' or 'Greys' or 'OrRd' or 'Oranges' or 'PRGn' or 'Paired' or 'Pastel1' or 'Pastel2' or 'PiYG' or 'PuBu' or 'PuBuGn' or 'PuOr' or 'PuRd' or 'Purples' or 'RdBu' or 'RdGy' or 'RdPu' or 'RdYlBu' or 'RdYlGn' or 'Reds' or 'Set1' or 'Set2' or 'Set3' or 'Spectral' or 'YlGn' or 'YlGnBu' or 'YlOrBr' or 'YlOrRd' or 'autumn' or 'binary' or 'black-white' or 'blue-red' or 'bone' or 'cool' or 'copper' or 'file' or 'flag' or 'gist_earth' or 'gist_gray' or 'gist_heat' or 'gist_ncar' or 'gist_rainbow' or 'gist_stern' or 'gist_yarg' or 'gray' or 'hot' or 'hsv' or 'jet' or 'pink' or 'prism' or 'spectral' or 'spring' or 'summer' or 'winter', but a value of 'gnuplot' <type 'str'> was specified.

AttributeError: 'NoneType' object has no attribute 'astype'

When I press enter to view the 201st image, the following error is reported:

qs = qs.astype(np.int32)
AttributeError: 'NoneType' object has no attribute 'astype'

After I deleted 201, I encountered the same error at 218, but the earlier pictures can be viewed normally. How should I solve it? Thank you.
By the way, is there any way to continue viewing the pictures? When I encounter an error I can only start from the first one again; can I continue from a specified picture?

360 view

Thank you for your excellent job!

I am wondering: is it possible to have a 360-degree view?

Looking forward to hearing from you soon

The point cloud can not divide 5

Which means this code cannot run:

def load_velo_scan(velo_filename):
    scan = np.fromfile(velo_filename, dtype=np.float64)
    print(scan)
    print(scan.shape)
    scan = scan.reshape((-1, 5))
    return scan

The points have 4 values each, so the total count cannot be divided by 5. How did you reshape it when showing? This code is just broken.
