
dair-v2x's Introduction

DAIR-V2X and OpenDAIRV2X: Towards General and Real-World Cooperative Autonomous Driving



teaser

Table of Contents:

  1. Highlights
  2. News
  3. Dataset Download
  4. Getting Started
  5. Major Features
  6. Benchmark
  7. Citation
  8. Contact

Highlights

  • DAIR-V2X: The first real-world dataset for research on vehicle-to-everything autonomous driving. It comprises a total of 71,254 frames of image data and 71,254 frames of point cloud data.
  • V2X-Seq: The first large-scale, real-world, sequential V2X dataset, which includes data frames, trajectories, vector maps, and traffic lights captured from real-world scenes. V2X-Seq comprises two parts: V2X-Seq-SPD (Sequential Perception Dataset), with more than 15,000 frames captured from 95 scenarios, and V2X-Seq-TFD (Trajectory Forecasting Dataset), with about 80,000 infrastructure-view scenarios, 80,000 vehicle-view scenarios, and 50,000 cooperative-view scenarios captured from 28 intersection areas, covering 672 hours of data.
  • OpenDAIR-V2X: An open-source framework supporting research on vehicle-to-everything autonomous driving.

News

  • [2024.04] 🔥 Our UniV2X is now available on arXiv. UniV2X is the first end-to-end framework that unifies all vital modules and diverse driving views into a single network for cooperative autonomous driving. Code will be released here.
  • [2024.03] 🔥 Our new Dataset RCooper, a real-world large-scale dataset for roadside cooperative perception, has been accepted by CVPR2024! Please follow RCooper for the latest news.
  • [2024.01] 🔥 Our QUEST has been accepted by ICRA2024.
  • [2023.10] We have released the code for V2X-Seq-SPD and V2X-Seq-TFD.
  • [2023.09] Our FFNET has been accepted by NeurIPS 2023.
  • [2023.05] The V2X-Seq dataset is available here. It can be downloaded without restriction within mainland China. An example dataset can be downloaded directly.
  • [2023.03] Our new dataset "V2X-Seq: A Large-Scale Sequential Dataset for Vehicle-Infrastructure Cooperative Perception and Forecasting" has been accepted by CVPR2023. Congratulations! We will release the dataset soon. Please follow DAIR-V2X-Seq for the latest news.
  • [2023.03] We have released training code for our FFNET, and our OpenDAIRV2X now supports evaluating FFNET.
  • [2022.11] We have held the first VIC3D Object Detection challenge.
  • [2022.07] We have released the OpenDAIRV2X codebase v1.0.0. The current version helps researchers use the DAIR-V2X dataset and reproduce the benchmarks.
  • [2022.03] Our paper "DAIR-V2X: A Large-Scale Dataset for Vehicle-Infrastructure Cooperative 3D Object Detection" has been accepted by CVPR2022. The arXiv version is available here.
  • [2022.02] The DAIR-V2X dataset is available here. It can be downloaded without restriction within mainland China.

Dataset Download

Getting Started

Please refer to getting_started.md for usage and benchmark reproduction with the DAIR-V2X dataset.

Please refer to get_started_spd.md for usage and benchmark reproduction with the V2X-Seq-SPD dataset.

Benchmark

You can find more benchmarks in SV3D-Veh, SV3D-Inf, VIC3D, and VIC3D-SPD.

Part of the VIC3D detection benchmarks based on DAIR-V2X-C dataset:

| Modality | Fusion | Model | Dataset | AP-3D Overall (IoU=0.5) | 0-30m | 30-50m | 50-100m | AP-BEV Overall (IoU=0.5) | 0-30m | 30-50m | 50-100m | AB (Byte) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Image | VehOnly | ImvoxelNet | VIC-Sync | 9.13 | 19.06 | 5.23 | 0.41 | 10.96 | 21.93 | 7.28 | 0.78 | 0 |
| Image | Late-Fusion | ImvoxelNet | VIC-Sync | 18.77 | 33.47 | 9.43 | 8.62 | 24.85 | 39.49 | 14.68 | 14.96 | 309.38 |
| Pointcloud | VehOnly | PointPillars | VIC-Sync | 48.06 | 47.62 | 63.51 | 44.37 | 52.24 | 30.55 | 66.03 | 48.36 | 0 |
| Pointcloud | Early-Fusion | PointPillars | VIC-Sync | 62.61 | 64.82 | 68.68 | 56.57 | 68.91 | 68.92 | 73.64 | 65.66 | 1382275.75 |
| Pointcloud | Late-Fusion | PointPillars | VIC-Sync | 56.06 | 55.69 | 68.44 | 53.60 | 62.06 | 61.52 | 72.53 | 60.57 | 478.61 |
| Pointcloud | Late-Fusion | PointPillars | VIC-Async-2 | 52.43 | 51.13 | 67.09 | 49.86 | 58.10 | 57.23 | 70.86 | 55.78 | 478.01 |
| Pointcloud | TCLF | PointPillars | VIC-Async-2 | 53.37 | 52.41 | 67.33 | 50.87 | 59.17 | 58.25 | 71.20 | 57.43 | 897.91 |

Part of the VIC3D detection and tracking benchmarks based on V2X-Seq-SPD:

| Modality | Fusion | Model | Dataset | AP-3D (IoU=0.5) | AP-BEV (IoU=0.5) | MOTA | MOTP | AMOTA | AMOTP | IDs | AB (Byte) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Image | Veh Only | ImvoxelNet | VIC-Sync-SPD | 8.55 | 10.32 | 10.19 | 57.83 | 1.36 | 14.75 | 4 |  |
| Image | Late Fusion | ImvoxelNet | VIC-Sync-SPD | 17.31 | 22.53 | 21.81 | 56.67 | 6.22 | 25.24 | 47 | 3300 |

TODO List

  • Dataset Release
  • Dataset API
  • Evaluation Code
  • All detection benchmarks based on DAIR-V2X dataset
  • Benchmarks for detection and tracking tasks with different fusion strategies for Image based on V2X-Seq-SPD dataset
  • All benchmarks for detection and tracking tasks based on V2X-Seq-SPD dataset

Citation

If this project helps your research, please consider citing our papers with the following BibTeX:

@inproceedings{v2x-seq,
  title={V2X-Seq: A large-scale sequential dataset for vehicle-infrastructure cooperative perception and forecasting},
  author={Yu, Haibao and Yang, Wenxian and Ruan, Hongzhi and Yang, Zhenwei and Tang, Yingjuan and Gao, Xu and Hao, Xin and Shi, Yifeng and Pan, Yifeng and Sun, Ning and Song, Juan and Yuan, Jirui and Luo, Ping and Nie, Zaiqing},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2023},
}
@inproceedings{dair-v2x,
  title={Dair-v2x: A large-scale dataset for vehicle-infrastructure cooperative 3d object detection},
  author={Yu, Haibao and Luo, Yizhen and Shu, Mao and Huo, Yiyi and Yang, Zebang and Shi, Yifeng and Guo, Zhenglong and Li, Hanyu and Hu, Xing and Yuan, Jirui and Nie, Zaiqing},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={21361--21370},
  year={2022}
}

Contact

For any questions or suggestions, please email [email protected].

Related Resources

Awesome

dair-v2x's People

Contributors

coutyou, haibao-yu, icycookies, jileimao, wenxian-yang


dair-v2x's Issues

Range of point clouds?

Hi everyone,
I have a question about how to set the point cloud range if I want to train in the OpenPCDet framework after converting this dataset to KITTI format.

In the KITTI dataset, the point cloud range is POINT_CLOUD_RANGE: [0, -40, -3, 70.4, 40, 1].
In OpenPCDet's PointPillars config for KITTI, it is POINT_CLOUD_RANGE: [0, -39.68, -3, 69.12, 39.68, 1].

If I use V2X-I, V2X-V, or V2X-C, how should I set the point cloud range?
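Whichever range is chosen for V2X-I, V2X-V, or V2X-C, note that OpenPCDet-style configs require the range to be an exact multiple of the voxel size along each axis (which is why the KITTI PointPillars config uses 69.12 = 432 x 0.16 rather than 70.4). A quick sanity check, using the values quoted above (the 0.16 m pillar size is the common PointPillars default and an assumption here):

```python
# KITTI PointPillars settings quoted above
point_cloud_range = [0, -39.68, -3, 69.12, 39.68, 1]
voxel_size = [0.16, 0.16, 4.0]  # assumed PointPillars defaults

lo, hi = point_cloud_range[:3], point_cloud_range[3:]
grid = [(h - l) / v for l, h, v in zip(lo, hi, voxel_size)]
print(grid)  # each entry should be (very nearly) an integer: ~[432, 496, 1]
assert all(abs(g - round(g)) < 1e-6 for g in grid), \
    "point cloud range must be a multiple of the voxel size"
```

The same divisibility check applies to whatever custom range you pick for the DAIR-V2X splits.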

Mis-alignment in the validation set on batch 54, 55

Hi -

We found the following for the validation set:
Based on the mapping in cooperative-vehicle-infrastructure/cooperative/data_info.json, the vehicle side has batch_id 54 and 55, but the infrastructure side only has batch_id 54.
I would appreciate some clarification on the missing label set 55, if that makes sense.

Thanks!

Some issues regarding the evaluation of early point cloud fusion using pretrained models.

Hello, sorry to bother you; I'm a novice in this field. I am trying to use your pretrained model for point-cloud early-fusion evaluation. The .sh script deletes a cache folder, but I can't find it. Should I create a new cache folder? Also, the .sh script seems to call mmdet3d_anymodel_lidar_early.py, which creates a new folder under the cache folder. Doesn't that conflict with the rm -r of the cache? Thank you for your patience!

Compatibility with popular codebase

Appreciate your work to provide the first real-world cooperative perception dataset! I am wondering whether there is any possibility of making the dataset compatible with the current popular codebases, such as OpenPCDet or MMdet3D?

Error when creating KITTI-format data for sv3d-inf

When I ran the following command:

python tools/dataset_converter/dair2kitti.py --source-root ./data/DAIR-V2X/DAIR-V2X-I/single-infrastructure-side \
    --target-root ./data/DAIR-V2X/DAIR-V2X-I/single-infrastructure-side \
    --split-path ./data/split_datas/single-infrastructure-split-data.json \
    --label-type lidar --sensor-view infrastructure

The error was encountered as follows:

================ Start to Convert ================
================ Start to Copy Raw Data ================
================ Start to Generate Label ================
Traceback (most recent call last):
  File "tools/dataset_converter/dair2kitti.py", line 80, in <module>
    json2kitti(json_root, kitti_label_root)
  File "/home/shb/open-mmlab/dair-v2x/tools/dataset_converter/gen_kitti/label_json2kitti.py", line 36, in json2kitti
    write_kitti_in_txt(my_json, path_txt)
  File "/home/shb/open-mmlab/dair-v2x/tools/dataset_converter/gen_kitti/label_json2kitti.py", line 22, in write_kitti_in_txt
    i15 = str(-item["rotation"])
TypeError: bad operand type for unary -: 'str'
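The traceback shows that item["rotation"] is stored as a string in this label JSON, so the unary minus in write_kitti_in_txt fails. A hedged workaround, assuming the field holds a numeric string, is to cast before negating (negate_rotation is an illustrative helper, not a function from the repo):

```python
def negate_rotation(item):
    # some DAIR-V2X label JSONs appear to store "rotation" as a string,
    # so cast to float before flipping the sign for the KITTI label
    return str(-float(item["rotation"]))

print(negate_rotation({"rotation": "0.25"}))  # -> "-0.25"
```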

empty obj when browse dataset in mmdet3d framework

Hi, thanks a lot for your great contribution to the V2X area!
When I follow your instructions for late fusion, some problems occur.
I downloaded the dataset from the official website and converted it with dair2kitti.py; when I evaluate the pretrained checkpoints you provided, the mAP result is correct.
Then I wanted to train the model in mmdet3d, so I created the data with the following command:
python tools/create_data.py kitti --root-path ~/code/DAIR-V2X-main/data/DAIR-V2X/cooperative-vehicle-infrastructure/vehicle-side/ --out-dir ~/code/DAIR-V2X-main/data/DAIR-V2X/cooperative-vehicle-infrastructure/vehicle-side/
and the following files were generated successfully:
image
However, when I browse the dataset with the command:
python tools/misc/browse_dataset.py configs/pointpillars/trainval_config_v.py --output-dir tools/misc/veh_side/ --task det
the output xxx_points.obj files are empty when I visualize them, like this:
image
but there is data like this when I open the files as text:
image
and the xxx_gt.obj files look like this:
image
What could be the reason for this problem? I would appreciate it a lot if you could share your advice, thank you!

Error occurred when reimplementing MVXNet on the sv3d-inf dataset

When I ran the command in mmdection3d environment as follows:

python tools/train.py /HOME/scz3687/run/openmmlab-0.17.1/dair-v2x/configs/sv3d-inf/mvxnet/trainval_config.py --work-dir /HOME/scz3687/run/openmmlab-0.17.1/dair-v2x/work_dirs/sv3d_inf_mvxnet

The error was encountered as follows:

2022-07-27 10:18:51,994 - mmdet - INFO - workflow: [('train', 1)], max: 40 epochs
/HOME/scz3687/.conda/envs/open-mmlab/lib/python3.7/site-packages/torch/nn/functional.py:718: UserWarning: Named tensors and all their associated APIs are an experimental feature and subject to change. Please do not use them for anything important until they are released as stable. (Triggered internally at  /pytorch/c10/core/TensorImpl.h:1156.)
  return torch.max_pool2d(input, kernel_size, stride, padding, dilation, ceil_mode)
/data/run01/scz3687/openmmlab-0.17.1/mmdetection3d/mmdet3d/models/fusion_layers/coord_transform.py:34: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
  if 'pcd_rotation' in img_meta else torch.eye(
2022-07-27 10:19:13,395 - mmdet - INFO - Epoch [1][50/10084]	lr: 4.323e-04, eta: 1 day, 18:54:39, time: 0.383, data_time: 0.073, memory: 3474, loss_cls: 0.9940, loss_bbox: 3.7010, loss_dir: 0.1496, loss: 4.8445, grad_norm: 202.8270
2022-07-27 10:19:25,671 - mmdet - INFO - Epoch [1][100/10084]	lr: 5.673e-04, eta: 1 day, 11:12:17, time: 0.246, data_time: 0.002, memory: 3474, loss_cls: 0.8204, loss_bbox: 1.6195, loss_dir: 0.1448, loss: 2.5846, grad_norm: 31.3928
2022-07-27 10:19:37,977 - mmdet - INFO - Epoch [1][150/10084]	lr: 7.023e-04, eta: 1 day, 8:39:17, time: 0.246, data_time: 0.002, memory: 3474, loss_cls: 0.8210, loss_bbox: 1.4106, loss_dir: 0.1391, loss: 2.3707, grad_norm: 25.7827
2022-07-27 10:19:50,285 - mmdet - INFO - Epoch [1][200/10084]	lr: 8.373e-04, eta: 1 day, 7:22:47, time: 0.246, data_time: 0.003, memory: 3482, loss_cls: 0.8008, loss_bbox: 1.4836, loss_dir: 0.1374, loss: 2.4218, grad_norm: 24.7188
2022-07-27 10:20:02,640 - mmdet - INFO - Epoch [1][250/10084]	lr: 9.723e-04, eta: 1 day, 6:38:03, time: 0.247, data_time: 0.003, memory: 3482, loss_cls: 0.7371, loss_bbox: 1.5554, loss_dir: 0.1419, loss: 2.4344, grad_norm: 21.9774
2022-07-27 10:20:14,775 - mmdet - INFO - Epoch [1][300/10084]	lr: 1.107e-03, eta: 1 day, 6:03:15, time: 0.243, data_time: 0.002, memory: 3482, loss_cls: 0.6303, loss_bbox: 1.4387, loss_dir: 0.1448, loss: 2.2138, grad_norm: 21.0055
/HOME/scz3687/.conda/envs/open-mmlab/lib/python3.7/site-packages/mmcv/runner/hooks/optimizer.py:31: FutureWarning: Non-finite norm encountered in torch.nn.utils.clip_grad_norm_; continuing anyway. Note that the default behavior will change in a future release to error out if a non-finite total norm is encountered. At that point, setting error_if_nonfinite=false will be required to retain the old behavior.
  return clip_grad.clip_grad_norm_(params, **self.grad_clip)
2022-07-27 10:20:27,419 - mmdet - INFO - Epoch [1][350/10084]	lr: 1.242e-03, eta: 1 day, 5:48:06, time: 0.253, data_time: 0.003, memory: 3503, loss_cls: nan, loss_bbox: nan, loss_dir: nan, loss: nan, grad_norm: nan
Traceback (most recent call last):
  File "tools/train.py", line 225, in <module>
    main()
  File "tools/train.py", line 221, in main
    meta=meta)
  File "/data/run01/scz3687/openmmlab-0.17.1/mmdetection3d/mmdet3d/apis/train.py", line 35, in train_model
    meta=meta)
  File "/HOME/scz3687/.conda/envs/open-mmlab/lib/python3.7/site-packages/mmdet/apis/train.py", line 170, in train_detector
    runner.run(data_loaders, cfg.workflow)
  File "/HOME/scz3687/.conda/envs/open-mmlab/lib/python3.7/site-packages/mmcv/runner/epoch_based_runner.py", line 127, in run
    epoch_runner(data_loaders[i], **kwargs)
  File "/HOME/scz3687/.conda/envs/open-mmlab/lib/python3.7/site-packages/mmcv/runner/epoch_based_runner.py", line 51, in train
    self.call_hook('after_train_iter')
  File "/HOME/scz3687/.conda/envs/open-mmlab/lib/python3.7/site-packages/mmcv/runner/base_runner.py", line 307, in call_hook
    getattr(hook, fn_name)(self)
  File "/HOME/scz3687/.conda/envs/open-mmlab/lib/python3.7/site-packages/mmcv/runner/hooks/optimizer.py", line 35, in after_train_iter
    runner.outputs['loss'].backward()
  File "/HOME/scz3687/.conda/envs/open-mmlab/lib/python3.7/site-packages/torch/_tensor.py", line 255, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
  File "/HOME/scz3687/.conda/envs/open-mmlab/lib/python3.7/site-packages/torch/autograd/__init__.py", line 149, in backward
    allow_unreachable=True, accumulate_grad=True)  # allow_unreachable flag
  File "/HOME/scz3687/.conda/envs/open-mmlab/lib/python3.7/site-packages/torch/autograd/function.py", line 87, in apply
    return self._forward_cls.backward(self, *args)  # type: ignore[attr-defined]
  File "/data/run01/scz3687/openmmlab-0.17.1/mmdetection3d/mmdet3d/ops/voxel/scatter_points.py", line 46, in backward
    voxel_points_count, ctx.reduce_type)
RuntimeError: CUDA error: an illegal memory access was encountered

Missing annotations in the LiDAR-only labels

I noticed a problem: in the dataset provided on your official website, some LiDAR annotations are missing. I suspect the annotation was done according to occlusion in the camera view, which leads to the missing labels. If you check 000917.json and project its bounding boxes into the LiDAR coordinate system, you will find that some clearly visible nearby vehicles are unlabeled, which is not acceptable for LiDAR-only detection. I am very grateful for your open-source work, but this is really troubling. Finally, let me also mention the projection-inaccuracy problem.
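For reference, projecting label boxes into the LiDAR frame is just a rigid transform of their corner points. A minimal sketch (project_points is an illustrative helper; the corners, rotation, and translation below are made-up test values, not calibration from 000917.json):

```python
import numpy as np

def project_points(points, rotation, translation):
    """Apply a rigid transform p' = R @ p + t to (N, 3) points,
    e.g. to move label-box corners into the LiDAR frame."""
    R = np.asarray(rotation, dtype=float).reshape(3, 3)
    t = np.asarray(translation, dtype=float).reshape(1, 3)
    return np.asarray(points, dtype=float) @ R.T + t

# illustrative: two box corners, identity rotation, 2 m shift along x
corners = [[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]]
print(project_points(corners, np.eye(3), [2.0, 0.0, 0.0]))
```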

For vis_label_in_image.py

There is a problem when visualizing V2X-V (tools/visualize/vis_label_in_image.py, line 25). For this dataset, you should use
label_path = osp.join(path, data_info["label_camera_std_path"])
instead of
label_path = osp.join(path, data_info["label_lidar_std_path"])
to get the right image!
@haibao-yu

Error occurred when reimplementing PointPillars on the vic3d dataset

Hi, I have followed configs/vic3d/late-fusion-pointcloud/pointpillars/README.md to prepare the dataset, but when I try to train the PointPillars model with trainval_config_i.py, this error occurs:
No such file or directory: '../../../../data/DAIR-V2X/cooperative-vehicle-infrastructure/infrastructure-side//kitti_infos_train.pkl'

I have checked the converted dataset, and it indeed doesn't contain kitti_infos_train.pkl. How can I generate this file? Thank you!

tools/dataset_converter/point_cloud_i2v.py running too slow

Hi!

When I tried to convert point cloud data from the infrastructure coordinate system to the vehicle coordinate system following this guidance, I found the script ran too slowly: it took about 20 hours.

After optimizing it with vectorization and multi-processing, the whole conversion takes only about 20 minutes (I used 16 processes here).

The modified code is as follows; I hope it is correct and helpful.

import os
import json
import argparse
import numpy as np
from pypcd import pypcd
import open3d as o3d
from tqdm import tqdm
import errno

from concurrent import futures as futures


def read_json(path_json):
    with open(path_json, "r") as load_f:
        my_json = json.load(load_f)
    return my_json


def mkdir_p(path):
    try:
        os.makedirs(path)
    except OSError as exc:  # Python >2.5
        if exc.errno == errno.EEXIST and os.path.isdir(path):
            pass
        else:
            raise


def get_virtuallidar2world(path_virtuallidar2world):
    virtuallidar2world = read_json(path_virtuallidar2world)
    rotation = virtuallidar2world["rotation"]
    translation = virtuallidar2world["translation"]
    delta_x = virtuallidar2world["relative_error"]["delta_x"]
    delta_y = virtuallidar2world["relative_error"]["delta_y"]
    return rotation, translation, delta_x, delta_y


def get_novatel2world(path_novatel2world):
    novatel2world = read_json(path_novatel2world)
    rotation = novatel2world["rotation"]
    translation = novatel2world["translation"]
    return rotation, translation


def get_lidar2novatel(path_lidar2novatel):
    lidar2novatel = read_json(path_lidar2novatel)
    rotation = lidar2novatel["transform"]["rotation"]
    translation = lidar2novatel["transform"]["translation"]
    return rotation, translation


def get_data(data_info, path_pcd):
    for data in data_info:
        name1 = os.path.split(path_pcd)[-1]
        name2 = os.path.split(data["pointcloud_path"])[-1]
        if name1 == name2:
            return data


def trans(input_point, translation, rotation):
    input_point = np.array(input_point).reshape(3, -1)
    translation = np.array(translation).reshape(3, 1)
    rotation = np.array(rotation).reshape(3, 3)
    output_point = np.dot(rotation, input_point).reshape(3, -1) + np.array(translation).reshape(3, 1)
    return output_point


def rev_matrix(R):
    R = np.matrix(R)
    rev_R = R.I
    rev_R = np.array(rev_R)
    return rev_R


def trans_point_i2v(input_point, path_virtuallidar2world, path_novatel2world, path_lidar2novatel):
    # print('0:', input_point)

    # virtuallidar to world
    rotation, translation, delta_x, delta_y = get_virtuallidar2world(path_virtuallidar2world)
    point = trans(input_point, translation, rotation) + np.array([delta_x, delta_y, 0]).reshape(3, 1)
    """
    print('rotation, translation, delta_x, delta_y', rotation, translation, delta_x, delta_y)
    print('1:', point)
    """

    # world to novatel
    rotation, translation = get_novatel2world(path_novatel2world)
    new_rotation = rev_matrix(rotation)
    new_translation = -np.dot(new_rotation, translation)
    point = trans(point, new_translation, new_rotation)
    """
    print('rotation, translation:', rotation, translation)
    print('new_translation, new_rotation:', new_translation, new_rotation)
    print('2:', point)
    """

    # novatel to lidar
    rotation, translation = get_lidar2novatel(path_lidar2novatel)
    new_rotation = rev_matrix(rotation)
    new_translation = -np.dot(new_rotation, translation)
    point = trans(point, new_translation, new_rotation)
    """
    print('rotation, translation:', rotation, translation)
    print('new_translation, new_rotation:', new_translation, new_rotation)
    print('3:', point)
    """
    point = point.T

    return point


def read_pcd(path_pcd):
    pointpillar = o3d.io.read_point_cloud(path_pcd)
    points = np.asarray(pointpillar.points)
    return points


def show_pcd(path_pcd):
    # read_pcd() returns a bare numpy array, but draw_geometries expects
    # an open3d geometry, so load the point cloud object directly
    pcd = o3d.io.read_point_cloud(path_pcd)
    o3d.visualization.draw_geometries([pcd])


def write_pcd(path_pcd, new_points, path_save):
    pc = pypcd.PointCloud.from_path(path_pcd)
    pc.pc_data["x"] = new_points[:, 0]
    pc.pc_data["y"] = new_points[:, 1]
    pc.pc_data["z"] = new_points[:, 2]
    pc.save_pcd(path_save, compression="binary_compressed")


def trans_pcd_i2v(path_pcd, path_virtuallidar2world, path_novatel2world, path_lidar2novatel, path_save):
    # (n, 3)
    points = read_pcd(path_pcd)
    # (n, 3)
    new_points = trans_point_i2v(points.T, path_virtuallidar2world, path_novatel2world, path_lidar2novatel)
    write_pcd(path_pcd, new_points, path_save)

    
def map_func(data, path_c, path_dest, i_data_info, v_data_info):
    path_pcd_i = os.path.join(path_c, data["infrastructure_pointcloud_path"])
    path_pcd_v = os.path.join(path_c, data["vehicle_pointcloud_path"])
    i_data = get_data(i_data_info, path_pcd_i)
    v_data = get_data(v_data_info, path_pcd_v)
    path_virtuallidar2world = os.path.join(
        path_c, "infrastructure-side", i_data["calib_virtuallidar_to_world_path"]
    )
    path_novatel2world = os.path.join(path_c, "vehicle-side", v_data["calib_novatel_to_world_path"])
    path_lidar2novatel = os.path.join(path_c, "vehicle-side", v_data["calib_lidar_to_novatel_path"])
    name = os.path.split(path_pcd_i)[-1]
    path_save = os.path.join(path_dest, name)
    trans_pcd_i2v(path_pcd_i, path_virtuallidar2world, path_novatel2world, path_lidar2novatel, path_save)
    
    
def get_i2v(path_c, path_dest, num_worker):
    mkdir_p(path_dest)
    path_c_data_info = os.path.join(path_c, "cooperative/data_info.json")
    path_i_data_info = os.path.join(path_c, "infrastructure-side/data_info.json")
    path_v_data_info = os.path.join(path_c, "vehicle-side/data_info.json")
    c_data_info = read_json(path_c_data_info)
    i_data_info = read_json(path_i_data_info)
    v_data_info = read_json(path_v_data_info)
    
    total = len(c_data_info)
    with tqdm(total=total) as pbar:
        with futures.ProcessPoolExecutor(num_worker) as executor:
            res = [executor.submit(map_func, data, path_c, path_dest, i_data_info, v_data_info) for data in c_data_info]
            for _ in futures.as_completed(res):
                pbar.update(1)

                
parser = argparse.ArgumentParser("Convert The Point Cloud from Infrastructure to Ego-vehicle")
parser.add_argument(
    "--source-root",
    type=str,
    default="./data/DAIR-V2X/cooperative-vehicle-infrastructure",
    help="Raw data root about DAIR-V2X-C.",
)
parser.add_argument(
    "--target-root",
    type=str,
    default="./data/DAIR-V2X/cooperative-vehicle-infrastructure/vic3d-early-fusion/velodyne/lidar_i2v",
    help="The data root where the data with ego-vehicle coordinate is generated",
)
parser.add_argument(
    "--num-worker",
    type=int,
    default=1,
    help="Number of workers for multi-processing",
)

if __name__ == "__main__":
    args = parser.parse_args()
    source_root = args.source_root
    target_root = args.target_root
    num_worker = args.num_worker

    get_i2v(source_root, target_root, num_worker)
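The world-to-novatel and novatel-to-lidar steps above invert a rigid transform as R_inv = R^-1 and t_inv = -R^-1 @ t. That identity can be sanity-checked in isolation (the rotation, translation, and point below are arbitrary test data, not dataset calibration):

```python
import numpy as np

# arbitrary rigid transform: a rotation about z plus a translation
theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
t = np.array([[5.0], [-2.0], [1.0]])

p = np.array([[1.0], [2.0], [3.0]])  # a point in the source frame
q = R @ p + t                        # forward transform

# inverse transform, computed the same way as in trans_point_i2v above
R_inv = np.linalg.inv(R)
t_inv = -np.dot(R_inv, t)
p_back = np.dot(R_inv, q) + t_inv

print(p_back.ravel())  # round trip recovers the original point
```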

ImvoxelNet cannot be visualized

Do you have a visualization solution? @haibao-yu

mmdet3d/apis/test.py", line 52, in single_gpu_test
    if batch_size == 1 and isinstance(data['img'][0],
TypeError: 'DataContainer' object is not subscriptable

About ImvoxelNet

When training ImvoxelNet on the vehicle-infrastructure cooperative dataset, is the original model network used directly, or are there modifications? Is there any related introduction or documentation?

How is AB calculated?

Hi, I want to know how the AB is calculated.
The code is as follows:

def send(self, key, val):
    self.data[key] = val
    if isinstance(val, np.ndarray):
        cur_bytes = val.size * 8
    elif type(val) in [int, float]:
        cur_bytes = 8
    elif isinstance(val, list):
        cur_bytes = np.array(val).size * 8
    elif type(val) is str:
        cur_bytes = len(val)
    if key.endswith("boxes"):
        cur_bytes = cur_bytes * 7 / 24
    self.cur_bytes += cur_bytes

I want to know which is used in the calculation, the point cloud or the boxes?

For a point cloud, each point has 4 float values and each float takes up 4 bytes, so the AB of a point cloud would be N*4*4 (N is the number of points in each file).

For boxes, each box has 8 vertices and each vertex has 3 coordinates; if these are also floats, the AB of the boxes would be M*8*3*4 (M is the number of boxes in each file).

Am I doing the right thing? Or can you explain the logic of the source code?
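For what it's worth, the send() snippet counts 8 bytes per array element (float64, not 4-byte float32), and the 7/24 factor looks like it rescales an (M, 8, 3) corner representation (24 values per box) down to the 7 parameters (x, y, z, l, w, h, yaw) that actually need transmitting; this reading is inferred from the code, not confirmed by the authors. A small sketch of that accounting (payload_bytes is an illustrative rewrite, not repo code):

```python
import numpy as np

def payload_bytes(key, val):
    # mirror the send() accounting: 8 bytes per float64 element,
    # with box corners rescaled from 24 values to 7 parameters
    cur = val.size * 8
    if key.endswith("boxes"):
        cur = cur * 7 / 24
    return cur

boxes = np.zeros((10, 8, 3))                # 10 boxes, 8 corners x 3 coords
print(payload_bytes("pred_boxes", boxes))   # 10 * 24 * 8 * 7/24 = 560.0
print(payload_bytes("points", np.zeros((100, 4))))  # 100 * 4 * 8 = 3200
```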

Thank you!

Question about reproducing results using provided config file.

Hi, thanks for your inspiring work!
Following vic3d/late-fusion-pointcloud/pointpillars/README.md, the results reproduced with trainval_config_i.py and trainval_config_v.py are not as good as those obtained with your provided checkpoints (inf-model & veh-model).

The reproduced results:

car 3d IoU threshold 0.30, Average Precision = 60.87
car 3d IoU threshold 0.50, Average Precision = 49.24
car 3d IoU threshold 0.70, Average Precision = 29.07
car bev IoU threshold 0.30, Average Precision = 64.03
car bev IoU threshold 0.50, Average Precision = 54.57
car bev IoU threshold 0.70, Average Precision = 44.08
Average Communication Cost = 927.07 Bytes

Results from the provided checkpoints:

car 3d IoU threshold 0.30, Average Precision = 63.40
car 3d IoU threshold 0.50, Average Precision = 53.36
car 3d IoU threshold 0.70, Average Precision = 37.28
car bev IoU threshold 0.30, Average Precision = 65.26
car bev IoU threshold 0.50, Average Precision = 59.16
car bev IoU threshold 0.70, Average Precision = 50.53
Average Communication Cost = 897.99 Bytes

How to reproduce the results from your provided checkpoints?
Any advice would be greatly appreciated!

Exact version of environment configuration

Thanks for your great work!

Besides mmdet3d==0.17.1, I would like to know the exact environment configuration information, including the versions of cuda, pytorch, mmcv, mmdetection and mmseg. I want to reproduce your experimental results more accurately.

Question about creating Kitti-format data

When I activate the mmdetection3d virtual environment, and run the following code:

python tools/dataset_converter/dair2kitti.py --source-root ./data/DAIR-V2X/cooperative-vehicle-infrastructure/infrastructure-side \
    --target-root ./data/DAIR-V2X/cooperative-vehicle-infrastructure/infrastructure-side \
    --split-path ./data/split_datas/cooperative-split-data.json \
    --label-type lidar --sensor-view infrastructure --no-classmerge

python tools/dataset_converter/dair2kitti.py --source-root ./data/DAIR-V2X/cooperative-vehicle-infrastructure/vehicle-side \
    --target-root ./data/DAIR-V2X/cooperative-vehicle-infrastructure/vehicle-side \
    --split-path ./data/split_datas/cooperative-split-data.json \
    --label-type lidar --sensor-view vehicle --no-classmerge

All of the output is shown below:

================ Start to Convert ================
================ Start to Copy Raw Data ================

Apart from copying the raw data, the remaining steps appear to terminate without output. May I know what the reason is?

Fusion after multimodal object detection

Hello, thank you for your work on vehicle-road collaboration. Can both the vehicle side and the roadside run a multi-modal object detection algorithm and then perform late fusion? Is there any related documentation? Looking forward to your reply, thank you!

late fusion

Hello, I'd like to ask about the image-only late-fusion part of the code: where is the fusion actually implemented? Browsing the late-fusion code, I can only find the separate detection and evaluation for the vehicle side and the roadside. Where in the code is the result-level fusion of the two performed? Thanks!

Building wheel for mmdet3d (setup.py) ... error

Thanks for your great work!
I configured mmdet3d==0.17.1, but I got an error.

Reproduces the problem - error message

Building wheels for collected packages: mmdet3d
Building wheel for mmdet3d (setup.py) ... error
error: subprocess-exited-with-error

× python setup.py bdist_wheel did not run successfully.
│ exit code: 1
╰─> [667 lines of output]
running bdist_wheel
running build
running build_py
creating build
creating build/lib.linux-x86_64-3.7
creating build/lib.linux-x86_64-3.7/mmdet3d
copying mmdet3d/version.py -> build/lib.linux-x86_64-3.7/mmdet3d
copying mmdet3d/init.py -> build/lib.linux-x86_64-3.7/mmdet3d
creating build/lib.linux-x86_64-3.7/mmdet3d/core
......
......
warning: no files found matching '*.cpp' under directory 'mmdet3d/.mim/ops'
warning: no files found matching '*.cu' under directory 'mmdet3d/.mim/ops'
warning: no files found matching '*.h' under directory 'mmdet3d/.mim/ops'
warning: no files found matching '*.cc' under directory 'mmdet3d/.mim/ops'
......
......
creating /tmp/pip-install-ccjvyqgo/mmdet3d_456ca4f41b924818bfb4f86dfe821742/build/temp.linux-x86_64-3.7/mmdet3d/ops/spconv/src
Emitting ninja build file /tmp/pip-install-ccjvyqgo/mmdet3d_456ca4f41b924818bfb4f86dfe821742/build/temp.linux-x86_64-3.7/build.ninja...
Compiling objects...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
[1/7] c++ -MMD -MF /tmp/pip-install-ccjvyqgo/mmdet3d_456ca4f41b924818bfb4f86dfe821742/build/temp.linux-x86_64-3.7/mmdet3d/ops/spconv/src/indice.o.d -pthread -B /home/wxr/miniconda3/envs/openmmlaba/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -DWITH_CUDA -I/tmp/pip-install-ccjvyqgo/mmdet3d_456ca4f41b924818bfb4f86dfe821742/mmdet3d/ops/spconv/include -I/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/site-packages/torch/include -I/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -I/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/site-packages/torch/include/TH -I/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/site-packages/torch/include/THC -I/usr/local/cuda-11.1/include -I/home/wxr/miniconda3/envs/openmmlaba/include/python3.7m -c -c /tmp/pip-install-ccjvyqgo/mmdet3d_456ca4f41b924818bfb4f86dfe821742/mmdet3d/ops/spconv/src/indice.cc -o /tmp/pip-install-ccjvyqgo/mmdet3d_456ca4f41b924818bfb4f86dfe821742/build/temp.linux-x86_64-3.7/mmdet3d/ops/spconv/src/indice.o -w -std=c++14 -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="gcc"' '-DPYBIND11_STDLIB="libstdcpp"' '-DPYBIND11_BUILD_ABI="cxxabi1011"' -DTORCH_EXTENSION_NAME=sparse_conv_ext -D_GLIBCXX_USE_CXX11_ABI=0
FAILED: /tmp/pip-install-ccjvyqgo/mmdet3d_456ca4f41b924818bfb4f86dfe821742/build/temp.linux-x86_64-3.7/mmdet3d/ops/spconv/src/indice.o
c++ -MMD -MF /tmp/pip-install-ccjvyqgo/mmdet3d_456ca4f41b924818bfb4f86dfe821742/build/temp.linux-x86_64-3.7/mmdet3d/ops/spconv/src/indice.o.d -pthread -B /home/wxr/miniconda3/envs/openmmlaba/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -DWITH_CUDA -I/tmp/pip-install-ccjvyqgo/mmdet3d_456ca4f41b924818bfb4f86dfe821742/mmdet3d/ops/spconv/include -I/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/site-packages/torch/include -I/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -I/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/site-packages/torch/include/TH -I/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/site-packages/torch/include/THC -I/usr/local/cuda-11.1/include -I/home/wxr/miniconda3/envs/openmmlaba/include/python3.7m -c -c /tmp/pip-install-ccjvyqgo/mmdet3d_456ca4f41b924818bfb4f86dfe821742/mmdet3d/ops/spconv/src/indice.cc -o /tmp/pip-install-ccjvyqgo/mmdet3d_456ca4f41b924818bfb4f86dfe821742/build/temp.linux-x86_64-3.7/mmdet3d/ops/spconv/src/indice.o -w -std=c++14 -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="gcc"' '-DPYBIND11_STDLIB="libstdcpp"' '-DPYBIND11_BUILD_ABI="cxxabi1011"' -DTORCH_EXTENSION_NAME=sparse_conv_ext -D_GLIBCXX_USE_CXX11_ABI=0
cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
/tmp/pip-install-ccjvyqgo/mmdet3d_456ca4f41b924818bfb4f86dfe821742/mmdet3d/ops/spconv/src/indice.cc:15:10: fatal error: spconv/geometry.h: No such file or directory
#include <spconv/geometry.h>
^~~~~~~~~~~~~~~~~~~
compilation terminated.
[2/7] c++ -MMD -MF /tmp/pip-install-ccjvyqgo/mmdet3d_456ca4f41b924818bfb4f86dfe821742/build/temp.linux-x86_64-3.7/mmdet3d/ops/spconv/src/reordering.o.d -pthread -B /home/wxr/miniconda3/envs/openmmlaba/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -DWITH_CUDA -I/tmp/pip-install-ccjvyqgo/mmdet3d_456ca4f41b924818bfb4f86dfe821742/mmdet3d/ops/spconv/include -I/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/site-packages/torch/include -I/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -I/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/site-packages/torch/include/TH -I/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/site-packages/torch/include/THC -I/usr/local/cuda-11.1/include -I/home/wxr/miniconda3/envs/openmmlaba/include/python3.7m -c -c /tmp/pip-install-ccjvyqgo/mmdet3d_456ca4f41b924818bfb4f86dfe821742/mmdet3d/ops/spconv/src/reordering.cc -o /tmp/pip-install-ccjvyqgo/mmdet3d_456ca4f41b924818bfb4f86dfe821742/build/temp.linux-x86_64-3.7/mmdet3d/ops/spconv/src/reordering.o -w -std=c++14 -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="gcc"' '-DPYBIND11_STDLIB="libstdcpp"' '-DPYBIND11_BUILD_ABI="cxxabi1011"' -DTORCH_EXTENSION_NAME=sparse_conv_ext -D_GLIBCXX_USE_CXX11_ABI=0
FAILED: /tmp/pip-install-ccjvyqgo/mmdet3d_456ca4f41b924818bfb4f86dfe821742/build/temp.linux-x86_64-3.7/mmdet3d/ops/spconv/src/reordering.o
c++ -MMD -MF /tmp/pip-install-ccjvyqgo/mmdet3d_456ca4f41b924818bfb4f86dfe821742/build/temp.linux-x86_64-3.7/mmdet3d/ops/spconv/src/reordering.o.d -pthread -B /home/wxr/miniconda3/envs/openmmlaba/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -DWITH_CUDA -I/tmp/pip-install-ccjvyqgo/mmdet3d_456ca4f41b924818bfb4f86dfe821742/mmdet3d/ops/spconv/include -I/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/site-packages/torch/include -I/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -I/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/site-packages/torch/include/TH -I/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/site-packages/torch/include/THC -I/usr/local/cuda-11.1/include -I/home/wxr/miniconda3/envs/openmmlaba/include/python3.7m -c -c /tmp/pip-install-ccjvyqgo/mmdet3d_456ca4f41b924818bfb4f86dfe821742/mmdet3d/ops/spconv/src/reordering.cc -o /tmp/pip-install-ccjvyqgo/mmdet3d_456ca4f41b924818bfb4f86dfe821742/build/temp.linux-x86_64-3.7/mmdet3d/ops/spconv/src/reordering.o -w -std=c++14 -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="gcc"' '-DPYBIND11_STDLIB="libstdcpp"' '-DPYBIND11_BUILD_ABI="cxxabi1011"' -DTORCH_EXTENSION_NAME=sparse_conv_ext -D_GLIBCXX_USE_CXX11_ABI=0
cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
/tmp/pip-install-ccjvyqgo/mmdet3d_456ca4f41b924818bfb4f86dfe821742/mmdet3d/ops/spconv/src/reordering.cc:15:10: fatal error: spconv/reordering.h: No such file or directory
#include <spconv/reordering.h>
^~~~~~~~~~~~~~~~~~~~~
compilation terminated.
[3/7] c++ -MMD -MF /tmp/pip-install-ccjvyqgo/mmdet3d_456ca4f41b924818bfb4f86dfe821742/build/temp.linux-x86_64-3.7/mmdet3d/ops/spconv/src/maxpool.o.d -pthread -B /home/wxr/miniconda3/envs/openmmlaba/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -DWITH_CUDA -I/tmp/pip-install-ccjvyqgo/mmdet3d_456ca4f41b924818bfb4f86dfe821742/mmdet3d/ops/spconv/include -I/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/site-packages/torch/include -I/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -I/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/site-packages/torch/include/TH -I/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/site-packages/torch/include/THC -I/usr/local/cuda-11.1/include -I/home/wxr/miniconda3/envs/openmmlaba/include/python3.7m -c -c /tmp/pip-install-ccjvyqgo/mmdet3d_456ca4f41b924818bfb4f86dfe821742/mmdet3d/ops/spconv/src/maxpool.cc -o /tmp/pip-install-ccjvyqgo/mmdet3d_456ca4f41b924818bfb4f86dfe821742/build/temp.linux-x86_64-3.7/mmdet3d/ops/spconv/src/maxpool.o -w -std=c++14 -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="gcc"' '-DPYBIND11_STDLIB="libstdcpp"' '-DPYBIND11_BUILD_ABI="cxxabi1011"' -DTORCH_EXTENSION_NAME=sparse_conv_ext -D_GLIBCXX_USE_CXX11_ABI=0
FAILED: /tmp/pip-install-ccjvyqgo/mmdet3d_456ca4f41b924818bfb4f86dfe821742/build/temp.linux-x86_64-3.7/mmdet3d/ops/spconv/src/maxpool.o
c++ -MMD -MF /tmp/pip-install-ccjvyqgo/mmdet3d_456ca4f41b924818bfb4f86dfe821742/build/temp.linux-x86_64-3.7/mmdet3d/ops/spconv/src/maxpool.o.d -pthread -B /home/wxr/miniconda3/envs/openmmlaba/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -DWITH_CUDA -I/tmp/pip-install-ccjvyqgo/mmdet3d_456ca4f41b924818bfb4f86dfe821742/mmdet3d/ops/spconv/include -I/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/site-packages/torch/include -I/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -I/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/site-packages/torch/include/TH -I/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/site-packages/torch/include/THC -I/usr/local/cuda-11.1/include -I/home/wxr/miniconda3/envs/openmmlaba/include/python3.7m -c -c /tmp/pip-install-ccjvyqgo/mmdet3d_456ca4f41b924818bfb4f86dfe821742/mmdet3d/ops/spconv/src/maxpool.cc -o /tmp/pip-install-ccjvyqgo/mmdet3d_456ca4f41b924818bfb4f86dfe821742/build/temp.linux-x86_64-3.7/mmdet3d/ops/spconv/src/maxpool.o -w -std=c++14 -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="gcc"' '-DPYBIND11_STDLIB="libstdcpp"' '-DPYBIND11_BUILD_ABI="cxxabi1011"' -DTORCH_EXTENSION_NAME=sparse_conv_ext -D_GLIBCXX_USE_CXX11_ABI=0
cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
/tmp/pip-install-ccjvyqgo/mmdet3d_456ca4f41b924818bfb4f86dfe821742/mmdet3d/ops/spconv/src/maxpool.cc:15:10: fatal error: spconv/maxpool.h: No such file or directory
#include <spconv/maxpool.h>
^~~~~~~~~~~~~~~~~~
compilation terminated.
[4/7] c++ -MMD -MF /tmp/pip-install-ccjvyqgo/mmdet3d_456ca4f41b924818bfb4f86dfe821742/build/temp.linux-x86_64-3.7/mmdet3d/ops/spconv/src/all.o.d -pthread -B /home/wxr/miniconda3/envs/openmmlaba/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -DWITH_CUDA -I/tmp/pip-install-ccjvyqgo/mmdet3d_456ca4f41b924818bfb4f86dfe821742/mmdet3d/ops/spconv/include -I/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/site-packages/torch/include -I/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -I/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/site-packages/torch/include/TH -I/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/site-packages/torch/include/THC -I/usr/local/cuda-11.1/include -I/home/wxr/miniconda3/envs/openmmlaba/include/python3.7m -c -c /tmp/pip-install-ccjvyqgo/mmdet3d_456ca4f41b924818bfb4f86dfe821742/mmdet3d/ops/spconv/src/all.cc -o /tmp/pip-install-ccjvyqgo/mmdet3d_456ca4f41b924818bfb4f86dfe821742/build/temp.linux-x86_64-3.7/mmdet3d/ops/spconv/src/all.o -w -std=c++14 -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="gcc"' '-DPYBIND11_STDLIB="libstdcpp"' '-DPYBIND11_BUILD_ABI="cxxabi1011"' -DTORCH_EXTENSION_NAME=sparse_conv_ext -D_GLIBCXX_USE_CXX11_ABI=0
FAILED: /tmp/pip-install-ccjvyqgo/mmdet3d_456ca4f41b924818bfb4f86dfe821742/build/temp.linux-x86_64-3.7/mmdet3d/ops/spconv/src/all.o
c++ -MMD -MF /tmp/pip-install-ccjvyqgo/mmdet3d_456ca4f41b924818bfb4f86dfe821742/build/temp.linux-x86_64-3.7/mmdet3d/ops/spconv/src/all.o.d -pthread -B /home/wxr/miniconda3/envs/openmmlaba/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -DWITH_CUDA -I/tmp/pip-install-ccjvyqgo/mmdet3d_456ca4f41b924818bfb4f86dfe821742/mmdet3d/ops/spconv/include -I/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/site-packages/torch/include -I/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -I/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/site-packages/torch/include/TH -I/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/site-packages/torch/include/THC -I/usr/local/cuda-11.1/include -I/home/wxr/miniconda3/envs/openmmlaba/include/python3.7m -c -c /tmp/pip-install-ccjvyqgo/mmdet3d_456ca4f41b924818bfb4f86dfe821742/mmdet3d/ops/spconv/src/all.cc -o /tmp/pip-install-ccjvyqgo/mmdet3d_456ca4f41b924818bfb4f86dfe821742/build/temp.linux-x86_64-3.7/mmdet3d/ops/spconv/src/all.o -w -std=c++14 -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="gcc"' '-DPYBIND11_STDLIB="libstdcpp"' '-DPYBIND11_BUILD_ABI="cxxabi1011"' -DTORCH_EXTENSION_NAME=sparse_conv_ext -D_GLIBCXX_USE_CXX11_ABI=0
cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
/tmp/pip-install-ccjvyqgo/mmdet3d_456ca4f41b924818bfb4f86dfe821742/mmdet3d/ops/spconv/src/all.cc:16:10: fatal error: spconv/fused_spconv_ops.h: No such file or directory
#include <spconv/fused_spconv_ops.h>
^~~~~~~~~~~~~~~~~~~~~~~~~~~
compilation terminated.
[5/7] /usr/local/cuda-11.1/bin/nvcc -DWITH_CUDA -I/tmp/pip-install-ccjvyqgo/mmdet3d_456ca4f41b924818bfb4f86dfe821742/mmdet3d/ops/spconv/include -I/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/site-packages/torch/include -I/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -I/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/site-packages/torch/include/TH -I/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/site-packages/torch/include/THC -I/usr/local/cuda-11.1/include -I/home/wxr/miniconda3/envs/openmmlaba/include/python3.7m -c -c /tmp/pip-install-ccjvyqgo/mmdet3d_456ca4f41b924818bfb4f86dfe821742/mmdet3d/ops/spconv/src/maxpool_cuda.cu -o /tmp/pip-install-ccjvyqgo/mmdet3d_456ca4f41b924818bfb4f86dfe821742/build/temp.linux-x86_64-3.7/mmdet3d/ops/spconv/src/maxpool_cuda.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -w -std=c++14 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=sparse_conv_ext -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_61,code=compute_61 -gencode=arch=compute_61,code=sm_61
FAILED: /tmp/pip-install-ccjvyqgo/mmdet3d_456ca4f41b924818bfb4f86dfe821742/build/temp.linux-x86_64-3.7/mmdet3d/ops/spconv/src/maxpool_cuda.o
/usr/local/cuda-11.1/bin/nvcc -DWITH_CUDA -I/tmp/pip-install-ccjvyqgo/mmdet3d_456ca4f41b924818bfb4f86dfe821742/mmdet3d/ops/spconv/include -I/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/site-packages/torch/include -I/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -I/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/site-packages/torch/include/TH -I/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/site-packages/torch/include/THC -I/usr/local/cuda-11.1/include -I/home/wxr/miniconda3/envs/openmmlaba/include/python3.7m -c -c /tmp/pip-install-ccjvyqgo/mmdet3d_456ca4f41b924818bfb4f86dfe821742/mmdet3d/ops/spconv/src/maxpool_cuda.cu -o /tmp/pip-install-ccjvyqgo/mmdet3d_456ca4f41b924818bfb4f86dfe821742/build/temp.linux-x86_64-3.7/mmdet3d/ops/spconv/src/maxpool_cuda.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -w -std=c++14 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=sparse_conv_ext -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_61,code=compute_61 -gencode=arch=compute_61,code=sm_61
/tmp/pip-install-ccjvyqgo/mmdet3d_456ca4f41b924818bfb4f86dfe821742/mmdet3d/ops/spconv/src/maxpool_cuda.cu:16:10: fatal error: spconv/maxpool.h: No such file or directory
#include <spconv/maxpool.h>
^~~~~~~~~~~~~~~~~~
compilation terminated.
[6/7] /usr/local/cuda-11.1/bin/nvcc -DWITH_CUDA -I/tmp/pip-install-ccjvyqgo/mmdet3d_456ca4f41b924818bfb4f86dfe821742/mmdet3d/ops/spconv/include -I/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/site-packages/torch/include -I/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -I/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/site-packages/torch/include/TH -I/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/site-packages/torch/include/THC -I/usr/local/cuda-11.1/include -I/home/wxr/miniconda3/envs/openmmlaba/include/python3.7m -c -c /tmp/pip-install-ccjvyqgo/mmdet3d_456ca4f41b924818bfb4f86dfe821742/mmdet3d/ops/spconv/src/reordering_cuda.cu -o /tmp/pip-install-ccjvyqgo/mmdet3d_456ca4f41b924818bfb4f86dfe821742/build/temp.linux-x86_64-3.7/mmdet3d/ops/spconv/src/reordering_cuda.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -w -std=c++14 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=sparse_conv_ext -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_61,code=compute_61 -gencode=arch=compute_61,code=sm_61
FAILED: /tmp/pip-install-ccjvyqgo/mmdet3d_456ca4f41b924818bfb4f86dfe821742/build/temp.linux-x86_64-3.7/mmdet3d/ops/spconv/src/reordering_cuda.o
/usr/local/cuda-11.1/bin/nvcc -DWITH_CUDA -I/tmp/pip-install-ccjvyqgo/mmdet3d_456ca4f41b924818bfb4f86dfe821742/mmdet3d/ops/spconv/include -I/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/site-packages/torch/include -I/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -I/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/site-packages/torch/include/TH -I/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/site-packages/torch/include/THC -I/usr/local/cuda-11.1/include -I/home/wxr/miniconda3/envs/openmmlaba/include/python3.7m -c -c /tmp/pip-install-ccjvyqgo/mmdet3d_456ca4f41b924818bfb4f86dfe821742/mmdet3d/ops/spconv/src/reordering_cuda.cu -o /tmp/pip-install-ccjvyqgo/mmdet3d_456ca4f41b924818bfb4f86dfe821742/build/temp.linux-x86_64-3.7/mmdet3d/ops/spconv/src/reordering_cuda.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -w -std=c++14 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=sparse_conv_ext -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_61,code=compute_61 -gencode=arch=compute_61,code=sm_61
/tmp/pip-install-ccjvyqgo/mmdet3d_456ca4f41b924818bfb4f86dfe821742/mmdet3d/ops/spconv/src/reordering_cuda.cu:16:10: fatal error: spconv/mp_helper.h: No such file or directory
#include <spconv/mp_helper.h>
^~~~~~~~~~~~~~~~~~~~
compilation terminated.
[7/7] /usr/local/cuda-11.1/bin/nvcc -DWITH_CUDA -I/tmp/pip-install-ccjvyqgo/mmdet3d_456ca4f41b924818bfb4f86dfe821742/mmdet3d/ops/spconv/include -I/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/site-packages/torch/include -I/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -I/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/site-packages/torch/include/TH -I/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/site-packages/torch/include/THC -I/usr/local/cuda-11.1/include -I/home/wxr/miniconda3/envs/openmmlaba/include/python3.7m -c -c /tmp/pip-install-ccjvyqgo/mmdet3d_456ca4f41b924818bfb4f86dfe821742/mmdet3d/ops/spconv/src/indice_cuda.cu -o /tmp/pip-install-ccjvyqgo/mmdet3d_456ca4f41b924818bfb4f86dfe821742/build/temp.linux-x86_64-3.7/mmdet3d/ops/spconv/src/indice_cuda.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -w -std=c++14 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=sparse_conv_ext -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_61,code=compute_61 -gencode=arch=compute_61,code=sm_61
FAILED: /tmp/pip-install-ccjvyqgo/mmdet3d_456ca4f41b924818bfb4f86dfe821742/build/temp.linux-x86_64-3.7/mmdet3d/ops/spconv/src/indice_cuda.o
/usr/local/cuda-11.1/bin/nvcc -DWITH_CUDA -I/tmp/pip-install-ccjvyqgo/mmdet3d_456ca4f41b924818bfb4f86dfe821742/mmdet3d/ops/spconv/include -I/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/site-packages/torch/include -I/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -I/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/site-packages/torch/include/TH -I/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/site-packages/torch/include/THC -I/usr/local/cuda-11.1/include -I/home/wxr/miniconda3/envs/openmmlaba/include/python3.7m -c -c /tmp/pip-install-ccjvyqgo/mmdet3d_456ca4f41b924818bfb4f86dfe821742/mmdet3d/ops/spconv/src/indice_cuda.cu -o /tmp/pip-install-ccjvyqgo/mmdet3d_456ca4f41b924818bfb4f86dfe821742/build/temp.linux-x86_64-3.7/mmdet3d/ops/spconv/src/indice_cuda.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -w -std=c++14 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=sparse_conv_ext -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_61,code=compute_61 -gencode=arch=compute_61,code=sm_61
/tmp/pip-install-ccjvyqgo/mmdet3d_456ca4f41b924818bfb4f86dfe821742/mmdet3d/ops/spconv/src/indice_cuda.cu:16:10: fatal error: spconv/indice.cu.h: No such file or directory
#include <spconv/indice.cu.h>
^~~~~~~~~~~~~~~~~~~~
compilation terminated.
ninja: build stopped: subcommand failed.
Traceback (most recent call last):
File "/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1672, in _run_ninja_build
env=env)
File "/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/subprocess.py", line 512, in run
output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "", line 36, in
File "", line 34, in
File "/tmp/pip-install-ccjvyqgo/mmdet3d_456ca4f41b924818bfb4f86dfe821742/setup.py", line 312, in
zip_safe=False)
File "/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/site-packages/setuptools/init.py", line 153, in setup
return distutils.core.setup(**attrs)
File "/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/distutils/core.py", line 148, in setup
dist.run_commands()
File "/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/distutils/dist.py", line 966, in run_commands
self.run_command(cmd)
File "/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/site-packages/wheel/bdist_wheel.py", line 299, in run
self.run_command('build')
File "/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/distutils/command/build.py", line 135, in run
self.run_command(cmd_name)
File "/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/site-packages/setuptools/command/build_ext.py", line 79, in run
_build_ext.run(self)
File "/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/site-packages/Cython/Distutils/old_build_ext.py", line 186, in run
_build_ext.build_ext.run(self)
File "/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/distutils/command/build_ext.py", line 340, in run
self.build_extensions()
File "/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 709, in build_extensions
build_ext.build_extensions(self)
File "/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/site-packages/Cython/Distutils/old_build_ext.py", line 195, in build_extensions
_build_ext.build_ext.build_extensions(self)
File "/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/distutils/command/build_ext.py", line 449, in build_extensions
self._build_extensions_serial()
File "/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/distutils/command/build_ext.py", line 474, in _build_extensions_serial
self.build_extension(ext)
File "/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/site-packages/setuptools/command/build_ext.py", line 202, in build_extension
_build_ext.build_extension(self, ext)
File "/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/distutils/command/build_ext.py", line 534, in build_extension
depends=ext.depends)
File "/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 539, in unix_wrap_ninja_compile
with_cuda=with_cuda)
File "/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1360, in _write_ninja_file_and_compile_objects
error_prefix='Error compiling objects for extension')
File "/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1682, in _run_ninja_build
raise RuntimeError(message) from e
RuntimeError: Error compiling objects for extension
[end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
hint: See above for output from the failure.

Thank you very much for your help!

How is the image resized to 1920 by 1080? LiDAR-on-image alignment

Thank you for responding to issues recently.
I wonder how the image is resized to 1920 by 1080 when the camera sensor is 4096x2160. Also, does this process affect the alignment of LiDAR points on the image? For example, for the stationary object below, why are the signs not aligned correctly when projected with the provided intrinsic matrix? Since the object is stationary, the misalignment shouldn't be due to a time difference between the LiDAR capture time and the image capture time.
image
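For reference, a resize from 4096x2160 to 1920x1080 is not uniform (1920/4096 = 0.46875 horizontally vs 1080/2160 = 0.5 vertically), so the intrinsic matrix has to be rescaled per axis before projecting LiDAR points. A minimal sketch with made-up intrinsic values (the real matrix comes from the dataset's calibration files):

```python
import numpy as np

# Hypothetical intrinsics for a 4096x2160 sensor (illustrative values only;
# in practice, read the matrix from the dataset's calibration files).
K_full = np.array([
    [3000.0,    0.0, 2048.0],
    [   0.0, 3000.0, 1080.0],
    [   0.0,    0.0,    1.0],
])

sx = 1920 / 4096  # horizontal scale factor, 0.46875
sy = 1080 / 2160  # vertical scale factor, 0.5

# Scaling the first two rows rescales (fx, cx) and (fy, cy) respectively.
K_resized = np.diag([sx, sy, 1.0]) @ K_full
print(K_resized)
```

If the published intrinsics correspond to one of the two resolutions while the image is at the other, reprojected LiDAR points will be shifted even in a perfectly static scene, so this is worth ruling out before suspecting time synchronization.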

Table 5 in your paper: the vehicle types

Hello, thank you for your outstanding work. Could you please answer one question: have "Vehicle", "Pedestrian", and "Cyclist" been redefined, as shown below? I ask because I tried DAIR-V2X-I detection in OpenPCDet with PointPillars and got different results.
image
image
My results for DAIR-V2X-I detection in OpenPCDet with PointPillars:
image

Questions about the model weights and the dataset

May I ask roughly when the dataset challenge will be released?
And roughly when will the ImvoxelNet .pth weight file for the vehicle-infrastructure cooperative dataset be available?

Environment: mmdetection3d

Hi, thanks for your excellent work!
I want to know how to install mmdetection3d==0.17.1; the mmdetection3d tutorial doesn't mention that.

Questions about data preprocessing

Thanks for your contributions to this cooperative 3D object detection dataset. I wonder how to process the original dataset so that it fits the training pipeline of MMDetection3D.
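One common route (a sketch under assumptions, not the repository's official converter) is to rewrite each DAIR-V2X JSON label into a KITTI-style text line so MMDetection3D's existing KITTI pipeline can consume it. The field names below follow the dataset's label schema; truncation, occlusion, and alpha handling are deliberately simplified, and the camera-frame transform of the location is omitted:

```python
import json

def dair_label_to_kitti_line(obj):
    """Map one DAIR-V2X label dict to a KITTI-style label line.

    Note: KITTI expects the 3D location in camera coordinates, so the
    LiDAR-frame location must first be transformed using the calibration
    files (omitted here for brevity).
    """
    b = obj["2d_box"]
    d = obj["3d_dimensions"]   # h, w, l in metres
    p = obj["3d_location"]     # x, y, z
    fields = [
        obj["type"].capitalize(),
        obj.get("truncated_state", 0),
        obj.get("occluded_state", 0),
        obj.get("alpha", 0.0),
        b["xmin"], b["ymin"], b["xmax"], b["ymax"],
        d["h"], d["w"], d["l"],
        p["x"], p["y"], p["z"],
        obj["rotation"],
    ]
    return " ".join(str(f) for f in fields)

def convert_file(json_path, txt_path):
    # One JSON file holds the list of objects for a single frame.
    with open(json_path) as f:
        objs = json.load(f)
    with open(txt_path, "w") as f:
        f.write("\n".join(dair_label_to_kitti_line(o) for o in objs))
```

After conversion, the usual `create_data` info-file generation for KITTI-format datasets should apply with minor path adjustments.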

The velocities in TCLF framework

Hi, according to the README in "configs/vic3d/late-fusion-pointcloud/pointpillars/", performance is evaluated directly after obtaining two models, trained on the infrastructure side and the vehicle side respectively.
So when is the "MLP to predict their velocities" shown in Figure 3 trained?

Looking forward to your reply. Thank you.

The released dataset is incomplete: its size differs considerably from the statistics on the official website

Hello, the official website states that the dataset comprises 71,254 frames of image data and 71,254 frames of point cloud data in total: the DAIR-V2X cooperative dataset (DAIR-V2X-C) contains 38,845 frames of images and point clouds; the DAIR-V2X infrastructure-side dataset (DAIR-V2X-I) contains 10,084 frames of images and point clouds; and the DAIR-V2X vehicle-side dataset (DAIR-V2X-V) contains 22,325 frames of images and point clouds. However, the DAIR-V2X-V I downloaded from the official website has only 15,627 frames of point clouds and images, and DAIR-V2X-I has only 7,058. Is what I downloaded just the training split, or is there a later update of the dataset that has not been released yet?
image
image
image

Changed 000010.json into 000010.txt

Hello, this is the result when I converted 000010.json into 000010.txt while re-implementing early fusion. Comparing the two, all fields are the same except x, y, and z, which differ considerably. Is this normal?

{"type": "car", "occluded_state": 1, "truncated_state": 0, "alpha": -1.4830296047472067, "2d_box": {"xmin": 941.608582, "ymin": 564.241882, "xmax": 961.3516239999999, "ymax": 579.840454}, "3d_dimensions": {"h": 1.511236, "w": 1.775471, "l": 4.389911}, "3d_location": {"x": 92.3319499999998, "y": 9.902099000000167, "z": -0.819973200000002}, "rotation": 0.019190620000032486, "world_8_points": [[94.50946618443342, 10.831791008931262, -1.5755911999999999], [94.5435364823912, 9.056646934034948, -1.5755911999999959], [90.1544338155663, 8.972406991068505, -1.5755912000000005], [90.12036351760852, 10.747551065964819, -1.5755912000000045], [94.50946618443376, 10.831791008930919, -0.06435519999999673], [94.54353648239108, 9.056646934035058, -0.06435519999999925], [90.15443381556663, 8.972406991068617, -0.06435520000000103], [90.12036351760862, 10.747551065964931, -0.06435520000000294]]}.

Car 0 1 -1.4830296047472367 941.608582 564.241882 961.3516239999999 579.840454 1.511236 4.389911 1.775471 -4.954665545984375 -0.12774647902019565 92.9216698325848 -0.019190620000032486
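For what it's worth, an x/y/z difference is expected if one file stores the location in the LiDAR frame while the other stores it in the camera frame, as the KITTI text format does. A minimal sketch with a hypothetical LiDAR-to-camera extrinsic matrix (the real one, with a proper rotation and translation, comes from the calibration files):

```python
import numpy as np

# Hypothetical LiDAR->camera extrinsics for illustration only:
# camera x = -y_lidar, camera y = -z_lidar, camera z = x_lidar
# (a typical axis permutation, with zero translation).
Tr_velo_to_cam = np.array([
    [0.0, -1.0,  0.0, 0.0],
    [0.0,  0.0, -1.0, 0.0],
    [1.0,  0.0,  0.0, 0.0],
    [0.0,  0.0,  0.0, 1.0],
])

# The 3d_location from the JSON label, in homogeneous coordinates.
loc_lidar = np.array([92.33195, 9.902099, -0.8199732, 1.0])
loc_cam = Tr_velo_to_cam @ loc_lidar  # same point, camera coordinates
print(loc_cam[:3])
```

With real extrinsics the transformed values will differ from the LiDAR-frame numbers in all three components, which is consistent with only x, y, and z changing between the two files; KITTI additionally anchors y at the bottom of the box, shifting y further.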
