
dataset-api's Introduction

Welcome to Apolloscape's GitHub page!

Apollo is a high-performance, flexible architecture that accelerates the development, testing, and deployment of autonomous vehicles. ApolloScape, part of the Apollo project for autonomous driving, is a research-oriented dataset and toolkit to foster innovation in all aspects of autonomous driving, from perception and navigation to control and simulation.

Table of Contents

  1. Introduction
  2. Data Download
  3. Citation

Introduction

This repo contains the toolkit for the ApolloScape dataset, the CVPR 2019 Workshop on Autonomous Driving challenge, and the ECCV 2018 challenge. It covers the Trajectory Prediction, 3D Lidar Object Detection and Tracking, Scene Parsing, Lane Segmentation, Self-Localization, 3D Car Instance, Stereo, and Inpainting datasets. Some example videos and images are shown below:

Video Inpainting:

Depth Guided Video Inpainting for Autonomous Driving

Trajectory Prediction:

3D Lidar Object Detection and Tracking:

Stereo estimation:

Lanemark segmentation:

Online self-localization:

3D car instance understanding:

Scene Parsing:

Data Download

Full download links are in each folder.

wget https://ad-apolloscape.cdn.bcebos.com/road01_ins.tar.gz 
or
wget https://ad-apolloscape.bj.bcebos.com/road01_ins.tar.gz

wget https://ad-apolloscape.cdn.bcebos.com/trajectory/prediction_train.zip

Run

pip install -r requirements.txt
source source.rc

to install the required packages and add the current path to PYTHONPATH, so that the utility functions can be used.
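If you cannot source a shell script (e.g. on Windows), the same effect can be achieved from Python. This is a minimal sketch, assuming source.rc does nothing more than prepend the repository root to PYTHONPATH:

import os
import sys

# Prepend the repo root so the utility modules can be imported,
# mirroring what source.rc is assumed to do via PYTHONPATH.
REPO_ROOT = os.path.dirname(os.path.abspath(__file__))
sys.path.insert(0, REPO_ROOT)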

Please go to each subfolder for detailed information about the data structure, the evaluation criteria, and demo code to visualize the dataset.

Citation

DVI: Depth Guided Video Inpainting for Autonomous Driving.

Miao Liao, Feixiang Lu, Dingfu Zhou, Sibo Zhang, Wei Li, Ruigang Yang. ECCV 2020. PDF, Webpage, Inpainting Dataset, Video, Presentation

@inproceedings{liao2020dvi,
  title={DVI: Depth Guided Video Inpainting for Autonomous Driving},
  author={Liao, Miao and Lu, Feixiang and Zhou, Dingfu and Zhang, Sibo and Li, Wei and Yang, Ruigang},
  booktitle={European Conference on Computer Vision},
  pages={1--17},
  year={2020},
  organization={Springer}
}

TrafficPredict: Trajectory Prediction for Heterogeneous Traffic-Agents. PDF, Webpage, Trajectory Dataset, 3D Perception Dataset, Video

Yuexin Ma, Xinge Zhu, Sibo Zhang, Ruigang Yang, Wenping Wang, and Dinesh Manocha. AAAI(oral), 2019

@inproceedings{ma2019trafficpredict,
  title={Trafficpredict: Trajectory prediction for heterogeneous traffic-agents},
  author={Ma, Yuexin and Zhu, Xinge and Zhang, Sibo and Yang, Ruigang and Wang, Wenping and Manocha, Dinesh},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  volume={33},
  pages={6120--6127},
  year={2019}
}

The ApolloScape Open Dataset for Autonomous Driving and its Application. PDF

Xinyu Huang, Peng Wang, Xinjing Cheng, Dingfu Zhou, Qichuan Geng, Ruigang Yang. IEEE TPAMI, 2019.

@article{wang2019apolloscape,
  title={The apolloscape open dataset for autonomous driving and its application},
  author={Wang, Peng and Huang, Xinyu and Cheng, Xinjing and Zhou, Dingfu and Geng, Qichuan and Yang, Ruigang},
  journal={IEEE transactions on pattern analysis and machine intelligence},
  year={2019},
  publisher={IEEE}
}

CVPR 2019 WAD Challenge on Trajectory Prediction and 3D Perception. PDF, Website

@article{zhang2020cvpr,
  title={CVPR 2019 WAD Challenge on Trajectory Prediction and 3D Perception},
  author={Zhang, Sibo and Ma, Yuexin and Yang, Ruigang},
  journal={arXiv preprint arXiv:2004.05966},
  year={2020}
}

dataset-api's People

Contributors

apolloscapeauto, pengwangucla, sibozhang, texify[bot]


dataset-api's Issues

confused with the label list

labels = [
# name id trainId category catId hasInstances ignoreInEval color
Label( 'void' , 0 , 0, 'void' , 0 , False , False , ( 0, 0, 0) ),
Label( 's_w_d' , 200 , 1 , 'dividing' , 1 , False , False , ( 70, 130, 180) ),
Label( 's_y_d' , 204 , 2 , 'dividing' , 1 , False , False , (220, 20, 60) ),
Label( 'ds_w_dn' , 213 , 3 , 'dividing' , 1 , False , True , (128, 0, 128) ),
Label( 'ds_y_dn' , 209 , 4 , 'dividing' , 1 , False , False , (255, 0, 0) ),
Label( 'sb_w_do' , 206 , 5 , 'dividing' , 1 , False , True , ( 0, 0, 60) ),
Label( 'sb_y_do' , 207 , 6 , 'dividing' , 1 , False , True , ( 0, 60, 100) ),
Label( 'b_w_g' , 201 , 7 , 'guiding' , 2 , False , False , ( 0, 0, 142) ),
Label( 'b_y_g' , 203 , 8 , 'guiding' , 2 , False , False , (119, 11, 32) ),
Label( 'db_w_g' , 211 , 9 , 'guiding' , 2 , False , True , (244, 35, 232) ),
Label( 'db_y_g' , 208 , 10 , 'guiding' , 2 , False , True , ( 0, 0, 160) ),
Label( 'db_w_s' , 216 , 11 , 'stopping' , 3 , False , True , (153, 153, 153) ),
Label( 's_w_s' , 217 , 12 , 'stopping' , 3 , False , False , (220, 220, 0) ),
Label( 'ds_w_s' , 215 , 13 , 'stopping' , 3 , False , True , (250, 170, 30) ),
Label( 's_w_c' , 218 , 14 , 'chevron' , 4 , False , True , (102, 102, 156) ),
Label( 's_y_c' , 219 , 15 , 'chevron' , 4 , False , True , (128, 0, 0) ),
Label( 's_w_p' , 210 , 16 , 'parking' , 5 , False , False , (128, 64, 128) ),
Label( 's_n_p' , 232 , 17 , 'parking' , 5 , False , True , (238, 232, 170) ),
Label( 'c_wy_z' , 214 , 18 , 'zebra' , 6 , False , False , (190, 153, 153) ),
Label( 'a_w_u' , 202 , 19 , 'thru/turn' , 7 , False , True , ( 0, 0, 230) ),
Label( 'a_w_t' , 220 , 20 , 'thru/turn' , 7 , False , False , (128, 128, 0) ),
Label( 'a_w_tl' , 221 , 21 , 'thru/turn' , 7 , False , False , (128, 78, 160) ),
Label( 'a_w_tr' , 222 , 22 , 'thru/turn' , 7 , False , False , (150, 100, 100) ),
Label( 'a_w_tlr' , 231 , 23 , 'thru/turn' , 7 , False , True , (255, 165, 0) ),
Label( 'a_w_l' , 224 , 24 , 'thru/turn' , 7 , False , False , (180, 165, 180) ),
Label( 'a_w_r' , 225 , 25 , 'thru/turn' , 7 , False , False , (107, 142, 35) ),
Label( 'a_w_lr' , 226 , 26 , 'thru/turn' , 7 , False , False , (201, 255, 229) ),
Label( 'a_n_lu' , 230 , 27 , 'thru/turn' , 7 , False , True , (0, 191, 255) ),
Label( 'a_w_tu' , 228 , 28 , 'thru/turn' , 7 , False , True , ( 51, 255, 51) ),
Label( 'a_w_m' , 229 , 29 , 'thru/turn' , 7 , False , True , (250, 128, 114) ),
Label( 'a_y_t' , 233 , 30 , 'thru/turn' , 7 , False , True , (127, 255, 0) ),
Label( 'b_n_sr' , 205 , 31 , 'reduction' , 8 , False , False , (255, 128, 0) ),
Label( 'd_wy_za' , 212 , 32 , 'attention' , 9 , False , True , ( 0, 255, 255) ),
Label( 'r_wy_np' , 227 , 33 , 'no parking' , 10 , False , False , (178, 132, 190) ),
Label( 'vom_wy_n' , 223 , 34 , 'others' , 11 , False , True , (128, 128, 64) ),
Label( 'om_n_n' , 250 , 35 , 'others' , 11 , False , False , (102, 0, 204) ),
Label( 'noise' , 249 , 255 , 'ignored' , 255 , False , True , ( 0, 153, 153) ),
Label( 'ignored' , 255 , 255 , 'ignored' , 255 , False , True , (255, 255, 255) ),
]

First of all, thanks for your dataset; I have a question:
unlike the Cityscapes label list, the label list of the Apollo lane dataset maps 'void' to trainId 0. In my opinion, the trainId of 'void' should be 255 (ignored), because it shouldn't be evaluated. Is that right?
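For readers who want to experiment either way, below is a minimal sketch of remapping label ids to trainIds with a lookup table, assuming the label PNGs store the ids in a single channel. The (id, trainId) pairs come from the table above, except that 'void' (id 0) is mapped to 255 purely to illustrate the change proposed in this issue:

import numpy as np
from PIL import Image

# (id, trainId) pairs from the label table above; 'void' (id 0) is mapped
# to 255 here only to illustrate the change proposed in this issue.
ID_TO_TRAIN = {0: 255, 200: 1, 204: 2, 213: 3, 209: 4, 206: 5, 207: 6,
               201: 7, 203: 8, 211: 9, 208: 10, 216: 11, 217: 12, 215: 13,
               218: 14, 219: 15, 210: 16, 232: 17, 214: 18, 202: 19,
               220: 20, 221: 21, 222: 22, 231: 23, 224: 24, 225: 25,
               226: 26, 230: 27, 228: 28, 229: 29, 233: 30, 205: 31,
               212: 32, 227: 33, 223: 34, 250: 35, 249: 255, 255: 255}

lut = np.full(256, 255, dtype=np.uint8)        # unmapped ids -> ignored
for label_id, train_id in ID_TO_TRAIN.items():
    lut[label_id] = train_id

label = np.array(Image.open('label.png'))      # single-channel id image
train = lut[label]                             # vectorized remapping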

Inconsistent motorcycle, bicycle and rider labelling

Hello, I am exploring the scene parsing instance segmentation dataset and found some weird labels.

Many cyclists are labeled as motorcycle. Is this normal or a mistake?
According to the documentation, shouldn't a person riding a bike be labeled as rider?

Unable to Download Detection/Tracking Dataset

I was interested in downloading the detection/tracking pcd zip files, but whenever I click on the download buttons on the website, nothing happens. I am able to download the datasets from all other sections but detection/tracking, and I have signed in with the user license. I was hoping for some help in downloading the detection/tracking dataset.

Parsing lidar pcd

Hi,

I am trying to parse the lidar data provided in pcd format using the PCL library. It reads x, y, z correctly, but intensity is 0 for all points.

Is there a better way to read the lidar pcd files?
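In case it helps with debugging, here is a minimal sketch of reading the file directly instead of through PCL; it assumes an ASCII PCD whose FIELDS line includes an intensity column, which may not match the actual files:

import numpy as np

def read_pcd_ascii(path):
    # Parse a simple ASCII .pcd: collect FIELDS, then read the data rows.
    # Binary PCDs would need struct-based parsing instead.
    fields, rows, in_data = [], [], False
    with open(path) as f:
        for line in f:
            if in_data:
                rows.append(line.split())
            elif line.startswith('FIELDS'):
                fields = line.split()[1:]      # e.g. ['x','y','z','intensity']
            elif line.startswith('DATA'):
                if line.split()[1] != 'ascii':
                    raise ValueError('this sketch only handles ASCII PCDs')
                in_data = True
    return fields, np.asarray(rows, dtype=np.float32)

fields, pts = read_pcd_ascii('example.pcd')
if 'intensity' in fields:
    print('max intensity:', pts[:, fields.index('intensity')].max())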

Cannot download dataset from web page links

Maybe this is not the proper place for this issue. Anyway, I cannot use the dataset links on your web page for 3D Car Instance (or any other dataset).

I think it is a CORS issue, since my JS console shows:

Failed to load http://www.baidu.com/search/error.html?tc=3192971…: Response for preflight is invalid http://apolloscape.auto/car_instance.html (redirect)

image rectification

Hi,
I am using the data_test.py script provided in utils to rectify the images from the scene parsing split, because I want to train an unsupervised depth estimation model. I found a misalignment of a few pixels when putting the two rectified images side by side (see the images at the bottom).

I have a few questions:

  • Is it correct to use this code and its parameters for the scene parsing data split?
  • Can you suggest how to adapt the script so that it rectifies the images correctly?

Thank you.

The images I used (randomly picked from the dataset) are road02_ins/ColorImage/Record001/Camera 5/170927_063819921_Camera_5.jpg and road02_ins/ColorImage/Record001/Camera 6/170927_063819921_Camera_6.jpg. This is a zoom into the area where I found the problem:
[camera5_over_camera6]
These are the two rectified images:
[camera_6]
[camera_5]

Problem of generating depth image for self-localization

I got some problems to generate the depth image using proj_point_cloud.py.

First, the problem is about GLEW:

Failed to initialize GLEW
Missing GL version.

Segmentation fault (core dumped).

Then I fixed the GLEW linking problem, but the shader cannot be compiled.

The depth result is not correct either: all the values are 299.7050.

How can I get the depth image?
Thank you!

No right images.

Hi,
Thank you for your great effort on this dataset. I downloaded the 3D car instance dataset but only found the left images (Camera 5). Aren't there supposed to be stereo images?

Inconsistent Lane Segmentation Labels between Training data and Sample data

Take image /road03/Record001/171206_025755592_Camera_5.jpg for example.


Left Image is from Training data label set. Mid Image is the original one. Right Image comes from the Sample data (lane_marking_examples.tar.gz).

The colors of the lanes are quite different... also, some features are missing in the sample data set. Does this mean the labels in the sample data set are just for demonstration and cannot be used for training?

Thanks for your reply!

Trajectory prediction submission failed

Hello,

When I submit results to the trajectory prediction module, the status always shows "failed". Also, when testing with dataset-api/trajectory_prediction/evaluation.py (using the gt and res files provided on GitHub), I get the following error:
File "evaluation.py", line 42, in evaluation
AttributeError: 'map' object has no attribute 'count'

Have you encountered this problem?
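A note for readers hitting the same AttributeError: under Python 3, map() returns an iterator, which has no .count method (it did under Python 2, where map() returned a list). If that is the cause, materializing the map at the offending line is a plausible fix. A sketch with a hypothetical variable, since the actual code at evaluation.py line 42 may differ:

# Python 2: map() returns a list, so .count() works.
# Python 3: map() returns an iterator; convert it first.
obj_types = list(map(int, tokens))   # 'tokens' is hypothetical here
n_cars = obj_types.count(2)          # .count works again on the list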

Disparity Map Artifacts

Hi,
I found that there are artifacts in the provided disparity maps. The dataset is not usable unless the artifacts are removed. Here is a sample:

Car Instance 2D bounding box

Hi.
I want to do transfer learning with the car instance data in ApolloScape, but the dataset has no 2D bounding box coordinates.

So how do I convert the pose labels (Euler angles plus translation) into 2D bounding box coordinates (minx, miny, maxx, maxy)?
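One approach, sketched below with NumPy: transform the car mesh vertices by the pose, project them with the camera intrinsics, and take the min/max of the projected points. The Euler-angle convention and pose layout are assumptions that must be matched to what render_car_instances.py uses:

import numpy as np

def euler_to_rot(roll, pitch, yaw):
    # Rotation from Euler angles; the axis order is an assumption and
    # must match the convention used by the ApolloScape tools.
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def pose_to_bbox(vertices, pose, fx, fy, cx, cy):
    # vertices: Nx3 car model mesh; pose: (roll, pitch, yaw, tx, ty, tz).
    R = euler_to_rot(*pose[:3])
    pts = vertices @ R.T + np.asarray(pose[3:])   # into the camera frame
    u = fx * pts[:, 0] / pts[:, 2] + cx           # pinhole projection
    v = fy * pts[:, 1] / pts[:, 2] + cy
    return u.min(), v.min(), u.max(), v.max()

Clipping the box to the image bounds and discarding cars behind the camera are left out for brevity.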

sensor setup

Can you share the full sensor setup, like KITTI does?

Can't get attribute 'CHJ_tiny_obj' on <module 'objloader'

When running car_instance/demo.ipynb, I encountered the following error. Is there any clue to fixing this? Many thanks!

----> 8 visualizer.load_car_models()
9 image_vis, mask, depth = visualizer.showAnn(setting.image_name)

~/Workspaces/apollocar3d/dataset-api/car_instance/render_car_instances.py in load_car_models(self)
62 print(car_model)
63 with open(car_model, 'rb') as f:
---> 64 self.car_models[model.name] = pkl.load(f)
65 # fix the inconsistency between obj and pkl
66 self.car_models[model.name]['vertices'][:, [0, 1]] *= -1

AttributeError: Can't get attribute 'CHJ_tiny_obj' on <module 'objloader' from '/home/ark/Workspaces/apollocar3d/dataset-api/car_instance/objloader.py'>
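A possible workaround, assuming the pickles merely reference CHJ_tiny_obj by name and the class was removed from objloader.py at some point: redirect the missing attribute with a custom Unpickler. This is a sketch, not a confirmed fix:

import pickle

class _Stub:
    # Stand-in for the removed objloader.CHJ_tiny_obj class; works only if
    # the pickled objects rely on default attribute-dict pickling.
    pass

class CompatUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        if module == 'objloader' and name == 'CHJ_tiny_obj':
            return _Stub
        return super().find_class(module, name)

with open('car_model.pkl', 'rb') as f:
    car_model = CompatUnpickler(f).load()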

Vehicles are not rendered correctly

When using the provided scripts, the pose information seems incorrect. In both sample_data and training, some cars are flipped (see screenshots).

Any ideas?
[rendering_example_data]
[rendering_171206_034636094_camera_5]

Keypoint annotations

I cannot seem to find keypoint annotations for cars in the dataset. In the paper, you seem to have a 66 keypoint model per car. Is there something I'm missing here, or have the keypoints not been released?

Scene Parsing label error

Hello, while using the public ApolloScape Scene Parsing data, I found errors in the instance-level labels: the Class IDs of motorbicycle_group and rider are mixed up. For example, in road01_ins/ColorImage/Record012/Camera 5/170908_061633100_Camera_5.jpg, one of the objects should be rider (Class ID = 37), but the Class ID in the json label is 162.

The image names of datasets

I am trying to use the disparity maps from the Stereo dataset in the 3D car instance understanding task.
Could you provide some information about how the image names in the Stereo dataset map to the image names in the Car Instance dataset?

For example, ID_0e6f0cc36.jpg in the Car Instance dataset and 171206_034636094_Camera_5.jpg in the Stereo dataset share the same content, but the first one has been cropped and rotated a little.

If possible, could you also provide some hints about how the images were cropped?

Best regards,
zshyang

Where can I get the data structure of sample trajectory?

Trajectory dataset:
I want to render a trajectory on a 2D RGB image. I found that there are no RGB images for the prediction train/test data, while the sample data has both trajectories and RGB images. However, the data format of the sample trajectories differs from prediction_train. How can I parse the sample trajectories and render them on the corresponding RGB images?

Trajectory dataset

Hi,
Thanks for your great work. When do you intend to release the trajectory dataset as described here?
Thanks in advance.

Inconsistent archive structure

The provided datasets are not consistent with the code:

  • no "sample_data" subfolder exists
  • car_models lies within the same folder as the images, which creates conflicts

I would recommend the following structure (with all 3 downloads):

  • apolloscape
    • 3d_car_instance
      • car_models
      • sample_data
        • car_poses
        • images
      • test
        • car_poses
        • images
      • train
        • car_poses
        • images

The number of training images for lane segmentation is inconsistent with the description?

Thanks for your great dataset!

According to the description PDF, there should be 132,189 training images for lane segmentation. However, there are only 113,653 training images across the three folders "ColorImage_road02", "ColorImage_road03", and "ColorImage_road04", and the same holds for the corresponding labels.

Actually, I am interested in the statistics of the dataset shown in the PDF description, but they cannot be used either, since the total number of images differs.

Hope for your reply. Thanks again.
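To double-check the count locally, a one-liner sketch (assuming .jpg images under the usual ColorImage_roadXX layout):

from pathlib import Path

# Count images across the three road folders; adjust the root as needed.
n = sum(1 for _ in Path('.').glob('ColorImage_road0[234]/**/*.jpg'))
print(n)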

What is the geospatial coordinate system used for position_x and position_y?

Hi, I'm playing with the trajectory dataset and would like to ask which CRS is used to encode position_x and position_y. I thought it might be EPSG:4326 (i.e., lon-lat), but many of the values in the "position_y" column are beyond the range of (-90, 90). For example, in file prediction_train/result_9048_1_frame.txt.

Can you clarify what is the exact CRS the dataset is using?
Thanks!

Lane segmentation online submission keeps failing

The test set I used is Test_ColorImage_road05.tar.gz from http://apolloscape.auto/lane_segmentation.html, 885 images in total, and my submission follows the directory structure of that archive:
├── test
│ ├── ColorImage_road05
│ │ ├── 171206_064731347_Camera_5.png
│ │ ├── ...

Could you provide the complete csv file list for the test set, or a sample submission (even one with a very low score)? It's quite urgent, thanks a lot!

I cannot find 'car_models' in 'car_instance' dataset

Hi

First of all, thank you for excellent dataset!

In your readme it is mentioned that the 'car_instance' dataset contains a 'car_models' directory with car meshes. I have downloaded '3d-car-understanding-train.tar.gz' from here, but there is no such directory :(

Where may I find the car models?

Thank you!

dataset-api and self-localization

Is the current version of the dataset-api supposed to work out of the box with the self-localization dataset?

I have some questions regarding the dataset. I'm trying to generate depth images from the point clouds.

  1. The point clouds are provided in multiple parts. For example in Road11:
    Point cloud folder contents: Part001...Part008
    Image folder contents: Record001...Record037
    How are these related? Do you provide code for handling the Part00x.pcd files?

  2. I'm trying to use proj_point_cloud.py, but it appears that the file structure required by this script differs from the provided dataset: there is no train/test folder, and the script assumes a pc_sub.pcd file at the leaf of the point cloud folder structure.

  3. Is there a way to generate depth images without using the provided renderer? (A CPU-only fallback is sketched below.) There seem to be multiple driver-related instabilities in getting GLEW to run comfortably on a machine, and moving back to older NVIDIA driver versions isn't an option since I'm using a team machine.
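Regarding question 3, one renderer-free option is to z-buffer the projected points with NumPy, no OpenGL involved. A minimal sketch, assuming you have the intrinsics and a 4x4 world-to-camera pose for each image (conventions to be checked against the dataset docs):

import numpy as np

def project_depth(points_w, T_cw, fx, fy, cx, cy, h, w):
    # points_w: Nx3 world points; T_cw: 4x4 world-to-camera transform.
    pts = np.c_[points_w, np.ones(len(points_w))] @ T_cw.T
    pts = pts[pts[:, 2] > 0]                       # keep points in front
    u = np.round(fx * pts[:, 0] / pts[:, 2] + cx).astype(int)
    v = np.round(fy * pts[:, 1] / pts[:, 2] + cy).astype(int)
    ok = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    u, v, z = u[ok], v[ok], pts[ok, 2]
    depth = np.full((h, w), np.inf)
    np.minimum.at(depth, (v, u), z)                # nearest point per pixel
    depth[np.isinf(depth)] = 0                     # 0 = no measurement
    return depth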

Thanks in advance, and I look forward to working with this dataset. It looks like your team has put tremendous effort into putting this dataset together and making it available to the public. Your work is much appreciated.

Best regards,
S

self_localization problem

Hello, and thank you very much for the dataset Baidu has released. I have recently been studying SLAM-related techniques. Your GitHub page https://github.com/ApolloScapeAuto/dataset-api/tree/master/self_localization#dataset-structure says that point cloud data is shared for the self_localization task ("At download page, we also provide backgroud point cloud for supporting the localization task for each road. You may download the point cloud to point_cloud under each road directory {scene_names} as described in above data structure."), but I cannot find any point cloud download links on the download page http://apolloscape.auto/self_localization.html. Could you share the links to the point clouds and depth maps corresponding to the localization images?

Intersection with other datasets

Hi,
The instance split contains only a mono camera; we would like to investigate methods that use more information.

  1. Can you share the lidar point clouds and stereo images for the instance set?
    Or at least the intersection of 'car instance' with another set that has lidar.

  2. I found that the Apollo instance set has a big overlap with the stereo set, but it is not clear how to map the images between the sets.

For example, these two images are from the 'car instance' and stereo sets.

The left is from the car instance set and the right is from the stereo set.

The transformation between them is not just a crop, as can be seen in the zoomed-in area.

I tried to rectify the instance image, but it didn't help. What is the transformation between these two sets?
Best,
Loli

Inaccurate 3d pose labels

Hi,

I visualized the instances with my own rendering scripts (using render_car_instances.py as a reference). The 3D models are generally aligned with the images, but I noticed the following problem after visualizing the masks.

Obviously there are wrong occlusions and intersections. I thought it was a wrong depth buffer at first, but didn't find any problems in the rendering pipeline, so it probably indicates wrong pose labels.

After a rough count, I found 32 out of 100 images with such problems, which I think is too many. Mostly it's the distant cars.

Affected examples: 180116_060356764_Camera_5, 180116_055652297_Camera_5, 171206_041949499_Camera_5

PS: I processed the models to close the holes for proper mask rendering.

release 3D keypoint locations for each car model

Could you please release the 3D keypoint locations for each of the 79 car models?

Right now only the mesh models and the 2D keypoints projected onto the images have been released. There's no way for us to associate the 2D keypoints with 3D points at this point.

car_instance better evaluating metrics

Since the challenge has ended, it has turned out to be a difficult one, especially due to the translation vector estimation: the 2.8 meter threshold is very challenging. In my opinion, the threshold for the translation vector could be larger when the car is far away. Currently, no matter how far the detected cars are from the camera (e.g., 5 meters or 150 meters), the evaluation metric treats them identically. For a future evaluation metric, I would suggest a linear threshold: if the car is close to the camera, a stricter threshold is required; if it is far away, a more tolerant threshold is applied (as illustrated in the following image):

[transcircle]

Such a scheme would also make sense for real autonomous driving scenarios. I hope it makes sense for future evaluation metrics, and thank you for organizing this very fun challenge.
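For concreteness, such a linear threshold could be as simple as the following sketch (coefficients purely illustrative):

def translation_threshold(dist_m, base=1.0, slope=0.02):
    # Stricter near the camera, more tolerant far away: with these
    # illustrative values, 1.1 m at 5 m and 4.0 m at 150 m.
    return base + slope * dist_m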

Night/Snow/Rain images

Hello everyone,

Thanks for the effort in building a dataset containing multiple sensor modalities!
In your paper you state that the set already contains night images and images taken during snow and rain.
I was wondering which of the files at http://apolloscape.auto/self_localization.html provide those, since my bandwidth is very limited, resulting in painfully long downloads.
I did not see any night/snow/rain images in Road 15/16/17 so far, so I figured I might as well ask you guys.

Thanks in advance
Marc

PS: I'm specifically interested in point clouds taken during night drives.

Failed to initialize GLEW

Hi, after running install.sh
(beforehand I needed to run
sudo apt install libeigen3-dev,
sudo apt-get install libglfw3-dev libgles2-mesa-dev, and
sudo apt-get install libglew-dev
to install all the required libraries)

I have the render_egl.pyx file.

However, when executing demo.py, I get the error
"Failed to initialize GLEW".

Any clue to overcome it?
Many thanks!

the ground-truth of lane marking

Hi,

I found something strange in your released dataset, as follows.

  1. The double solid yellow lane marking isn't labeled.
    [capture]

  2. The color in the ground-truth image doesn't match the description in laneMarkDetection.py.
    Take the solid white lane marking for example:
    the color in the ground-truth image is (180, 173, 43),
    but the color in laneMarkDetection.py is (70, 130, 180).

Many thanks!

Missing car models

I'm trying to run the sample from the readme and I'm getting an error due to missing models. I double-checked the download but could not find certain models.

> python render_car_instances.py --split='sample_data' --image_name='180116_053951969_Camera_5' --data_dir='../apolloscape/3d_car_instance_sample/'
Test visualizer
INFO:root:loading 79 car models
Traceback (most recent call last):
  File "render_car_instances.py", line 298, in <module>
    visualizer.load_car_models()
  File "render_car_instances.py", line 59, in load_car_models
    with open(car_model) as f:
IOError: [Errno 2] No such file or directory: '../apolloscape/3d_car_instance_sample/car_models//biaozhi-3008.pkl' 

The following models are missing:
biaozhi-3008.pkl
bieke-yinglang-XT.pkl
biyadi-2x-F0.pkl
changanbenben.pkl
jilixiongmao-2015.pkl
lingmu-aotuo-2009.pkl
lingmu-SX4-2012.pkl
dazhongmaiteng.pkl

Result server down?

Hi, I have tried to submit my results for Car 3D Instance, but it seems that the server is not responding with the result feedback. Can we still evaluate our results on the platform?

Lidar dataset Capture Freq

Hi,

I want to know how to interpret the sampling frequency of the lidar dataset. For example, for the 3D Lidar Object Detection and Tracking dataset it is mentioned that:
"The 3D Lidar object detection and tracking benchmark consists of about 53min training sequences and 50min testing sequences. The data is captured at 10 frames per second and labeled at 2 frames per second."

How does this translate to the number of points per second?

Calculate depth from disparity map

As far as I know, depth can be calculated from disparity as depth = baseline * focal / disparity.
I'm using stereo_train_01.zip (4.4 GB) to compute depth maps from disparity. I can find the focal length in intrinsic.txt, but I cannot find the baseline of the camera.
Can you help me:

  • Where can I find the baseline parameter of the stereo camera?
    or
  • How can I calculate depth from the disparity map using the ApolloScape dataset?
    Many thanks!
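Once the baseline is known, computing depth is straightforward. A minimal sketch, where the handling of zero disparity and any fixed-point scale of the disparity PNGs are assumptions to verify against the stereo documentation:

import numpy as np
from PIL import Image

def disparity_to_depth(disp_path, baseline_m, focal_px, disp_scale=1.0):
    # depth = baseline * focal / disparity; zero disparity -> depth 0.
    # disp_scale divides stored values if the PNG is fixed-point encoded
    # (an assumption; check the stereo README).
    disp = np.array(Image.open(disp_path), dtype=np.float32) / disp_scale
    depth = np.zeros_like(disp)
    valid = disp > 0
    depth[valid] = baseline_m * focal_px / disp[valid]
    return depth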

Camera intrinsic for camera 6

Hi, (at least for the car instance track) I found that in render_car_instances.py
the camera intrinsics are looked up by camera name:
intrinsic = self.dataset.get_intrinsic(image_name). (There are two different intrinsics, one for Camera 5 and one for Camera 6.)

However, after close examination, images from camera 6 (using the camera 6 intrinsics) always exhibit misalignment (171206_065804067_Camera_6):

After enforcing the camera 5 intrinsics on all images, the misalignment is eliminated. So I guess images from camera 6 should also use the intrinsics from camera 5, shouldn't they? Hence the code should be changed accordingly, e.g.:
intrinsic = self.dataset.get_intrinsic("Camera_5")
Mesh image from camera 6 using camera 5 intrinsics:
[171206_065804067_camera_6]

scene_parsing_seg datasets broken

Hi, I have downloaded the scene_parsing_seg datasets for road02, road03, and road04.
When I wanted to change the trainIds in the label PNG files,
I found that some PNG files are broken and cannot be opened;
most of them reside in road04/record12/cam6/.

Another issue: when I want to train my network,
I find that the numbers of images and labels are not equal.
This occurs only with road03 and road04; road02 works fine.

I guess these 70 GB and 40 GB files are so large that the download may get corrupted in transit (though my connection is not too bad, sustaining about 2 Mb/s).

So I would ask you to check that the files on your server are complete,
and perhaps also put them on Baidu Yun so that domestic users can download faster.

Thanks very much. Happy to see Apollo going well.
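A quick sketch for locating broken label PNGs (assuming Pillow is installed):

from pathlib import Path
from PIL import Image

for p in Path('road04').rglob('*.png'):
    try:
        with Image.open(p) as im:
            im.verify()      # cheap integrity check, no full decode
    except Exception as e:
        print(f'broken: {p} ({e})')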

The submitted data format of lane segmentation

I submitted the .zip file in this format:
test.zip
├── test
│ ├── road05
│ │ ├── image_name1.png
│ │ ├── image_name2.png
But I got a 'fail' feedback. I found that the descriptions of the data format in LanemarkDiscription.pdf and at http://apolloscape.auto/lane_segmentation.html differ. Do I need to include a .csv file inside the .zip? Could you show me the correct format for the submitted data?
Besides, the ignore label in Camera 6 images is incorrect: it should be located at the bottom-left corner of the image, but it is at the bottom-right instead.
[171206_042435834_camera_6_bin]
