
pandaset-devkit


Overview

Welcome to the repository of the PandaSet Devkit.

Dataset

Download

To download the dataset, please visit the official PandaSet webpage and sign up through the form. You will then be forwarded to a page with download links to the raw data and annotations.

Unpack

Unpack the archive into any directory on your hard disk. You will reference this path later when using pandaset-devkit; it does not have to be in the same directory as your scripts.

Structure

Files & Folders

.
├── LICENSE.txt
├── annotations
│   ├── cuboids
│   │   ├── 00.pkl.gz
│   │   .
│   │   .
│   │   .
│   │   └── 79.pkl.gz
│   └── semseg  // Semantic Segmentation is available for specific scenes
│       ├── 00.pkl.gz
│       .
│       .
│       .
│       ├── 79.pkl.gz
│       └── classes.json
├── camera
│   ├── back_camera
│   │   ├── 00.jpg
│   │   .
│   │   .
│   │   .
│   │   ├── 79.jpg
│   │   ├── intrinsics.json
│   │   ├── poses.json
│   │   └── timestamps.json
│   ├── front_camera
│   │   └── ...
│   ├── front_left_camera
│   │   └── ...
│   ├── front_right_camera
│   │   └── ...
│   ├── left_camera
│   │   └── ...
│   └── right_camera
│       └── ...
├── lidar
│   ├── 00.pkl.gz
│   .
│   .
│   .
│   ├── 79.pkl.gz
│   ├── poses.json
│   └── timestamps.json
└── meta
    ├── gps.json
    └── timestamps.json

Instructions

Setup

  1. Create a Python>=3.6 environment with pip installed.
  2. Clone the repository: git clone git@github.com:scaleapi/pandaset-devkit.git
  3. cd into pandaset-devkit/python
  4. Execute pip install .

The pandaset-devkit is now installed in your Python>=3.6 environment and can be used.

Usage

To get familiar with the API, you can point the devkit directly at the downloaded dataset.

Initialization

First, we need to create a DataSet object that searches for sequences.

>>> from pandaset import DataSet
>>> dataset = DataSet('/data/pandaset')

Afterwards we can list all the sequence IDs that have been found in the data folder.

>>> print(dataset.sequences())
['002',...]

Since semantic segmentation annotations are not always available for scenes, we can filter to get only scenes that have both semantic segmentation as well as cuboid annotations.

>>> print(dataset.sequences(with_semseg=True))
['002',...]

Now we access a specific sequence by choosing its key from the previously returned list, in this case sequence ID '002':

>>> seq002 = dataset['002']

API Reference: DataSet class

Loading

The devkit automatically searches the sequence directory for available sensor data, metadata and annotations, and prepares them for explicit loading. At this point, no point clouds or images have been loaded into memory. To load all available sensor data, metadata and annotations into memory, simply call the load() method on the sequence object.

>>> seq002.load()

If only certain data is required for analysis, more specific methods are available, which can also be chained.

>>> seq002.load_lidar().load_cuboids()

API Reference: Sequence class

Data Access

LiDAR

The LiDAR point clouds are stored as pandas.DataFrames, so you can leverage their extensive API for data manipulation, including a simple conversion to a numpy.ndarray.

>>> pc0 = seq002.lidar[0]
>>> print(pc0)
                 x           y         z     i             t  d
index                                                          
0       -75.131138  -79.331690  3.511804   7.0  1.557540e+09  0
1      -112.588306 -118.666002  1.423499  31.0  1.557540e+09  0
2       -42.085902  -44.384891  0.593491   7.0  1.557540e+09  0
3       -27.329435  -28.795053 -0.403781   0.0  1.557540e+09  0
4        -6.196208   -6.621082  1.130009   3.0  1.557540e+09  0
            ...         ...       ...   ...           ... ..
166763   27.670526   17.159726  3.778677  25.0  1.557540e+09  1
166764   27.703935   17.114063  3.780626  27.0  1.557540e+09  1
166765   27.560664   16.955518  3.767948  18.0  1.557540e+09  1
166766   27.384433   16.783824  3.752670  22.0  1.557540e+09  1
166767   27.228821   16.626038  3.739154  20.0  1.557540e+09  1
[166768 rows x 6 columns]
>>> pc0_np = seq002.lidar[0].values  # Returns the first LiDAR frame in the sequence as a numpy.ndarray
>>> print(pc0_np)
[[-7.51311379e+01 -7.93316897e+01  3.51180427e+00  7.00000000e+00
   1.55753996e+09  0.00000000e+00]
 [-1.12588306e+02 -1.18666002e+02  1.42349938e+00  3.10000000e+01
   1.55753996e+09  0.00000000e+00]
 [-4.20859017e+01 -4.43848908e+01  5.93490847e-01  7.00000000e+00
   1.55753996e+09  0.00000000e+00]
 ...
 [ 2.75606640e+01  1.69555183e+01  3.76794770e+00  1.80000000e+01
   1.55753996e+09  1.00000000e+00]
 [ 2.73844334e+01  1.67838237e+01  3.75266969e+00  2.20000000e+01
   1.55753996e+09  1.00000000e+00]
 [ 2.72288210e+01  1.66260378e+01  3.73915448e+00  2.00000000e+01
   1.55753996e+09  1.00000000e+00]]

The LiDAR points are stored in a world coordinate system, so there is no need to transform them using the vehicle's pose graph. This allows you to query all LiDAR frames in the sequence, or a subset at a certain sampling rate, and visualize them directly using your preferred library.
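
For example, here is a minimal visualization sketch (not part of the devkit), assuming the open3d package is installed:

import open3d as o3d

points = seq002.lidar[0].to_numpy()[:, :3]   # x, y, z columns in world coordinates
cloud = o3d.geometry.PointCloud()
cloud.points = o3d.utility.Vector3dVector(points)
o3d.visualization.draw_geometries([cloud])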

Instead of always using all of the available point clouds, you can also slice the lidar property just as you would a Python list.

>>> pc_all = seq002.lidar[:]  # Returns all LiDAR frames from the sequence
>>> pc_sampled = seq002.lidar[::2]  # Returns every second LiDAR frame from the sequence

In addition to the LiDAR points, the lidar property also holds the sensor pose (lidar.poses) in the world coordinate system and the timestamp (lidar.timestamps) for every recorded LiDAR frame. Both objects can be sliced in the same way as the lidar property holding the point clouds.

>>> sl = slice(None, None, 5)  # Equivalent to [::5]; extract every fifth frame including sensor pose and timestamps
>>> lidar_obj = seq002.lidar
>>> pcs = lidar_obj[sl]
>>> poses = lidar_obj.poses[sl]
>>> timestamps = lidar_obj.timestamps[sl]
>>> print( len(pcs) == len(poses) == len(timestamps) )
True
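
Each entry in lidar.poses appears to be a dict with 'position' and 'heading' entries (see the geometry helpers used in the issues further below). As a hedged sketch, the devkit's geometry module can turn such a pose into a 4x4 homogeneous transformation matrix; note that _heading_position_to_mat is a private helper and may change:

from pandaset import geometry

pose0 = seq002.lidar.poses[0]
# Build a homogeneous world transform from the pose's quaternion heading and position.
T = geometry._heading_position_to_mat(pose0['heading'], pose0['position'])
print(T.shape)  # expected (4, 4)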

By default, the LiDAR point clouds include the points from both the mechanical 360° LiDAR and the front-facing LiDAR. To select only one of the sensors, use the set_sensor method.

>>> pc0 = seq002.lidar[0]
>>> print(pc0.shape)
(166768, 6)
>>> seq002.lidar.set_sensor(0)  # set to include only the mechanical 360° LiDAR
>>> pc0_sensor0 = seq002.lidar[0]
>>> print(pc0_sensor0.shape)
(106169, 6)
>>> seq002.lidar.set_sensor(1)  # set to include only the front-facing LiDAR
>>> pc0_sensor1 = seq002.lidar[0]
>>> print(pc0_sensor1.shape)
(60599, 6)

Since the filter operation leaves the original row index of each point intact (which is relevant for joining with SemanticSegmentation), you can easily verify that no point was dropped by the filtering:

>>> import pandas as pd
>>> pc0_concat = pd.concat([pc0_sensor0, pc0_sensor1])
>>> print(pc0_concat.shape)
(166768, 6)
>>> print(pc0 == pc0_concat)
           x     y     z     i     t     d
index                                     
0       True  True  True  True  True  True
1       True  True  True  True  True  True
2       True  True  True  True  True  True
3       True  True  True  True  True  True
4       True  True  True  True  True  True
      ...   ...   ...   ...   ...   ...
166763  True  True  True  True  True  True
166764  True  True  True  True  True  True
166765  True  True  True  True  True  True
166766  True  True  True  True  True  True
166767  True  True  True  True  True  True
[166768 rows x 6 columns]
>>> print((~(pc0 == pc0_concat)).sum())  # Counts the cells with `False` value, i.e., where the original point cloud and the concatenated filtered point cloud differ
x    0
y    0
z    0
i    0
t    0
d    0
dtype: int64

API Reference: Lidar class

Cameras

Since the recording vehicle was equipped with multiple cameras, we first list which cameras were used to record the sequence.

>>> print(seq002.camera.keys())
['front_camera', 'left_camera', 'back_camera', 'right_camera', 'front_left_camera', 'front_right_camera']

The camera count and names should be the same for all sequences.

Each camera's recordings are loaded as Pillow Image objects and can be accessed via normal list slicing. In the following example, we select the first image from the front camera and display it using the Pillow library.

>>> front_camera = seq002.camera['front_camera']
>>> img0 = front_camera[0]
>>> img0.show()

Afterwards the extensive Pillow Image API can be used for image manipulation, conversion or export.
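
For example, a short sketch using only standard Pillow calls (the output path is hypothetical):

img0 = seq002.camera['front_camera'][0]
thumb = img0.resize((480, 270))           # downscale the frame
thumb.save('/tmp/front_000_thumb.jpg')    # export to disk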

Similar to the Lidar object, each Camera object has properties that hold the camera pose (camera.poses) and timestamp (camera.timestamps) for every recorded frame, as well as the camera intrinsics (camera.intrinsics). Again, the objects can be sliced the same way as the Camera object:

>>> sl = slice(None, None, 5)  # Equivalent to [::5]
>>> camera_obj = seq002.camera['front_camera']
>>> imgs = camera_obj[sl]
>>> poses = camera_obj.poses[sl]
>>> timestamps = camera_obj.timestamps[sl]
>>> intrinsics = camera_obj.intrinsics 

API Reference: Camera class

Meta

In addition to the sensor data, the loaded dataset also contains the following meta information:

  • GPS Positions
  • Timestamps

These can be accessed directly through the familiar list slicing operations and are returned as dicts. The following example shows how to get the GPS coordinates of the vehicle for the first frame.

>>> pose0 = seq002.gps[0]
>>> print(pose0['lat'])
37.776089291519924
>>> print(pose0['long'])
-122.39931707791749
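
For instance, a minimal sketch (assuming matplotlib is installed) that plots the vehicle trajectory from the per-frame GPS positions:

import matplotlib.pyplot as plt

positions = seq002.gps[:]               # list of dicts, one per frame
lats = [p['lat'] for p in positions]
longs = [p['long'] for p in positions]
plt.plot(longs, lats)
plt.xlabel('longitude')
plt.ylabel('latitude')
plt.show()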

API Reference: GPS class

API Reference: Timestamps class

Annotations

Cuboids

The LiDAR cuboid annotations are also stored inside the sequence object, as one pandas.DataFrame per timestamp. The position coordinates (position.x, position.y, position.z) locate the center of a cuboid. dimensions.x is the width of the cuboid from left to right, dimensions.y is its length from front to back, and dimensions.z is its height from top to bottom.

>>> cuboids0 = seq002.cuboids[0]  # Returns the cuboid annotations for the first LiDAR frame in the sequence
>>> print(cuboids0.columns)
Index(['uuid', 'label', 'yaw', 'stationary', 'camera_used', 'position.x',
       'position.y', 'position.z', 'dimensions.x', 'dimensions.y',
       'dimensions.z', 'attributes.object_motion', 'cuboids.sibling_id',
       'cuboids.sensor_id', 'attributes.rider_status',
       'attributes.pedestrian_behavior', 'attributes.pedestrian_age'],
      dtype='object')
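
As a hedged sketch (not devkit API) of how these fields can be used together, the following selects the LiDAR points that fall inside the first cuboid of frame 0, assuming yaw is a rotation about the world z-axis:

import numpy as np

pc0 = seq002.lidar[0]
box = seq002.cuboids[0].iloc[0]

center = np.array([box['position.x'], box['position.y'], box['position.z']])
dims = np.array([box['dimensions.x'], box['dimensions.y'], box['dimensions.z']])

# Rotate the points into the cuboid's local frame, then compare with the half-extents.
cos_y, sin_y = np.cos(-box['yaw']), np.sin(-box['yaw'])
rot = np.array([[cos_y, -sin_y, 0.0],
                [sin_y,  cos_y, 0.0],
                [0.0,    0.0,   1.0]])
local = (pc0[['x', 'y', 'z']].to_numpy() - center) @ rot.T
inside = np.all(np.abs(local) <= dims / 2.0, axis=1)
points_in_box = pc0[inside]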

API Reference: Cuboids class

Semantic Segmentation

Analogous to the cuboid annotations, the semantic segmentation can be accessed using the semseg property on the sequence object. The index of each semantic segmentation data frame corresponds to the index of the matching LiDAR point cloud data frame, so the two can be joined on the index.

>>> semseg0 = seq002.semseg[0]  # Returns the semantic segmentation for the first LiDAR frame in the sequence
>>> print(semseg0.columns)
Index(['class'], dtype='object')
>>> print(seq002.semseg.classes)
{'1': 'Smoke', '2': 'Exhaust', '3': 'Spray or rain', '4': 'Reflection', '5': 'Vegetation', '6': 'Ground', '7': 'Road', '8': 'Lane Line Marking', '9': 'Stop Line Marking', '10': 'Other Road Marking', '11': 'Sidewalk', '12': 'Driveway', '13': 'Car', '14': 'Pickup Truck', '15': 'Medium-sized Truck', '16': 'Semi-truck', '17': 'Towed Object', '18': 'Motorcycle', '19': 'Other Vehicle - Construction Vehicle', '20': 'Other Vehicle - Uncommon', '21': 'Other Vehicle - Pedicab', '22': 'Emergency Vehicle', '23': 'Bus', '24': 'Personal Mobility Device', '25': 'Motorized Scooter', '26': 'Bicycle', '27': 'Train', '28': 'Trolley', '29': 'Tram / Subway', '30': 'Pedestrian', '31': 'Pedestrian with Object', '32': 'Animals - Bird', '33': 'Animals - Other', '34': 'Pylons', '35': 'Road Barriers', '36': 'Signs', '37': 'Cones', '38': 'Construction Signs', '39': 'Temporary Construction Barriers', '40': 'Rolling Containers', '41': 'Building', '42': 'Other Static Object'}
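
Here is a minimal sketch (not from the devkit docs) that joins the point cloud of frame 0 with its per-point classes via the shared index:

pc0 = seq002.lidar[0]
semseg0 = seq002.semseg[0]
pc0_labeled = pc0.join(semseg0)              # adds the 'class' column to the point cloud
print(pc0_labeled['class'].value_counts())   # distribution of class IDs in this frame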

API Reference: SemanticSegmentation class


pandaset-devkit's People

Contributors

ltriess, nisseknudsen, pfmark, rmeertens, xpchuan-95


pandaset-devkit's Issues

Total number of LIDAR sweeps in the PANDASET Dataset

Hello,
As per the information on the site, you mention that there are 16,000+ LiDAR sweeps in total. But even after combining all three parts, we could only find around 8,500+ frames. If any additional data is available apart from those 3 parts, could you share it? Also, some sequences such as 49, 60 and 61 are missing; is that expected? Please kindly let us know.

Unable to download the dataset

I am a PhD student at a university and I followed the download instructions on the website, submitting my education e-mail.
But I did not receive any download link in my email, even though I have tried several times.

pip package

Dear Developers,

Thank you for providing your devkit to access PandaSet data easily.
I was wondering if it would be possible for you to provide a pip package, for easier installation and for adding it as a requirement in various projects.
Currently using
-e git://github.com/scaleapi/pandaset-devkit.git#egg=pandaset-devkit
in requirements.txt fails due to the non-standard file structure of this repository.
A pip package would therefore be very useful for deploying scripts that utilize this devkit.

Thank you in advance!
Daniel

Questions about color of camera data

Hi there, it looks like some recoloring filters were applied to the images, which makes the color pattern a bit odd. Could you provide more detail on how the raw images are preprocessed?
Thanks in advance.

Images

Feature Request: Mechanism to reconstruct original sensor view image

Hello,

can you provide a mechanism to restore the original sensor view image (cylindrical projection of the point cloud)? This is necessary for many semantic segmentation methods. It is possible to compute the azimuth and elevation angles from the point list to construct the point cloud, however there are a few issues:

  • this usually does not lead to the correct projection, because of multiple point occlusions due to imprecision
  • it is not clear where the ego coordinate system is located (related to issue #67) -> origin of the sensor? location of the gps?
  • missing measurements are not included in the point list, therefore a simple reshape of the list to an image is not possible (in contrast to nuScenes dataset)
  • if the point cloud is ego motion corrected, it is not possible to reconstruct a dense cylindrical projection purely from the point list
  • the projection will suffer from avoidable information loss

FYI @nisseknudsen

Cannot download the dataset

I am a postgraduate student at a university and I followed the download instructions on the website, submitting my education e-mail. But I have not received any download link in my email, even though I have tried several times. Could you please give me a download link? Thanks a lot!

Bounding box for 2D detection

First off, thanks for the great dataset!

I would like to use the front camera images to train a network for regular object detection on camera images with 2D bounding boxes. You provide the cuboids and their projection into the camera image, but would it also be possible to obtain 2D bounding boxes for the images?

I want to use this for personal use

I want to use this for personal use, but I can't sign up because I don't have a job and the form doesn't accept Gmail addresses. I just want to explore point clouds of street intersections without graduating.

Provided additional raw sweep LiDAR is not a rigid transformation of the existing sweep LiDAR 3D points

Hi! I wish to compute a rigid transformation between the additionally provided raw sweep LiDAR and the existing sweep LiDAR. However, no matter whether I use the Moore-Penrose pseudo-inverse or the ICP algorithm, I cannot acquire a feasible transformation between the two.

In visualization, I find that they correspond, but there is distortion in the existing sweep LiDAR. May I know what additional operation you apply to transform the raw sweep LiDAR into the current sweep LiDAR points?

Empty cuboids when plotting

When I plot the point cloud and the corresponding cuboids, there seem to be many empty cuboids with no LiDAR points inside them. Is that normal?
[attached screenshot: cuboids_issue]

Bad memory management - sequences stored in dataset object

Running simple code such as

from pandaset import DataSet
seq_num = 0
dataset = DataSet('...')
for sequence in dataset.sequences():
    print("Sequence {}, {} of {}".format(sequence, seq_num, len(dataset.sequences())))
    seq = dataset[sequence]
    seq.load()
    del seq
    seq_num += 1

quickly leads to a SIGKILL due to lack of memory. Why? Because loaded sequences are also stored in the DataSet object, so after deleting seq you can still access the loaded data from dataset[sequence] without calling .load() again.

So is there any practical way of iterating through the data? The dataset class does not support item deletion, and the sequence class does not support copying. The only way I've found is to delete the dataset object every iteration, which slows things down unnecessarily.

Seems an .unload() method would be simple enough. Thank you.

Can't download dataset with university email

Hi!

I am a PhD student working as a researcher at the Technical University of Cluj-Napoca. Although I provide my university email, no email with a link to download the dataset is sent to me.

I can successfully download datasets from KITTI, Waymo, Lyft and even the Audi dataset using this email, but not yours. Why is this? Do only company emails work, or has the dataset become unavailable?

How to get all the labels of PandarGT

For the overlap area between mechanical 360° LiDAR and front-facing LiDAR, moving objects received two cuboids to compensate for synchronization differences of both sensors. If cuboid is in this overlapping area and moving, this value is either 0 (mechanical 360° LiDAR) or 1 (front-facing LiDAR). All other cuboids have value -1.

The above is the documentation of sensor_id. I extracted the labels with a sensor_id of 1 and found that many targets were missing in the visualization. How can I get all the labels for PandarGT?
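
A hedged illustration of the semantics quoted above, reusing the seq002 sequence object from the usage examples earlier: cuboids outside the overlap region carry the value -1, so a filter for a single sensor should keep -1 as well.

cuboids0 = seq002.cuboids[0]
# Keep cuboids from the front-facing LiDAR (1) plus all cuboids outside the overlap region (-1).
front_related = cuboids0[cuboids0['cuboids.sensor_id'].isin([1, -1])]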

Sensors: extrinsic calibration coordinate system

There is the file docs/static_extrinsic_calibration.yaml, which holds the mounting position (extrinsic calibration) of the sensors.

If I look, for example, at the main_pandar64 sensor, its mounting position is the identity transformation, which means that the mounting position coordinate system equals the main_pandar64 sensor coordinate system, i.e. has its origin at this sensor. What is, however, the transformation from the mounting position coordinate system to the ego coordinate system, which has its origin at the middle of the rear axle?

Tutorial - Extract points from cuboids

Create a tutorial which takes a cuboid and a point cloud and returns only the part of the point cloud which is inside the cuboid.
The target is to get, for example, "typical car shapes".

Points to Lidar Channels

Hey guys,

Is there a way to map lidar points to the lidar channel that produced them? Specifically those channels reported here: https://hesaiweb2019.blob.core.chinacloudapi.cn/uploads/Pandar64_User's_Manual.pdf

I've tried something simple along the lines of:

theta = np.arctan2(points[..., 2], points[..., 1])

but the visualisations don't look quite right when I colour points red for theta > 3 * np.pi / 180 and yellow for theta < 3 * np.pi / 180. According to the Hesai data sheet I was expecting to see 4 distinct bands, but that didn't work out :) Attached is what I see (I also included a green rectangle whose corner is at (0, 0, 0) and which is 100 units long).

[attached image]

My impression is that the provided lidar pose is in fact the base link of the vehicle rather than the center of the lidar, which in turn makes my "find the lidar channel" logic work incorrectly. I think this because if I add around 2m/3m to the "lidar_to_ego" corrected points I get the following:
[attached image]

But of course "about 3m" isn't quite the whole story, because the lidar unit has a little tilt as well :) I guess what I need is the transform from the base link to the lidar sensor?

I've also attached the notebook I used to produce the images
view.tar.gz

Any pointers?

Is Tracking Information provided?

Dear,
I am going through the dataset details, as much as I can find. However, since it is not mentioned, I just wanted to confirm whether PandaSet provides tracking information, i.e. a static ID unique to one object's bounding box across all frames of a sequence.

Download error

I have submitted the download form many times, but so far I have never received any reply with links.

Apply Patch File

@xpchuan-95 : Hey Peter, I tried sending you emails, but for some reason they bounced all three...

To make sure you got the information, here is my email to you:

Hi Peter,

apologies for that! It is because I hadn't finished implementing the intrinsics change.
Could you do a master-merge or rebase into your branch, and apply the patch file to geometry.py? Possibly I have overlooked something else, but this should help for a first start.

Best,
Nisse

and this is the email attachment:

geometry.zip

about camera model

The image data seems to be of very good quality. Could you provide details on where the camera can be purchased? Thank you very much!

Data definitions and scope

Hi

Thanks for the comprehensive dataset. I wanted to make sure that I understood the data provided.

In the LiDAR data there are 'i' and 'd' values. I was thinking 'i' is the identifier of the object, but I am not sure if 'd' is a measure of distance, since it has values from 0 to 1.

Also, is there a way to limit the LiDAR data to only what the front and back cameras see?

Thanks
Amine

Intensity/Range spherical image from Pandar64 point cloud

I have some problems when I project the Pandar64 3D point cloud into a spherical image. Here is the little snippet:

import math
import numpy as np
import matplotlib.pyplot as plt
import pandaset
from pandaset import geometry

# load dataset
dataset = pandaset.DataSet("/path/to/dataset")
seq001 = dataset["001"]
seq001.load()


np.set_printoptions(precision=4, suppress=True)

# generate projected points
seq_idx = 0
lidar = seq001.lidar

# useless pose ?
pose = lidar.poses[seq_idx] 
pose_homo_transformation = geometry._heading_position_to_mat(pose['heading'], pose['position'])
print(pose_homo_transformation)

data = lidar.data[seq_idx]
# this retrieve both pandarGT and pandar64
both_lidar_clouds = lidar.data[seq_idx].to_numpy()
# get only points belonging to pandar 64 mechanical lidar
idx_pandar64 = np.where(both_lidar_clouds[:, 5] == 0)[0]
points3d_lidar_xyzi = both_lidar_clouds[idx_pandar64][:, :4]
print("number of points of mechanical lidar Pandar64:", len(idx_pandar64))
print("number of points of lidar PandarGT:", len(data)-len(idx_pandar64))

num_rows = 64                 # the number of laser beams
num_columns = int(360 / 0.2)  # horizontal field of view / horizontal angular resolution

# vertical fov of pandar64, 40 deg
fov_up = math.radians(15)
fov_down = math.radians(-25)

# init empty imgages
intensity_img = np.full((num_rows, num_columns), fill_value=-1, dtype=np.float32)
range_img = np.full((num_rows, num_columns), fill_value=-1, dtype=np.float32)

# get abs full vertical fov
fov = np.abs(fov_down) + np.abs(fov_up) 

# transform points
# R = pose_homo_transformation[0:3, 0:3]
# t = pose_homo_transformation[0:3, 3]
# # print(R)
# # print(t)
# points3d_lidar_xyzi[:, :3] = points3d_lidar_xyzi[:, :3] @ np.transpose(R)

# get depth of all points
depth = np.linalg.norm(points3d_lidar_xyzi[:, :3], 2, axis=1)

# get scan components
scan_x = points3d_lidar_xyzi[:, 0]
scan_y = points3d_lidar_xyzi[:, 1]
scan_z = points3d_lidar_xyzi[:, 2]
intensity = points3d_lidar_xyzi[:, 3]

# get angles of all points
yaw = -np.arctan2(scan_y, scan_x)
pitch = np.arcsin(scan_z / depth)

# get projections in image coords
proj_x = 0.5 * (yaw / np.pi + 1.0)                  # in [0.0, 1.0]
proj_y = 1.0 - (pitch + abs(fov_down)) / fov        # in [0.0, 1.0]

# scale to image size using angular resolution
proj_x *= num_columns                              # in [0.0, width]
proj_y *= num_rows                                 # in [0.0, heigth]

# round and clamp for use as index
proj_x = np.floor(proj_x)
out_x_projections = proj_x[np.logical_or(proj_x > num_columns, proj_x < 0)] # just to check how many points out of image  
proj_x = np.minimum(num_columns - 1, proj_x)
proj_x = np.maximum(0, proj_x).astype(np.int32)   # in [0,W-1]

proj_y = np.floor(proj_y)
out_y_projections = proj_y[np.logical_or(proj_y > num_rows, proj_y < 0)] # just to check how many points out of image
proj_y = np.minimum(num_rows - 1, proj_y)
proj_y = np.maximum(0, proj_y).astype(np.int32)   # in [0,H-1]

print("projections out of image: ", len(out_x_projections), len(out_y_projections))
print("percentage of points out of image bound: ", len(out_x_projections)/len(idx_pandar64)*100, len(out_y_projections)/len(idx_pandar64)*100)

# order in decreasing depth
indices = np.arange(depth.shape[0])
order = np.argsort(depth)[::-1]
depth = depth[order]
intensity = intensity[order]
indices = indices[order]
proj_y = proj_y[order]
proj_x = proj_x[order]

# assign to images
range_img[proj_y, proj_x] = depth
intensity_img[proj_y, proj_x] = intensity

plt.figure(figsize=(20, 4), dpi=300)
plt.imshow(intensity_img, cmap='gray', vmin=0.5, vmax=50)#, vmin=0.5, vmax=80)
plt.show()

plt.figure(figsize=(20, 4), dpi=300)
plt.imshow(range_img,vmin=0.5, vmax=80)
plt.show()  

The current projection produces an image cut in half, in which the lower part is completely empty.

[attached images: depth_projection.png, intensity_projection.png]

I've also tried projecting the raw data into a spherical depth/intensity image (like in the raw_depth_projection tutorial) and I get completely different results in terms of quality and resolution.

[attached images: intensity_from_raw_data, depth_from_raw_data]

I don't understand what kind of problem I am having: whether it is related to the point cloud reference frame, to some Pandar64 internal parameters that I am messing up, or to something else. I would really appreciate some help. Thank you in advance.

About semantic segmentation data classes

I have visualized the semantic annotations as below. The black points belong to the class 'Other Static Object', but I think many of them could be classified as building or pole.
[attached image]

Can't download data from Scale AI

Summary

I have tried to download the PandaSet data from Scale AI, but after clicking the download button all that opens is an empty white floating box with the header 'Download Dataset' that doesn't contain any links or redirect. This happens even after I sign in.

It looks like the website is broken; I'm not sure if anyone else is getting this issue.

Are there any other ways to access this data, for example direct download links, URLs or links to where it is hosted on ScaleAI? Thanks.

Tested on

  • Ubuntu 18.04
  • Windows 10
  • Latest Chrome
  • Latest Firefox

Raw data from Pandar GT

For my algorithms, I preferably need the point cloud data in the sensor's frame. This helps with detecting which objects are occluded and which are not.

Would it be possible to also provide the raw data of the Pandar GT as it already has been done with the rotating lidar? @xpchuan-95 @nisseknudsen

Any plan to hold a competition or maintain a benchmark?

Hi there,

Thanks for open-sourcing this great dataset; as far as I know, there is currently no other public dataset that includes MEMS LiDAR data. I'm wondering if there is any plan to hold a competition on Kaggle or at an academic conference workshop, or to maintain a benchmark? I think this would be great for promoting the development of dense point cloud detection methods, and also Scale AI & Hesai.
