
dsec's Introduction

News

  • Nov. 26, 2022 - Lidar and IMU data are now available on the download page.

DSEC

DSEC: A Stereo Event Camera Dataset for Driving Scenarios

This is the code accompanying the dataset and paper by Mathias Gehrig, Willem Aarents, Daniel Gehrig and Davide Scaramuzza.

Visit the project webpage to download the dataset.

If you use this code in an academic context, please cite the following work:

@Article{Gehrig21ral,
  author  = {Mathias Gehrig and Willem Aarents and Daniel Gehrig and Davide Scaramuzza},
  title   = {{DSEC}: A Stereo Event Camera Dataset for Driving Scenarios},
  journal = {{IEEE} Robotics and Automation Letters},
  year    = {2021},
  doi     = {10.1109/LRA.2021.3068942}
}

and

@InProceedings{Gehrig3dv2021,
  author = {Mathias Gehrig and Mario Millh\"ausler and Daniel Gehrig and Davide Scaramuzza},
  title = {E-RAFT: Dense Optical Flow from Event Cameras},
  booktitle = {International Conference on 3D Vision (3DV)},
  year = {2021}
}

Install

In this repository, we provide code for loading data and verifying submissions to the benchmarks. For details regarding the dataset, visit the DSEC webpage.

  1. Clone
git clone git@github.com:uzh-rpg/DSEC.git
  2. Install the conda environment to run the example code
conda create -n dsec python=3.8
conda activate dsec
conda install -y -c anaconda numpy
conda install -y -c numba numba
conda install -y -c conda-forge h5py blosc-hdf5-plugin opencv scikit-video tqdm prettytable imageio
# only for dataset loading:
conda install -y -c pytorch pytorch torchvision cudatoolkit=10.2
# only for visualization in the dataset loading:
conda install -y -c conda-forge matplotlib
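
As a quick sanity check of the environment, the event files can be opened directly with h5py. This is only a minimal sketch, assuming the events/x, events/y, events/p, events/t dataset layout described on the DSEC webpage; the file path is a placeholder.

import h5py

# Minimal sketch (not the official loader): read the first few events.
# Requires the blosc HDF5 plugin installed above; the path is a placeholder.
with h5py.File('path/to/events.h5', 'r') as h5f:
    x = h5f['events/x'][:10]  # pixel column
    y = h5f['events/y'][:10]  # pixel row
    p = h5f['events/p'][:10]  # polarity (0 or 1)
    t = h5f['events/t'][:10]  # timestamp in microseconds (file-relative)
    print(x, y, p, t)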

Disparity Evaluation

We provide a Python script to verify that the structure of the submission directory is correct. Usage example:

python check_disparity_submission.py SUBMISSION_DIR EVAL_DISPARITY_TIMESTAMPS_DIR

where EVAL_DISPARITY_TIMESTAMPS_DIR is the path to the unzipped directory containing the evaluation timestamps. It can be downloaded from the webpage or directly here. SUBMISSION_DIR is the path to the directory containing your submission.

Follow the instructions on the webpage for a detailed description of the submission format.

Optical Flow Evaluation

We provide a Python script to verify that the structure of the submission directory is correct. Usage example:

python check_optical_flow_submission.py SUBMISSION_DIR EVAL_FLOW_TIMESTAMPS_DIR

where EVAL_FLOW_TIMESTAMPS_DIR is the path to the unzipped directory containing the evaluation timestamps. It can be downloaded from the webpage or directly here. SUBMISSION_DIR is the path to the directory containing your submission.

Follow the instructions on the webpage for a detailed description of the submission format.

dsec's People

Contributors

knelk, magehrig


dsec's Issues

Left and right event rectify maps swapped?

Hello,

is it possible that rectification maps for events found in rectify_map.h5 are swapped between left and right event cameras? I tried multiple sequences from zurich_city_04 and get good rectification only when I swap the maps.

Thanks,
Antea

Regarding Ground truth Labels for object detection

Hello,
Thank you for publicly releasing this dataset. Since the dataset contains challenging illumination conditions encountered during driving, will 2D ground-truth bounding boxes for object detection be provided?
This would allow testing and evaluating object detection methods on event cameras.

Calibration model details

Hi, I'm trying to use the feature tracking application on the DSEC dataset.
I would like to know which calibration model the DSEC dataset uses for this purpose, whether it is the equidistant or the radtan distortion model. Kindly provide your feedback.

Event frequency

Hi,
I have a question regarding the frequency at which the events are recorded.
The documentation says that the frames are triggered at 20 Hz and the stream of events is received at 20 Hz. May I know what the minimum time resolution to record one event is?
Thanks in advance.

Event and Image Alignment

Hello, I am working on Event-Intensity Stereo.
As the resolutions of the event camera and the frame camera differ, I want to know how to align the events and intensity images from the event camera and the RGB camera on the same side. Is it just a simple resize or linear interpolation operation?

Regarding Odometry Groundtruth

Hi Mathias,

Do you provide the ground-truth trajectories of the left event camera in the dataset? I noticed that an RTK receiver was deployed there.

Also, will the GT trajectory be available in the CVPRW competition?

Cheers,

yi

DSEC website is down

Hi Mathias,

The DSEC website (dsec.ifi.uzh.ch) seems to be down. Could you have a look at it when you have some time?

Thanks in advance!
Federico Paredes-Valles

Camera biases

Hi @magehrig,
Many thanks for your cool work and for making the dataset publicly available! I would like to ask whether you could share the camera parameter settings for the different conditions. In the paper you state that the settings differ between daytime and nighttime, so it would be great if you could share the biases; I'd like to collect some of my own data with the same camera, and it would be nice if the data were comparable between the datasets.

Many thanks,
Tobias

Align DVS events in APS frame

According to the readme, I am assuming that in order to align the events from the left event camera to the rectified left frame camera (resized to 640x480), I need to first apply T_01 and then R_rect0 from cam_to_cam.yaml.

events = event_slicer.get_events(t_end_us - 5E4, t_end_us)
p = events['p']
x = events['x']
y = events['y']
t = events['t']
coords_4d = np.stack((x, y, np.ones_like(x), np.ones_like(x)))
evt_coord_in_aps = np.matmul(t_01, coords_4d)
evt_coord_rected = np.matmul(r_rect0, evt_coord_in_aps[:3, :])

where t_end_us is retrieved from image_timestamps.txt for the corresponding APS frame.

However, when I try to render the events on the image (replacing the empty image with the resized APS frame in the render() function in scripts/events_to_video.py), the events do not align with the image edges. I'm not sure what I did wrong here.

P.S. I believe there is a bug in render() when generating the masks. It should be:

mask1 = (x >= 0) & (y >= 0) & (x < h) & (y < w)
mask[y[mask1], x[mask1]] = p[mask1]

An example of frame 000001.png in sequence zurich_city_04_e

HDF5 packager

Hi @magehrig,

The reason for this issue is that I would like to ask whether you are planning to share the HDF5 packager (or details of it) that you used for this dataset. I've been playing around with it a little bit, and if I try to encode event data from your sequences using int16 for location, bool for polarity, and float64 for timestamps, the files become 4x bigger than yours. I also tried playing around with some other variable types, but never got close to your file size. This makes me think that there is a substantial difference between our packagers, as yours is significantly more memory-efficient.

Thanks for making the dataset publicly available and congrats on your recent work!

Best,
Federico Paredes-Valles

Offset in event and frame timestamps

Hi,
Thank you for creating this dataset.
I'm creating a rosbag with events and images as topics. For this process, I'm using the events_left and images_rectified_left alone.
The documentation says that we need to add the t_offset to the event timestamps for synchronization with the image data. I have a doubt here.

For instance, on the zurich_city_04_b dataset:
The min and max timestamps in microseconds of the events_left are 36620700656 and 36620701155 respectively.
And the t_offset is 36607300656 microseconds.
Adding the offset to the min and max timestamps, it's 73228001312 and 73228001811 respectively.

The average exposure time stamps of the left image are computed by taking the average of the start and end exposure time in the image_exposure_timestamps_left.txt. The min and max of the average exposure time are 36607300936 and 36620701497 respectively.

It looks like adding the offset to the event timestamps causes a big difference between the image and event timestamps within a dataset. Should the offset also be added to the image exposure times to achieve synchronization? Please provide your feedback on this. Thanks a lot.
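
For reference, a hedged sketch of how the offset is typically applied: t_offset is added only to the (file-relative) event timestamps, which should bring them into the same microsecond time base as the image exposure timestamps. The dataset names and the comma-separated start/end format of the exposure file are assumptions based on the layout referenced above.

import h5py
import numpy as np

# Hedged sketch: bring event timestamps into the same time base as the
# image exposure timestamps by adding t_offset. Paths are placeholders.
with h5py.File('events.h5', 'r') as h5f:
    t_offset = int(h5f['t_offset'][()])              # microseconds
    t_events = np.asarray(h5f['events/t']) + t_offset

# assumption: one "start_us, end_us" pair per line
exposure = np.loadtxt('image_exposure_timestamps_left.txt', delimiter=',', dtype=np.int64)
t_images = exposure.mean(axis=1)                      # mid-exposure timestamps

print(t_events.min(), t_events.max(), t_images.min(), t_images.max())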

Do you have the payload specs?

Hi,

Just saw the readme with the sensor suite. Do you have the sensor suite specs? Also, are you planning to open-source the data collection programs that go with the sensor package?

Question about rectification result

Hello,
I've been trying to visualize rectified events using the rectification maps that you provided, but I ended up with some strange blank lines (upper images), in contrast to the normal visualization (lower images). Did I miss some operation that I have to apply first?

Thanks a lot,
Konrad

question about disparity/timestamps.txt

I'm trying to divide the event stream according to the disparity labels; however, I find that in the disparity folder the number of timestamps doesn't match the number of disparity maps (e.g. in interlaken_00_e, 996 and 1991 respectively).

I also see that in a disparity folder (e.g. interlaken_00_e/disparity), the number of disparity maps (.png files) is the same as the number of RGB images in the corresponding sequence (e.g. interlaken_00_e/images/left/rectified), which makes sense since they are time synchronized.

However, the timestamps.txt of the disparity folder (e.g. interlaken_00_e/disparity/timestamps.txt) contains only half as many lines as the image timestamps (interlaken_00_e/images/left/exposure_timestamps.txt and the averaged one).

My question is: why do these timestamp files differ in the number of lines? And if I want to divide the event stream according to the disparity labels, which timestamp file should I refer to?

As I couldn't find an explanation on the DSEC data-format page, I'm asking here.

Wrong Compression in Docs

The docs say that the hdf5 files are compressed with zstd. This is incorrect, at least for interlaken_00_c's left event stream.

$ h5dump -pH events.h5 | grep 'FILTER\|COMMENT'
         FILTERS {
            USER_DEFINED_FILTER {
               FILTER_ID 32001
               COMMENT blosc
         FILTERS {
            USER_DEFINED_FILTER {
               FILTER_ID 32001
               COMMENT blosc
         FILTERS {
            USER_DEFINED_FILTER {
               FILTER_ID 32001
               COMMENT blosc
         FILTERS {
            USER_DEFINED_FILTER {
               FILTER_ID 32001
               COMMENT blosc
      FILTERS {
         USER_DEFINED_FILTER {
            FILTER_ID 32001
            COMMENT blosc
      FILTERS {

This might be critical for folks who want to access such data. Either the documentation should be changed or the files re-compressed and re-uploaded.

RGB camera frame to event camera frame view point transformation

I hope you are doing well. I want to thank you for providing such a large-scale event stereo dataset that contains event data from event cameras and standard RGB frames from the Blackfly cameras.
I am interested in working with this amazing dataset, so I started studying it in detail. While doing so, I learned that two different stereo setups were used to collect the event and RGB camera data. The resolution (640x480 vs 1440x1080) and the baseline (60 cm vs 51 cm) differ between the two stereo setups, so I would be happy if you could provide additional information or an approach to solve the following issue.

  1. How can the frame camera images be projected/transformed/mapped to the event camera frames (left RGB frame -> left event camera, right RGB frame -> right event camera) so that the resolution and FOV are the same (e.g., an event camera such as the DAVIS 346 provides events as well as standard camera frames with the same FOV and resolution)?

I am waiting for your valuable response.

The bottom of the aligned image is missing

Using this code #25 (comment), I found that some content is missing at the bottom of the aligned image, just like the picture in #25 (comment).
I think this is because these areas do not exist in the original image, but these areas do have events. Are there disparity and optical flow GT for them in the test set? And will this negatively affect the performance of image-based algorithms?

mapping between event frames and RGB frames

Hi,

I noticed the code in #25 (comment) that generates event frames (640x480) from RGB frames (1440x1080). The mapping in that code gives the positions of points of the event frames in the RGB frames. I am wondering whether you could provide the mappings by which we could get the positions of points of the RGB frames in the event frames. Thank you so much.

Lidar data

Thanks for your open-source work. Can you tell me whether the corresponding lidar data in the dataset is open source? I can't find it on the website. I'm looking forward to your reply!

Reprojecting disparity causes grid artifacts.

Thanks for your impressive work.
I am working on rectifying the left event camera and the right frame camera and reprojecting the disparity to the view of my rectified version. The first rectification step works well thanks to the accurate intrinsic and extrinsic camera matrices. However, when I reproject the officially released disparity maps, which are aligned with camRect0 (rectified left event camera), to the view of my rectified version, I find obvious grid artifacts on the reprojected disparity map, as seen in the following figure.
Could you kindly help me figure out any reason or solution? Thank you so much.

OSError: Can't read data (can't open directory)

When I want to read events from events.h5 using h5py, I get this error:

 File "f:/event_data/DSEC/DSEC-main/test.py", line 32, in <module>
    for data in temp['events']['p']:
  File "E:\anaconda\envs\pytorch\lib\site-packages\h5py\_hl\dataset.py", line 695, in __iter__
    yield self[i]
  File "h5py\_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
  File "h5py\_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
  File "E:\anaconda\envs\pytorch\lib\site-packages\h5py\_hl\dataset.py", line 741, in __getitem__
    return self._fast_reader.read(args)
  File "h5py\_selector.pyx", line 370, in h5py._selector.Reader.read
OSError: Can't read data (can't open directory)

This is my code:

temp = h5py.File(r'F:\event_data\DSEC\thun_01_a_events_right\events.h5')
print(temp['events']['p'][:])

I am sure that I downloaded the correct data from the website, so could you please suggest some solutions?

Can't convert events to video

Hi.

I'm trying to run events_to_video.py, but an AssertionError occurs.
My environment and the error are below.

OS: Ubuntu 20.04
Python: 3.8.10
Not use conda but venv

The error is here:

(DSEC) user@user:~/python_ws/DSEC/scripts$ python3 events_to_video.py event_file /home/user/Datasets/DSEC/train/interlaken_00_c/events/left/events.h5 
Traceback (most recent call last):
  File "events_to_video.py", line 43, in <module>
    writer = skvideo.io.FFmpegWriter(video_filepath)
  File "/home/user/python_ws/DSEC/lib/python3.8/site-packages/skvideo/io/ffmpeg.py", line 88, in __init__
    super(FFmpegWriter,self).__init__(*args, **kwargs)
  File "/home/user/python_ws/DSEC/lib/python3.8/site-packages/skvideo/io/abstract.py", line 366, in __init__
    assert str.encode(
AssertionError: Unknown encoder extension: .h5

If this issue is a duplicate, I'm sorry, but could you attach the link?

Thank you.

Requesting ground truth depth images

Hi! I am working on event stereo to estimate depth from the scene. Since the dataset only provides ground truth disparity maps, may I have the ground truth depth images for all the sequences? Or could I know the focal length and baseline parameters so that I can generate depth from disparity? Thanks!
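
For what it's worth, depth can usually be recovered from disparity with the standard stereo relation depth = focal_length * baseline / disparity. The sketch below is only an illustration: the focal length and baseline values are placeholders (the real ones are in the calibration files), and the 16-bit PNG scaling factor of 256 is an assumption.

import imageio
import numpy as np

# Hedged sketch: convert a 16-bit disparity PNG into metric depth.
disp_raw = imageio.imread('disparity/000000.png').astype(np.float32)
disparity = disp_raw / 256.0            # assumption: fixed-point disparity * 256
valid = disparity > 0

fx = 550.0                              # placeholder focal length [px], read from calibration
baseline = 0.6                          # placeholder baseline [m], read from calibration

depth = np.zeros_like(disparity)
depth[valid] = fx * baseline / disparity[valid]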

Rectification between event and RGB camera

Thank you very much for your hard work on the dataset!

From what I understand, the DSEC dataset seems to have performed rectification between the RGB cameras and between the event cameras.
It also seems to provide ground truth disparity maps corresponding to that rectification. I was looking for a rectification between the event and RGB cameras that maintains the ground truth disparity.

When I checked all the related issues, it seems that #6 gave a similar answer (thanks very much to the authors...)
First, I tried to apply the method using the OpenCV library that you pointed out (OpenCV's stereoRectify).
However, this seems to be a new rectification, in which case the provided ground truth disparity map cannot be used, right?
(This is because when a new rectification is performed, the points are moved to a new camera coordinate system.)

So, next, I tried to apply the manual rectification method you suggested in #6 while using the rectified image as it is, without deformation. As you mentioned, I can use standard OpenCV functions to rectify the image according to the new rectification. First of all, I try to match the right distorted event camera to the left rectified RGB camera. In the case of the event camera, the rectification map can be obtained as follows:

coords = np.stack(np.meshgrid(np.arange(width), np.arange(height))).reshape((2, -1)).astype("float32")
term_criteria = (cv2.TERM_CRITERIA_MAX_ITER | cv2.TERM_CRITERIA_EPS, 100, 0.001)
points = cv2.undistortPointsIter(coords, K, dist_coeffs, R_rect_unrect, K_rect, criteria=term_criteria)
inv_map = points.reshape((height, width, 2))

However, in this case, I don't know R_rect_unrect and K_rect, because these parameters refer to the R and K of the event camera rectified with respect to the rectified RGB camera (not the event camera), which are not provided in the calibration file. Can they be obtained simply by properly combining the extrinsics and intrinsics from the calibration?

Maybe I misunderstood? I would like to know if it is possible to manually obtain the rectified R and K of the event camera with respect to the RGB camera from the calibration file.

Once again, thank you for doing this work and for answering so many questions!

Help me understand this line of code of indexing

Hi,
Thank you for the work.

I am sorry for asking a novice question, but I can't understand the indexing in the following lines. How is this calculating the spatial location for event accumulation?

index = H * W * tlim.long() + \
W * ylim.long() + \
xlim.long()
voxel_grid.put_(index[mask], interp_weights[mask], accumulate=True)
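
For reference, a small self-contained sketch of the same flat-indexing idea (not the repository's exact code): a voxel grid of shape (B, H, W) is viewed as a flat 1D buffer, so a bin index t, row y and column x map to the linear offset t*H*W + y*W + x, and put_ with accumulate=True adds the interpolation weights at those offsets.

import torch

# Hedged sketch: accumulate event weights into a (B, H, W) voxel grid
# through its flattened 1D view.
B, H, W = 3, 4, 5
voxel_grid = torch.zeros(B * H * W)

# toy events: temporal bin, row, column, interpolation weight
tlim = torch.tensor([0, 1, 2])
ylim = torch.tensor([1, 2, 3])
xlim = torch.tensor([0, 4, 2])
interp_weights = torch.tensor([0.5, 1.0, 0.25])

# linear offset of voxel (t, y, x) inside the flattened (B, H, W) tensor
index = H * W * tlim.long() + W * ylim.long() + xlim.long()
voxel_grid.put_(index, interp_weights, accumulate=True)

print(voxel_grid.view(B, H, W))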

Thanks in advance.

Regards,
Sayed

Baseline implementation details

Hi Mathias,

First of all, thank you for your excellent work!
As described in the DSEC dataset paper, you use a voxel grid event representation to replace the event queues. I would like to know the exact number of voxel grid bins and whether the DDES embedding module (continuous fully-connected layer) is used.

Looking forward to your reply.

GPS Data

Hello, I am working on visual place recognition using your dataset, but I cannot find the GPS values. Can you tell me how I can find them? Thank you very much.

AssertionError: Unknown encoder extension:

Hi everyone,

It's my first attempt with the DSEC dataset. I'm trying to visualize the event file, but I keep getting the same error. I checked some forums and found that the problem came from the installation of ffmpeg, but as far as I can tell, it's properly installed on my macOS
and Ubuntu machines. Do you have any advice?

Extracting fundamental matrix

Hi, first I would like to thank you very much for providing this dataset. I am a newbie to event cameras and I would like to implement my first algorithm for 3D reconstruction. One thing I need for it is the fundamental matrix, which relates corresponding points in stereo event cameras and can be used to compute epipolar lines. I have never worked with yaml calibration files before and I am a little confused by the amount of data in them. Is the matrix I am looking for one of the provided T_cn_cnm1 matrices, and if so, which one? Also, all of them are of size 4x4, which also confuses me because I have been reading that camera matrices for 2D points should be 3x3.

Thanks a lot!
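
For reference, a hedged sketch of how a fundamental matrix is usually assembled from a 4x4 relative transform (rotation R and translation t) and two 3x3 intrinsic matrices. This is the standard formula F = K2^{-T} [t]_x R K1^{-1}, not something specific to DSEC; which transform and intrinsics to take from the yaml file depends on the camera pair.

import numpy as np

def skew(t):
    # cross-product (skew-symmetric) matrix of a 3-vector
    return np.array([[0, -t[2], t[1]],
                     [t[2], 0, -t[0]],
                     [-t[1], t[0], 0]])

def fundamental_from_calib(K1, K2, T_21):
    # T_21: 4x4 transform mapping points from camera 1 to camera 2
    R = T_21[:3, :3]
    t = T_21[:3, 3]
    E = skew(t) @ R                                  # essential matrix
    F = np.linalg.inv(K2).T @ E @ np.linalg.inv(K1)  # x2^T F x1 = 0
    return F / np.linalg.norm(F)                     # scale is arbitrary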

Unable to read h5 event data

Hello,

I am having issues with accessing data in the events.h5 files. I am using the fresh Anaconda environment as per your instructions provided in the readme with all dependencies installed properly. Here is the Python code snippet:

import h5py
import numpy as np
import tables as tb

filename = "/home/antea/datasets/dsec/events.h5"

h5f = h5py.File(filename, "r")
events = dict()

for dset_str in ['p', 'x', 'y', 't']:
  events[dset_str] = h5f['events/{}'.format(dset_str)]
 
print(events['x'])
print(events['x'][0])

and here is the output I get:

<HDF5 dataset "x": shape (129563187,), type "<u2">
Traceback (most recent call last):
  File "h5.py", line 14, in <module>
    print(events['x'][0])
  File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
  File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
  File "/home/antea/.local/lib/python3.8/site-packages/h5py/_hl/dataset.py", line 790, in __getitem__
    self.id.read(mspace, fspace, arr, mtype, dxpl=self._dxpl)
  File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
  File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
  File "h5py/h5d.pyx", line 192, in h5py.h5d.DatasetID.read
  File "h5py/_proxy.pyx", line 112, in h5py._proxy.dset_rw
  OSError: Can't read data (can't open directory: /usr/local/hdf5/lib/plugin)

Basically, I can access all the information about the dataset (shape, type, etc.), but I can't read the actual data. Do you have any idea what might be the problem? I am not facing this issue when reading the rectify_map.h5 files, only the events.

Thanks,
Antea
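
Not an official answer, but this error usually means that HDF5 cannot find the Blosc compression filter. Installing blosc-hdf5-plugin in the active conda environment (as in the install instructions above) commonly fixes it; a hedged alternative is the pip package hdf5plugin, which registers the filters when imported.

import hdf5plugin  # registers Blosc and other HDF5 filters for h5py
import h5py

with h5py.File('/home/antea/datasets/dsec/events.h5', 'r') as h5f:
    print(h5f['events/x'][0])  # should now decompress instead of raising OSError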

Accessing data with blosc-hdf5-plugin in windows 10

Hello,

I am trying to access the events data using the blosc-hdf5-plugin package, but it is not available for Anaconda installed on Windows. Is there any way to access the data using the given code on Windows? Without it, I am getting the following error, as pointed out in another issue:

Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "h5py\_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
File "h5py\_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
File "C:\Users\Ashwin\anaconda3\envs\optical_flow\lib\site-packages\h5py-2.10.0-py3.6-win-amd64.egg\h5py\_hl\dataset.py", line 573, in __getitem__
self.id.read(mspace, fspace, arr, mtype, dxpl=self._dxpl)
File "h5py\_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
File "h5py\_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
File "h5py\h5d.pyx", line 182, in h5py.h5d.DatasetID.read
File "h5py\_proxy.pyx", line 130, in h5py._proxy.dset_rw
File "h5py\_proxy.pyx", line 84, in h5py._proxy.H5PY_H5Dread
OSError: Can't read data (can't open directory)

Thanks in advance!

Requesting IMU data

Hi,

I'm working on visual-inertial odometry on the DSEC dataset. May I have the IMU data (linear acceleration and angular velocity) for all the sequences under zurich_city_04, and also the calibration parameters for the transformation matrices between all the cameras and the IMU device? Thanks in advance.

Question about the MVSEC

Thank you for your excellent work!

Now I'm working on depth estimation based on events and images.
When I created the MVSEC dataset following the work "Learning an event sequence embedding for dense event-based deep stereo", I ran into some problems.
Therefore, I would like to ask whether you could share the final data files?
MVSEC

Thank you very much!

Disparity between event and RGB camera

Hello, thank you for providing this awesome dataset!

My understanding is that DSEC provides two sets of disparity maps, one between the two RGB cameras and another between the two event cameras. I am just wondering whether the disparity between an event camera and an RGB camera is available, or whether it could be computed from the camera intrinsics/extrinsics?

Question regarding optical flow GT

Hi authors,

thank you for providing the great dataset.
I have a couple of general questions regarding your optical flow. Sorry if it is noted or asked before.

  1. (Quick double-check) According to the data format page, it is displacement, not velocity, so it is basically pix/100ms values. Is my understanding correct?
  2. Only 18 training sequences out of 40+ have optical flow (for example, zurich_city_00_a, b, etc. do not have it). Is this intentional - do you not plan to release the GT for the rest of the sequences?
  3. The training sequence file index advances by 2 (like 00002.png, 00004.png, etc.), but the test sequence file index advances by 10 (like 820, 830, etc.). Why is that? Is there any difference in the time period of the displacement?

Shintaro

Event undistortion

Hello, I am looking at your great dataset.

I suppose the raw event data were not preprocessed with lens undistortion?
So I tried undistortion with the intrinsics you provided, but the result looks weird; it seems as if the data were already undistorted.

So, just to make sure: have you already undistorted the raw event data?

Thanks!

Reproducing the Rectification Map

Hello,

I am trying to reproduce the values provided in the rectification map by utilizing the camera calibration parameters.

# xys: pixel coordinates (Nx2), cams: parsed cam_to_cam.yaml
import numpy as np
import cv2

R = np.array(cams['extrinsics']['R_rect0'])
dist = np.array(cams['intrinsics']['cam0']['distortion_coeffs'])
k_dist = np.array(cams['intrinsics']['cam0']['camera_matrix'])
k_rect = np.array(cams['intrinsics']['camRect0']['camera_matrix'])

K_dist = np.array([[k_dist[0], 0, k_dist[2]],
                   [0, k_dist[1], k_dist[3]],
                   [0, 0, 1]])

xys_K = cv2.undistortPoints(xys, K_dist, dist).reshape(-1, 2)
xys_K = np.concatenate([xys_K, np.ones([len(xys_K), 1])], axis=1)

xys_rect = xys_K.dot(R.transpose())[:, :2] * k_rect[:2] + k_rect[2:4]

Unfortunately, my approach delivers different results. For example, for the corners of the image

[[0, 0],
 [639, 0],
 [0, 479],
 [639, 479]]

I get

[[-13.74142628  -4.9402896 ]
 [654.37295098  -4.84200129]
 [-12.39900617 491.82189056]
 [650.87186114 496.71564016]]

whereas the rectification map supplies

[[-11.992482   -3.8135486]
 [656.11633    -6.081475 ]
 [-10.26636   490.16754  ]
 [652.2097    497.89062  ]]

Is there something I missed?

Ultimately, I would like to transform coordinates from the rectified camera into the distorted camera, i.e. get the inverse of the rectification map.
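
For comparison, a hedged sketch: cv2.undistortPoints can apply the rectifying rotation R and the new camera matrix in a single call, which avoids the manual rotation step. (One assumption about the discrepancy above: after multiplying the normalized points by R, they must be divided by their third coordinate again before applying K_rect.)

import cv2
import numpy as np

# Hedged sketch: K_dist, dist, R, k_rect and xys are the same quantities as
# in the snippet above.
K_rect = np.array([[k_rect[0], 0, k_rect[2]],
                   [0, k_rect[1], k_rect[3]],
                   [0, 0, 1]])

xys_rect = cv2.undistortPoints(
    xys.reshape(-1, 1, 2).astype(np.float32), K_dist, dist, R=R, P=K_rect
).reshape(-1, 2)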

Thanks!

Events and frame alignment

Hi,
I used the events from the events_left hdf5 file and images_rectified_left to get the events and images of the zurich_city_04_b dataset. I used the rectify.h5 file to get the rectified events, but I still get an image like the one below:
May I know what transformations have to be applied to the events or the image files so that we get the events aligned with the image?

I understand that we need to convert the 2D point of an event to a 3D point, apply the transformation T01 mentioned in the cam_to_cam.yaml file, and get the 2D point in the other frame again. Could you provide an example of this?

I'm working with rosbags. Could you please share the code for generating the rosbag mentioned in issue #12 (https://download.ifi.uzh.ch/rpg/tmp/interlaken_00_kalibr.bag)?

Thanks in advance.

Viewpoint difference between event and RGB camera on the same side

Hello, I noticed that there is quite some viewpoint difference between the rectified event and rectified RGB images on the same side, for example the following alpha-blended image of Cam0_rect and Cam1_rect.

This can be a problem if somebody wants to compare disparity maps between the event camera and the RGB camera.

Personally, I think it can be solved by reprojecting the two cameras' views to the same attitude (so that the views of cam0_rect and cam1_rect are completely aligned, pixel to pixel), but with the extrinsics you provide I could not achieve that. I wonder if you have tried this? Are the extrinsics between the event and RGB cameras accurate enough?

Thanks a lot!

Regarding Rectification Map

Hi,

I found that the disparity_events GT is misaligned with the rectified event map (obtained by accumulating a number of events on the image plane without motion compensation). To double-check this, I would like to confirm the following definition with you.

I wonder if the provided rectify_map is equivalent to that used by the cv.remap function (https://docs.opencv.org/3.4/da/d54/group__imgproc__transform.html#gab75ef31ce5cdfb5c44b6da5f3b908ea4).

The "rectify_map" looks like the forward mapping funciton, while the one used by the cv.remap function is the inverse mapping function, right?

Thanks in advance.
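
For reference, a hedged sketch of how a forward per-pixel map is typically used with events (a direct lookup per event), as opposed to cv.remap, which expects the inverse map for warping whole images. The dataset name and the H x W x 2 layout (rectified coordinates stored for each raw pixel) are assumptions here.

import h5py
import numpy as np

# Hedged sketch: per-event forward lookup into the rectification map.
with h5py.File('rectify_map.h5', 'r') as h5f:
    rectify_map = h5f['rectify_map'][()]   # assumed shape (H, W, 2)

# x, y: raw (distorted) event coordinates, e.g. read from events.h5
x = np.array([10, 200, 630])
y = np.array([5, 240, 470])

xy_rect = rectify_map[y, x]                 # (N, 2) rectified coordinates
x_rect, y_rect = xy_rect[..., 0], xy_rect[..., 1]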
