
Low Latency Automotive Vision with Event Cameras

DAGR

This repository contains code for our 2024 Nature paper, Low Latency Automotive Vision with Event Cameras by Daniel Gehrig and Davide Scaramuzza, which can be accessed for free here: PDF Open Access. If you use our code or refer to this project, please cite it using

@Article{Gehrig24nature,
  author  = {Gehrig, Daniel and Scaramuzza, Davide},
  title   = {Low Latency Automotive Vision with Event Cameras},
  journal = {Nature},
  year    = {2024}
}

Installation

First, clone the GitHub repository and set up the working paths

WORK_DIR=/path/to/work/directory/
cd $WORK_DIR
git clone git@github.com:uzh-rpg/dagr.git
DAGR_DIR=$WORK_DIR/dagr

cd $DAGR_DIR 

Then start by installing the main libraries. Make sure Anaconda (or better, Mamba), PyTorch, and CUDA are installed.

cd $DAGR_DIR
conda create -y -n dagr python=3.8 
conda activate dagr
conda install -y setuptools==69.5.1 mkl==2024.0 pytorch==1.11.0 torchvision==0.12.0 torchaudio==0.11.0 cudatoolkit=11.3 -c pytorch

Then install the pytorch-geometric libraries. This may take a while.

bash install_env.sh
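As a rough, hypothetical sketch of the kind of version-string handling install_env.sh performs (not its actual code), a version string like 1.11.0+cu113 can be split with plain shell parameter expansion:

```shell
# Hypothetical example: split a torch version string of the form "<torch>+<cuda>"
TORCH_FULL="1.11.0+cu113"          # assumed format, e.g. from torch.__version__
TORCH_VERSION="${TORCH_FULL%%+*}"  # strip the "+cu113" suffix -> "1.11.0"
CUDA_TAG="${TORCH_FULL##*+}"       # keep only the suffix      -> "cu113"
echo "torch=${TORCH_VERSION} cuda=${CUDA_TAG}"
```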

install_env.sh detects the installed CUDA and Torch versions and installs the matching pytorch-geometric packages. Then, download and install additional dependencies locally

bash download_and_install_dependencies.sh
conda install -y h5py blosc-hdf5-plugin

Finally, install the dagr package

pip install -e .

Run Example

After installing, you can download a data fragment and a checkpoint with

bash download_example_data.sh

This will download a checkpoint and data fragment of DSEC-Detection on which you can test the code. Once downloaded, run the following command

LOG_DIR=/path/to/log
DEVICE=1
CUDA_VISIBLE_DEVICES=$DEVICE python scripts/run_test_interframe.py --config config/dagr-s-dsec.yaml \
                                                                   --use_image \
                                                                   --img_net resnet50 \
                                                                   --checkpoint data/dagr_s_50.pth \
                                                                   --batch_size 8 \
                                                                   --dataset_directory data/DSEC_fragment \
                                                                   --no_eval \
                                                                   --output_directory $LOG_DIR

Note the wandb run directory created under $LOG_DIR (referred to as $WANDB_DIR below), and then visualize the detections with

python scripts/visualize_detections.py --detections_folder $LOG_DIR/$WANDB_DIR \
                                       --dataset_directory data/DSEC_fragment/test \
                                       --vis_time_step_us 1000 \
                                       --event_time_window_us 5000 \
                                       --sequence zurich_city_13_b
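The run folder name under $LOG_DIR is generated by wandb, so it varies between runs. If you don't want to copy it by hand, one option is to pick the most recently modified entry; a self-contained sketch with made-up folder names:

```shell
# Self-contained demo: create a fake log directory with two run folders,
# then pick the most recently modified one (folder names are placeholders)
LOG_DIR=$(mktemp -d)
mkdir "$LOG_DIR/run_20240101_000000"
sleep 1
mkdir "$LOG_DIR/run_20240102_000000"
WANDB_DIR=$(ls -t "$LOG_DIR" | head -n 1)   # newest entry first
echo "$WANDB_DIR"
```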

Test on DSEC

Start by downloading the DSEC dataset and the additional labelled data introduced in this work. To do so, follow these instructions. They are based on the scripts of dsec-det, which can be found in libs/dsec-det/scripts. To continue, complete the sections from Download DSEC through Test Alignment. If you already downloaded DSEC, make sure $DSEC_ROOT points to it, and instead start at section Download DSEC-extra.

After downloading all the data, change back to $DAGR_DIR, and start by downsampling the events

cd $DAGR_DIR
bash scripts/downsample_all_events.sh $DSEC_ROOT

Running Evaluation

This repository implements three scripts for evaluating the model on DSEC-Det. The first evaluates the detection performance of the model after seeing one image and the subsequent 50 milliseconds of events. To run it, specify a device and a logging directory

LOG_DIR=/path/to/log
DEVICE=1
CUDA_VISIBLE_DEVICES=$DEVICE python scripts/run_test.py --config config/dagr-s-dsec.yaml \
                                                        --use_image \
                                                        --img_net resnet50 \
                                                        --checkpoint data/dagr_s_50.pth \
                                                        --batch_size 8 \
                                                        --dataset_directory $DSEC_ROOT \
                                                        --output_directory $LOG_DIR

Then, to evaluate the number of FLOPs performed in asynchronous mode, run

LOG_DIR=/path/to/log
DEVICE=1
CUDA_VISIBLE_DEVICES=$DEVICE python scripts/count_flops.py --config config/dagr-s-dsec.yaml \
                                                           --use_image \
                                                           --img_net resnet50 \
                                                           --checkpoint data/dagr_s_50.pth \
                                                           --batch_size 8 \
                                                           --dataset_directory $DSEC_ROOT \
                                                           --output_directory $LOG_DIR

Finally, to evaluate the interframe detection performance of our method, run

LOG_DIR=/path/to/log
DEVICE=1
CUDA_VISIBLE_DEVICES=$DEVICE python scripts/run_test_interframe.py --config config/dagr-s-dsec.yaml \
                                                                   --use_image \
                                                                   --img_net resnet50 \
                                                                   --checkpoint data/dagr_s_50.pth \
                                                                   --batch_size 8 \
                                                                   --dataset_directory $DSEC_ROOT \
                                                                   --output_directory $LOG_DIR \
                                                                   --num_interframe_steps 10
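Assuming the 50 ms inter-image interval implied by the evaluation above, --num_interframe_steps 10 corresponds to one detection every 5 ms between frames; a quick sanity check of that arithmetic:

```shell
FRAME_INTERVAL_US=50000   # assumed 50 ms between consecutive images
NUM_STEPS=10              # value passed to --num_interframe_steps
STEP_US=$(( FRAME_INTERVAL_US / NUM_STEPS ))
echo "${STEP_US} us between interframe detections"   # 5000 us = 5 ms
```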

This last script will write the high-rate detections from our method into the folder $LOG_DIR/$WANDB_DIR, where $WANDB_DIR is the automatically generated folder created by wandb. To visualize the detections, use the following script:

python scripts/visualize_detections.py --detections_folder $LOG_DIR/$WANDB_DIR \
                                       --dataset_directory $DSEC_ROOT/test/ \
                                       --vis_time_step_us 1000 \
                                       --event_time_window_us 5000 \
                                       --sequence zurich_city_13_b
                                       

This will open a visualization window showing the detections over a given sequence. If you want to save the detections to a video, use the --write_to_output flag, which will create a video in the folder $LOG_DIR/$WANDB_DIR/visualization.
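Assuming --vis_time_step_us is the time step between rendered frames and --event_time_window_us is the span of events drawn in each frame (an interpretation of the flag names, not verified against the script), the values above mean each frame shows 5 ms of events and advances by 1 ms, so consecutive frames overlap:

```shell
VIS_STEP_US=1000      # --vis_time_step_us: assumed step between rendered frames
WINDOW_US=5000        # --event_time_window_us: assumed event span per frame
OVERLAP_US=$(( WINDOW_US - VIS_STEP_US ))
echo "each frame spans ${WINDOW_US} us, overlapping the next by ${OVERLAP_US} us"
```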
