
navsim's Introduction

Data-Driven Non-Reactive Autonomous Vehicle Simulation and Benchmarking

Paper | Supplementary | Talk | 2024 Challenge


NAVSIM: Data-Driven Non-Reactive Autonomous Vehicle Simulation and Benchmarking

Daniel Dauner1,2, Marcel Hallgarten1,5, Tianyu Li3, Xinshuo Weng4, Zhiyu Huang4,6, Zetong Yang3,
Hongyang Li3, Igor Gilitschenski7,8, Boris Ivanovic4, Marco Pavone4,9, Andreas Geiger1,2, and Kashyap Chitta1,2

1University of Tübingen, 2Tübingen AI Center, 3OpenDriveLab at Shanghai AI Lab, 4NVIDIA Research
5Robert Bosch GmbH, 6Nanyang Technological University, 7University of Toronto, 8Vector Institute, 9Stanford University


Highlights

🔥 NAVSIM gathers simulation-based metrics (such as progress and time to collision) for end-to-end driving by unrolling simplified bird's eye view abstractions of scenes for a short simulation horizon. It operates under the condition that the policy has no influence on the environment, which enables efficient, open-loop metric computation while being better aligned with closed-loop evaluations than traditional displacement errors.
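
As a rough illustration of the non-reactive setting (a simplified sketch, not the actual NAVSIM metric implementation): the ego plan is rolled out against the logged trajectories of other agents, which do not react to it, and metrics such as a crude time-to-collision can be read off the result.

import numpy as np

def naive_time_to_collision(ego_xy, agents_xy, radius=2.0, dt=0.5):
    """ego_xy: (T, 2) planned ego positions; agents_xy: (N, T, 2) logged agent positions."""
    for t in range(ego_xy.shape[0]):
        # distance from the ego to every (non-reactive) agent at step t
        dists = np.linalg.norm(agents_xy[:, t] - ego_xy[t], axis=-1)
        if np.any(dists < 2 * radius):   # two circles of equal radius overlap
            return t * dt                # seconds until the first collision
    return float("inf")                  # no collision within the horizon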

Table of Contents

  1. Highlights
  2. Getting started
  3. Changelog
  4. License and citation
  5. Other resources

Getting started

(back to top)

Changelog

  • [2024/04/21] NAVSIM v1.0 release (official devkit version for AGC 2024)
  • [2024/04/03] NAVSIM v0.4 release
    • Support for test phase frames of competition
    • Download script for trainval
    • Egostatus MLP Agent and training pipeline
  • [2024/03/25] NAVSIM v0.3 release (official devkit version for warm-up phase)
    • Adds code for Leaderboard submission
  • [2024/03/11] NAVSIM v0.2 release
    • Easier installation and download
    • mini and test data split integration
    • Privileged Human agent
  • [2024/02/20] NAVSIM v0.1 release (initial demo)
    • OpenScene-mini sensor blobs and annotation logs
    • Naive ConstantVelocity agent

(back to top)

License and citation

All assets and code in this repository are under the Apache 2.0 license unless specified otherwise. The datasets (including nuPlan and OpenScene) inherit their own distribution licenses. Please consider citing our paper and project if they help your research.

@article{Dauner2024ARXIV,
    title = {NAVSIM: Data-Driven Non-Reactive Autonomous Vehicle Simulation and Benchmarking},
    author = {Daniel Dauner and Marcel Hallgarten and Tianyu Li and Xinshuo Weng and Zhiyu Huang and Zetong Yang and Hongyang Li and Igor Gilitschenski and Boris Ivanovic and Marco Pavone and Andreas Geiger and Kashyap Chitta},
    journal = {arXiv},
    volume = {2406.15349},
    year = {2024}
} 
@misc{Contributors2024navsim,
    title={NAVSIM: Data-Driven Non-Reactive Autonomous Vehicle Simulation and Benchmarking},
    author={NAVSIM Contributors},
    howpublished={\url{https://github.com/autonomousvision/navsim}},
    year={2024}
} 

(back to top)

Other resources


(back to top)

navsim's People

Contributors

danieldauner, kashyap7x, mh0797


navsim's Issues

Log Pickle Loading Fails

When I run ./run_cv_pdm_score_evaluation.sh, loading the log file fails. I am using the pickle files you provide.

Error while loading data for transfuser

Hello, while attempting to run TransFuser training (using ./run_transfuser_training.sh), I am getting the following error:

Traceback (most recent call last):
  File "/mnt/disks/data/sim2real/CVPR-challenge/navsim/navsim/navsim/planning/script/run_training.py", line 119, in main
    trainer.fit(
  File "/opt/conda/envs/navsim/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 544, in fit
    call._call_and_handle_interrupt(
  File "/opt/conda/envs/navsim/lib/python3.9/site-packages/pytorch_lightning/trainer/call.py", line 43, in _call_and_handle_interrupt
    return trainer.strategy.launcher.launch(trainer_fn, *args, trainer=trainer, **kwargs)
  File "/opt/conda/envs/navsim/lib/python3.9/site-packages/pytorch_lightning/strategies/launchers/subprocess_script.py", line 105, in launch
    return function(*args, **kwargs)
  File "/opt/conda/envs/navsim/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 580, in _fit_impl
    self._run(model, ckpt_path=ckpt_path)
  File "/opt/conda/envs/navsim/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 987, in _run
    results = self._run_stage()
  File "/opt/conda/envs/navsim/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 1033, in _run_stage
    self.fit_loop.run()
  File "/opt/conda/envs/navsim/lib/python3.9/site-packages/pytorch_lightning/loops/fit_loop.py", line 205, in run
    self.advance()
  File "/opt/conda/envs/navsim/lib/python3.9/site-packages/pytorch_lightning/loops/fit_loop.py", line 363, in advance
    self.epoch_loop.run(self._data_fetcher)
  File "/opt/conda/envs/navsim/lib/python3.9/site-packages/pytorch_lightning/loops/training_epoch_loop.py", line 140, in run
    self.advance(data_fetcher)
  File "/opt/conda/envs/navsim/lib/python3.9/site-packages/pytorch_lightning/loops/training_epoch_loop.py", line 212, in advance
    batch, _, __ = next(data_fetcher)
  File "/opt/conda/envs/navsim/lib/python3.9/site-packages/pytorch_lightning/loops/fetchers.py", line 133, in __next__
    batch = super().__next__()
  File "/opt/conda/envs/navsim/lib/python3.9/site-packages/pytorch_lightning/loops/fetchers.py", line 60, in __next__
    batch = next(self.iterator)
  File "/opt/conda/envs/navsim/lib/python3.9/site-packages/pytorch_lightning/utilities/combined_loader.py", line 341, in __next__
    out = next(self._iterator)
  File "/opt/conda/envs/navsim/lib/python3.9/site-packages/pytorch_lightning/utilities/combined_loader.py", line 78, in __next__
    out[i] = next(self.iterators[i])
  File "/opt/conda/envs/navsim/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 635, in __next__
    data = self._next_data()
  File "/opt/conda/envs/navsim/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 679, in _next_data
    data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
  File "/opt/conda/envs/navsim/lib/python3.9/site-packages/torch/utils/data/_utils/fetch.py", line 56, in fetch
    return self.collate_fn(data)
  File "/opt/conda/envs/navsim/lib/python3.9/site-packages/torch/utils/data/_utils/collate.py", line 267, in default_collate
    return collate(batch, collate_fn_map=default_collate_fn_map)
  File "/opt/conda/envs/navsim/lib/python3.9/site-packages/torch/utils/data/_utils/collate.py", line 142, in collate
    return [collate(samples, collate_fn_map=collate_fn_map) for samples in transposed]  # Backwards compatibility.
  File "/opt/conda/envs/navsim/lib/python3.9/site-packages/torch/utils/data/_utils/collate.py", line 142, in <listcomp>
    return [collate(samples, collate_fn_map=collate_fn_map) for samples in transposed]  # Backwards compatibility.
  File "/opt/conda/envs/navsim/lib/python3.9/site-packages/torch/utils/data/_utils/collate.py", line 127, in collate
    return elem_type({key: collate([d[key] for d in batch], collate_fn_map=collate_fn_map) for key in elem})
  File "/opt/conda/envs/navsim/lib/python3.9/site-packages/torch/utils/data/_utils/collate.py", line 127, in <dictcomp>
    return elem_type({key: collate([d[key] for d in batch], collate_fn_map=collate_fn_map) for key in elem})
  File "/opt/conda/envs/navsim/lib/python3.9/site-packages/torch/utils/data/_utils/collate.py", line 119, in collate
    return collate_fn_map[elem_type](batch, collate_fn_map=collate_fn_map)
  File "/opt/conda/envs/navsim/lib/python3.9/site-packages/torch/utils/data/_utils/collate.py", line 164, in collate_tensor_fn
    return torch.stack(batch, 0, out=out)
RuntimeError: stack expects each tensor to be equal size, but got [3, 256, 1024] at entry 0 and [1, 256, 1024] at entry 2

I was previously getting an error stating "RuntimeError: Trying to resize storage that is not resizable" when I had num_workers set to 4 with a prefetch factor of 2, so I changed them to 0 and None. Is there a fix you could suggest for this?
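
For what it's worth, the mismatch itself ([3, 256, 1024] vs. [1, 256, 1024]) suggests one cached image ended up single-channel. A rough, hypothetical way to locate the offending samples (the key name and item structure are my assumptions, not the devkit's API):

def find_bad_samples(dataset, key="camera_feature", expected_channels=3):
    bad_indices = []
    for idx in range(len(dataset)):
        features, _ = dataset[idx]                     # assumes (features, targets) items
        if features[key].shape[0] != expected_channels:
            bad_indices.append(idx)                    # sample whose image is not 3-channel
    return bad_indices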

Questions about Navtrain Dataset and warmup_test

Hi everyone, I noticed that there was an update to the navsim repository including the dataset, a new baseline, and visualizations; thanks for the great work. I have a question about the navtrain dataset: when I downloaded it, there seemed to be no current_split_5 and history_split_5, even though they are included in the download script. Did I miss something, or does this data not exist? Also, there is no corresponding pkl file, which means we can't use it for training; will it be provided later?


Another question: previously I split the mini set into train, val, and test for small-scale testing, and uploaded the trained model to the warm-up track. But I noticed you said that warmup_test_e2e is a scene filter for the mini split. So may I ask which part of the mini set you use for the warm-up track?

Thanks a lot for answering my question!

About required environment variables

Hi! Thank you for this incredible work! I'd like to clarify the proper method of setting up the necessary environment variables according to your README documentation located here: https://github.com/autonomousvision/navsim/blob/main/docs/install.md#2-install-the-navsim-devkit.

After following your guide to prepare the dataset and code, I have reached the stage where I need to define the paths for the environment variables. However, I am uncertain about the specific directories to use for each of the following:

export NAVSIM_DEVKIT_ROOT=/path/to/navsim/devkit  # Should this be the path to the root directory of my navsim installation?
export NUPLAN_EXP_ROOT=/path/to/navsim/exp # Is this the path pointing to the 'navsim_logs' folder that I've downloaded?
export NUPLAN_MAPS_ROOT=/path/to/nuplan/maps # The path to the downloaded 'nuplan-maps-v1.0' folder?
export OPENSCENE_DATA_ROOT=/path/to/openscene # Do I need to acquire a separate dataset specifically for this variable?

Your guidance in specifying these paths would be greatly appreciated!
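
As a small sanity check on my side (my own sketch, not from the install docs), something like the following verifies that each variable is set and points to an existing directory; note that the experiment folder may legitimately not exist until the first run creates it:

import os

REQUIRED_VARS = ("NAVSIM_DEVKIT_ROOT", "NUPLAN_EXP_ROOT", "NUPLAN_MAPS_ROOT", "OPENSCENE_DATA_ROOT")

for var in REQUIRED_VARS:
    path = os.environ.get(var)
    status = "set, path exists" if path and os.path.isdir(path) else "missing or not a directory"
    print(f"{var}: {path} ({status})")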

Question about driving direction compliance

Hi! I find this line of code calculating the Driving Direction Compliance metric a bit confusing:

oncoming_progress_over_horizon = np.array(
    [
        sum(oncoming_progress[max(0, time_idx - horizon) : time_idx + 1])
        for time_idx in range(len(oncoming_progress))
    ],
    dtype=np.float64,
)
It seems oncoming_progress is of shape [num_proposals, 1 + num_poses], but you slice its first dimension with time indices and horizons. So are you summing over different proposals? I don't understand; please help me, thanks!
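
For reference, my reading (an assumption on my part, not a confirmed answer) is that the comprehension is applied per proposal to a 1-D per-time-step array, in which case it is just a trailing rolling-window sum over time rather than a sum over proposals. A small self-contained equivalent:

import numpy as np

oncoming_progress = np.array([0.0, 0.2, 0.0, 0.5, 0.1])   # hypothetical per-time-step values
horizon = 2

# trailing window sum: for each t, sum the entries from max(0, t - horizon) to t inclusive
rolling_sum = np.convolve(oncoming_progress, np.ones(horizon + 1), mode="full")[: len(oncoming_progress)]
print(rolling_sum)   # [0.0, 0.2, 0.2, 0.7, 0.6]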

About run_metric_caching.sh

In this file, the cache path is written like this:
cache.cache_path=/home/aah1si/openscene/exp/public_test_metric_cache
I think it should use the exported environment variables instead of /home/aah1si.
So wouldn't this be better?
cache.cache_path=$NAVSIM_EXP_ROOT/metric_cache

Aligning lidar points' coordinates across different frames?

Currently, I am using the ego2global transformation matrix to warp LiDAR points from the history to the current key frame. If this matrix isn't available in the final test set, it might be difficult to do so. Can you provide the transformation matrices between different frames for the test set?
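
To illustrate what I mean, this is the kind of warping I am doing (a sketch under the assumption that ego2global is a 4x4 homogeneous matrix mapping ego-frame points into the global frame):

import numpy as np

def warp_to_current_frame(points_prev, ego2global_prev, ego2global_curr):
    """points_prev: (N, 3) lidar points in the ego frame of a past key frame."""
    points_h = np.concatenate([points_prev, np.ones((len(points_prev), 1))], axis=1)  # (N, 4)
    prev_to_curr = np.linalg.inv(ego2global_curr) @ ego2global_prev                   # 4x4
    return (prev_to_curr @ points_h.T).T[:, :3]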

About download_navtrain.sh

Hi, I would like to download the navtrain dataset by running download_navtrain.sh.
However, I got the following output while downloading:

mv: cannot move 'history_split_1/2021.06.09.14.03.17_veh-12_00159_00283' to 'trainval_sensor_blobs/trainval/2021.06.09.14.03.17_veh-12_00159_00283': Directory not empty

Note: This output was obtained for all the data in the history_split_1 directory. The one shown is just an example.

Upon checking the download logs, it seems that the data included in history_split may already have been obtained when downloading current_split.
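
As a possible workaround on my side (a hypothetical sketch, not a change to the download script), merging the already-extracted split folders instead of mv-ing them avoids the "Directory not empty" failure; the paths below are illustrative only:

import shutil
from pathlib import Path

src_root = Path("history_split_1")                  # illustrative path
dst_root = Path("trainval_sensor_blobs/trainval")   # illustrative path

for log_dir in src_root.iterdir():
    if log_dir.is_dir():
        # copytree with dirs_exist_ok=True merges into an existing directory tree
        shutil.copytree(log_dir, dst_root / log_dir.name, dirs_exist_ok=True)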

Question about the private test set

Hi, I've downloaded the private test set from https://huggingface.co/datasets/OpenDriveLab/OpenScene/resolve/main/openscene-v1.1/openscene_metadata_private_test_e2e.tgz and https://huggingface.co/datasets/OpenDriveLab/OpenScene/resolve/main/openscene-v1.1/openscene_sensor_private_test_e2e.tgz according to your script. It seems that this private set is mainly based on the previous test set from https://huggingface.co/datasets/OpenDriveLab/OpenScene/resolve/main/openscene-v1.1/openscene_metadata_test.tgz?

The previous test set contains annotations such as bounding boxes and the HD map according to OpenScene. How can the integrity of the challenge be maintained if this data is visible to participants?

Test error on private_test_e2e dataset

Hello, I have no problem testing on the mini dataset, but I get an error when testing on the private_test_e2e dataset:
(navsim) tongyy@server:~/TYY/navsim/scripts$ ./evaluation/run_transfuer.sh
2024-05-23 15:49:00,634 INFO {/mnt/datadisk/tongyy/TYY/navsim/navsim/planning/script/builders/worker_pool_builder.py:20} Building WorkerPool...
2024-05-23 15:49:00,770 INFO {/mnt/datadisk/tongyy/TYY/navsim/navsim/planning/utils/multithreading/worker_ray_no_torch.py:51} Not using GPU in ray
2024-05-23 15:49:00,770 INFO {/mnt/datadisk/tongyy/TYY/navsim/navsim/planning/utils/multithreading/worker_ray_no_torch.py:77} Starting ray local!
2024-05-23 15:49:02,660 INFO worker.py:1749 -- Started a local Ray instance.
2024-05-23 15:49:03,740 INFO {/mnt/datadisk/tongyy/.conda/envs/navsim/lib/python3.9/site-packages/nuplan/planning/utils/multithreading/worker_pool.py:101} Worker: RayDistributedNoTorch
2024-05-23 15:49:03,740 INFO {/mnt/datadisk/tongyy/.conda/envs/navsim/lib/python3.9/site-packages/nuplan/planning/utils/multithreading/worker_pool.py:102} Number of nodes: 1
Number of CPUs per node: 136
Number of GPUs per node: 0
Number of threads across all nodes: 136
2024-05-23 15:49:03,740 INFO {/mnt/datadisk/tongyy/TYY/navsim/navsim/planning/script/builders/worker_pool_builder.py:31} Building WorkerPool...DONE!
Loading logs: 0%| | 0/1 [00:00<?, ?it/s]
Error executing job with overrides: ['agent=transfuser_agent', 'agent.checkpoint_path=/mnt/datadisk/tongyy/TYY/navsim/exp/training_transfuser_agent/2024.05.16.20.42.58/lightning_logs/version_0/checkpoints/epoch19.ckpt', 'experiment_name=transfuser_agent_eval', 'split=private_test_e2e']
Traceback (most recent call last):
  File "/mnt/datadisk/tongyy/TYY/navsim/navsim/planning/script/run_pdm_score.py", line 46, in main
    scene_loader = SceneLoader(
  File "/mnt/datadisk/tongyy/TYY/navsim/navsim/common/dataloader.py", line 83, in __init__
    self.scene_frames_dicts = filter_scenes(data_path, scene_filter)
  File "/mnt/datadisk/tongyy/TYY/navsim/navsim/common/dataloader.py", line 50, in filter_scenes
    and len(frame_list[scene_filter.num_history_frames - 1]["roadblock_ids"]) == 0
TypeError: object of type 'NoneType' has no len()

Set the environment variable HYDRA_FULL_ERROR=1 for a complete stack trace.
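
My reading of the failure (an assumption, not an official explanation) is that on the private test split the annotation fields are stripped, so "roadblock_ids" can be None, and len(None) raises the TypeError above. A None-safe check would look something like this hypothetical helper:

def has_empty_route(frame: dict) -> bool:
    roadblock_ids = frame.get("roadblock_ids")
    return roadblock_ids is None or len(roadblock_ids) == 0

print(has_empty_route({"roadblock_ids": None}))     # True
print(has_empty_route({"roadblock_ids": ["123"]}))  # False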

The impact of caching to training

Hi, when I move to the whole dataset and use run_dataset_caching to cache the trainval data, it takes almost 16 hours to cache everything. For features, I currently only use the ego status and front-camera images; I am not sure how long it will take once I move to all the images and all the lidar data. It also means that whenever I want to change my input, I have to cache all the data again. So I want to ask: is there any way to accelerate the caching process, or any way to start training directly instead of caching the data for more than a day (ideally without losing too much performance)? Did I miss something? Below is the script I use to cache the data:

(screenshot of the caching command)

By the way, I have num_workers = 96 and 2 × 24 GB GPUs; is there any way I can make full use of them to accelerate the training process?

Thanks in advance
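
In case it is useful as a point of comparison, one generic alternative to pre-caching (a sketch of a plain PyTorch dataset, not the NAVSIM caching API) is to build features lazily inside __getitem__, trading the one-off caching time for recomputation every epoch:

from torch.utils.data import Dataset

class LazyFeatureDataset(Dataset):
    """Builds features on the fly instead of reading them from a pre-built cache."""

    def __init__(self, samples, build_features):
        self.samples = samples                  # e.g. scene tokens or log references
        self.build_features = build_features    # callable: sample -> (features, targets)

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        # features are computed per item here; nothing is written to disk
        return self.build_features(self.samples[idx])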

How to visualize planning results (as videos)

Thanks for your great benchmark. I can now run run_transfuser.sh on the mini set. I'd like to ask how to visualize the planning results as a video for each scenario. Many thanks if you can help!
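
For reference, the rough approach I was considering (my own sketch, not a NAVSIM utility) is to render frames one by one and stitch them into a video; plot_frame below is a hypothetical callable that draws one scene frame onto a matplotlib Axes:

import imageio.v2 as imageio
import matplotlib.pyplot as plt
import numpy as np

def render_video(frames, plot_frame, out_path="scene.mp4", fps=10):
    images = []
    for frame in frames:
        fig, ax = plt.subplots(figsize=(6, 6))
        plot_frame(ax, frame)                                 # user-supplied drawing code
        fig.canvas.draw()
        rgb = np.asarray(fig.canvas.buffer_rgba())[..., :3]   # drop the alpha channel
        images.append(rgb)
        plt.close(fig)
    imageio.mimsave(out_path, images, fps=fps)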
