
carla_garage's Introduction

CARLA garage


Hidden Biases of End-to-End Driving Models
Bernhard Jaeger, Kashyap Chitta, Andreas Geiger
International Conference on Computer Vision (ICCV), 2023

This repo contains the code for the paper Hidden Biases of End-to-End Driving Models.
We provide clean, configurable code with documentation as well as pre-trained weights with strong performance.
The repository can serve as a good starting point for end-to-end autonomous driving research on CARLA.

Contents

  1. Setup
  2. Pre-Trained Models
  3. Evaluation
  4. Dataset
  5. Data Generation
  6. Training
  7. Additional Documentation
  8. Citation

Setup

Clone the repo, set up CARLA 0.9.10.1, and build the conda environment:

git clone https://github.com/autonomousvision/carla_garage.git
cd carla_garage
chmod +x setup_carla.sh
./setup_carla.sh
conda env create -f environment.yml
conda activate garage

Before running the code, you will need to add the following paths to PYTHONPATH on your system:

export CARLA_ROOT=/path/to/CARLA/root
export WORK_DIR=/path/to/carla_garage
export PYTHONPATH=$PYTHONPATH:${CARLA_ROOT}/PythonAPI
export PYTHONPATH=$PYTHONPATH:${CARLA_ROOT}/PythonAPI/carla
export PYTHONPATH=$PYTHONPATH:$CARLA_ROOT/PythonAPI/carla/dist/carla-0.9.10-py3.7-linux-x86_64.egg
export SCENARIO_RUNNER_ROOT=${WORK_DIR}/scenario_runner
export LEADERBOARD_ROOT=${WORK_DIR}/leaderboard
export PYTHONPATH="${CARLA_ROOT}/PythonAPI/carla/":"${SCENARIO_RUNNER_ROOT}":"${LEADERBOARD_ROOT}":${PYTHONPATH}

You can add these exports to your shell scripts or integrate them directly into your favorite IDE.
E.g. in PyCharm: Settings -> Project -> Python Interpreter -> Show all -> garage (you need to add the interpreter from the existing conda environment first) -> Show Interpreter Paths -> add all the absolute paths above (without the PYTHONPATH variable itself).

Pre-Trained Models

We provide a set of pretrained models here. The models are licensed under CC BY 4.0. These are the final model weights used in the paper; the folder name indicates the benchmark. For the training and validation towns, we provide 3 models corresponding to 3 different training seeds. The naming format is approach_trainingsetting_seed. Each folder contains an args.txt with the training settings in plain text, a config.pickle with all hyperparameters used by the code, and a model_0030.pth with the model weights. Additionally, training logs are available for most models.

Evaluation

To evaluate a model, you need to start a CARLA server:

cd /path/to/CARLA/root
./CarlaUE4.sh -opengl

Afterward, run leaderboard_evaluator_local.py as the main Python file. It is a modified version of the original leaderboard_evaluator.py that contains the configurations used in the benchmarks we consider and additionally provides extra logging functionality.

Set the --agent-config option to a folder containing a config.pickle and model_0030.pth.
Set the --agent to sensor_agent.py.
The --routes option should be set to lav.xml or longest6.xml.
The --scenarios option should be set to eval_scenarios.json for both benchmarks.
Set --checkpoint to /path/to/results/result.json

To evaluate on a benchmark, set the respective environment variable: export BENCHMARK=lav or export BENCHMARK=longest6.
Set export SAVE_PATH=/path/to/results to save additional logs or visualizations.

Models have inference options that can be set via environment variables. For the longest6 model you need to set export UNCERTAINTY_THRESHOLD=0.33, for the LAV model export STOP_CONTROL=1 and for the leaderboard model export DIRECT=0. Other options are correctly set by default.
For an example, you can check out local_evaluation.sh.
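
As a reference, a complete evaluation call could look roughly like the following. This is a minimal sketch assuming the longest6 benchmark and placeholder paths; the exact script location is an assumption, and local_evaluation.sh remains the authoritative example.

export BENCHMARK=longest6
export SAVE_PATH=/path/to/results
export UNCERTAINTY_THRESHOLD=0.33  # inference option for the longest6 model
python ${WORK_DIR}/leaderboard/leaderboard/leaderboard_evaluator_local.py \
  --agent-config /path/to/pretrained_models/longest6/tfpp_all_0 \
  --agent ${WORK_DIR}/team_code/sensor_agent.py \
  --routes ${WORK_DIR}/leaderboard/data/longest6.xml \
  --scenarios ${WORK_DIR}/leaderboard/data/scenarios/eval_scenarios.json \
  --checkpoint /path/to/results/result.json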

After running the evaluation, you need to parse the results file with result_parser.py. It will recompute the metrics (the initial ones are incorrect), compute additional statistics, and optionally visualize infractions as short video clips.

python ${WORK_DIR}/tools/result_parser.py --xml ${WORK_DIR}/leaderboard/data/lav.xml --results /path/to/results --log_dir /path/to/results

The result parser can optionally create short video/gif clips showcasing re-renderings of infractions that happened during evaluation. The code was developed by Luis Winckelmann and Alexander Braun. To use this feature, you need to prepare some map files once (they are too large to upload to GitHub). For that, start a CARLA server on your computer and run prepare_map_data.py. Afterward, you can run the feature by using the --visualize_infractions flag in result_parser.py. The feature requires logs to be available in your results folder, so you need to set export SAVE_PATH=/path/to/results during evaluation.
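
As a rough sketch, the full visualization workflow looks as follows (the location of prepare_map_data.py is an assumption; check the tools folder of the repository):

# One-time map preparation, requires a running CARLA server
python ${WORK_DIR}/tools/prepare_map_data.py
# Evaluate with logging enabled (export SAVE_PATH before evaluation), then re-render infractions
export SAVE_PATH=/path/to/results
python ${WORK_DIR}/tools/result_parser.py --xml ${WORK_DIR}/leaderboard/data/lav.xml \
  --results /path/to/results --log_dir /path/to/results --visualize_infractions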

How to actually evaluate

The instructions above are what you will use to debug the code. Actually evaluating challenging benchmarks such as longest6, which have over 100 long routes, is very slow in practice. Luckily, CARLA evaluations are embarrassingly parallel: each of the 108 routes can be evaluated independently. That means if you have 2 GPUs you can evaluate 2x faster, and if you have 108 GPUs you can evaluate 108x faster, while using the same amount of overall compute. To do that, you need access to a scalable cluster system and some scripts for parallelization. We are using SLURM at our institute. To evaluate a model, we use the script evaluate_routes_slurm.py. It is intended to be run inside a tmux session on an interactive node and will spawn evaluation jobs (up to the number set in max_num_jobs.txt). It also monitors the jobs and resubmits those in which it detects a crash. In the end, the script runs the result parser to aggregate the results. If you are using a different system, you can use this as guidance and write your own script. The CARLA leaderboard benchmarks are the most challenging in the driving scene right now, but if you don't have access to multiple GPUs you might want to use simulators that are less compute intensive for your research. NuPlan is a good option, and our group also provides strong baselines for nuPlan.
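
If you do not have a SLURM cluster, a naive way to exploit this parallelism is to split the route files and run several servers and evaluators side by side. The sketch below is only an illustration under assumptions (pre-split route files, one free GPU per evaluator, simple port offsets); evaluate_routes_slurm.py is the reference implementation.

# Minimal sketch: one CARLA server and one evaluation process per split.
# routes_split_0.xml / routes_split_1.xml are assumed to be pre-split route files.
for i in 0 1; do
  PORT=$((2000 + 100 * i))
  ${CARLA_ROOT}/CarlaUE4.sh -opengl --world-port=${PORT} &
  sleep 60  # give the server time to start
  CUDA_VISIBLE_DEVICES=${i} python ${WORK_DIR}/leaderboard/leaderboard/leaderboard_evaluator_local.py \
    --port ${PORT} --routes /path/to/routes_split_${i}.xml \
    --scenarios ${WORK_DIR}/leaderboard/data/scenarios/eval_scenarios.json \
    --agent ${WORK_DIR}/team_code/sensor_agent.py --agent-config /path/to/model_folder \
    --checkpoint /path/to/results/result_${i}.json &
done
wait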

Dataset

We released the dataset we used to train our final models. The dataset is licensed under CC BY 4.0. You can download it using:

cd /path/to/carla_garage/tools
bash download_data.sh

The script will download the data to /path/to/carla_garage/data. This is also the path you need to set --root_dir to for training. The script downloads and unzips the data with 11 parallel processes. The download is roughly 350 GB (a bit more after unzipping).

Data Generation

Dataset generation is similar to evaluation. You can generate a dataset by changing the --agent option to data_agent.py and the --track option to MAP. In addition, you need to set the following environment flags:

export DATAGEN=1
export BENCHMARK=collection
export CHECKPOINT_ENDPOINT=/path/to/dataset/Routes_{route}_Repetition{repetition}/Dataset_generation_{route}_Repetition{repetition}.json
export SAVE_PATH=/path/to/dataset/Routes_{route}_Repetition{repetition}

Again, generating our dataset with a single computer is too slow, so you should use multiple GPUs. We provide a Python script for SLURM clusters (generate_dataset_slurm.py); it works in the same fashion as the evaluation script. The dataset we used is already available (see the Dataset section above).
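
For a single route, putting the pieces together could look roughly like this. This is a hedged sketch with placeholder paths; the evaluator script location is an assumption, the route and scenario files are taken from the repository layout, and generate_dataset_slurm.py automates this across all routes and repetitions.

export DATAGEN=1
export BENCHMARK=collection
export SAVE_PATH=/path/to/dataset/Routes_Town01_Scenario1_Repetition0
export CHECKPOINT_ENDPOINT=${SAVE_PATH}/Dataset_generation_Town01_Scenario1_Repetition0.json
python ${WORK_DIR}/leaderboard/leaderboard/leaderboard_evaluator_local.py \
  --agent ${WORK_DIR}/team_code/data_agent.py --track MAP \
  --routes ${WORK_DIR}/leaderboard/data/training/routes/s1/Town01_Scenario1.xml \
  --scenarios ${WORK_DIR}/leaderboard/data/training/scenarios/s1/Town01_Scenario1.json \
  --checkpoint ${CHECKPOINT_ENDPOINT}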

Training

Agents are trained via the file train.py. Examples of how to use it are provided for shell and SLURM. You need to activate the garage conda environment before running it. The example scripts first set the relevant environment variables and then launch the training with torchrun, a PyTorch tool that handles multi-GPU training. If you want to debug on a single GPU, simply set --nproc_per_node=1. The training script has many options to configure your training; you can list them with python train.py --help or look through the code. The most important ones are listed below, and an example invocation is sketched after the list:

--id your_model_000 # Name of your experiment
--batch_size 32 # Batch size per GPU
--setting all # Which towns to withhold during training. Use 'all' for leaderboard and longest6 models, '02_05_withheld' for LAV models.
--root_dir /path/to/dataset # Path to the root_dir of your dataset
--logdir /path/to/models # Root dir where the training files will be stored
--use_controller_input_prediction 1 # Whether your model trains with a classification + path prediction head
--use_wp_gru 0 # Whether your model trains with a waypoint head.
--use_discrete_command 1 # Whether to use the navigational command as input to the model
--use_tp 1  # Whether to use the target point as input to your model
--cpu_cores 20 # Total number of cpu cores on your machine
--num_repetitions 3 # How much data to train on (options are 1, 2, 3). 1 corresponds to the 185K frames in Table 5, 3 corresponds to 555K.
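
For reference, a full single-node run with these options might be launched roughly as follows (a sketch with placeholder paths and an assumed 8-GPU node; the provided shell and SLURM examples remain authoritative):

cd /path/to/carla_garage
conda activate garage
torchrun --nnodes=1 --nproc_per_node=8 --max_restarts=0 --rdzv_id=42353467 --rdzv_backend=c10d \
  team_code/train.py --id your_model_000 --batch_size 32 --setting all \
  --root_dir /path/to/dataset --logdir /path/to/models \
  --use_controller_input_prediction 1 --use_wp_gru 0 --use_discrete_command 1 --use_tp 1 \
  --cpu_cores 20 --num_repetitions 3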

Additionally, to do the two-stage training from Table 4, you need the --continue_epoch and --load_file options. You need to train twice. First, train a model with --use_wp_gru 0 and --use_controller_input_prediction 0; this trains only the perception backbone with auxiliary losses. Then, train a second model, setting e.g. --use_controller_input_prediction 1, --continue_epoch 0 and --load_file /path/to/stage1/model_0030.pth. The load_file option is usually used to resume a crashed training, but with --continue_epoch 0 the training starts from scratch, using the pre-trained weights only for initialization.
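
A hedged sketch of that two-stage schedule (the directory layout under --logdir and the GPU count are assumptions):

# Stage 1: pre-train the perception backbone with auxiliary losses only
torchrun --nnodes=1 --nproc_per_node=8 team_code/train.py --id stage1_backbone --setting all \
  --root_dir /path/to/dataset --logdir /path/to/models \
  --use_wp_gru 0 --use_controller_input_prediction 0
# Stage 2: train the driving heads, initialized from the stage-1 weights
torchrun --nnodes=1 --nproc_per_node=8 team_code/train.py --id stage2_driving --setting all \
  --root_dir /path/to/dataset --logdir /path/to/models \
  --use_controller_input_prediction 1 --continue_epoch 0 \
  --load_file /path/to/models/stage1_backbone/model_0030.pth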

Training in PyCharm

You can also run and debug torchrun in PyCharm. To do that you need to set your run/debug configuration as follows:
Set the script path to: /path/to/train.py
Set the interpreter options to:

-m torch.distributed.run --nnodes=1 --nproc_per_node=1 --max_restarts=0 --rdzv_id=123456780 --rdzv_backend=c10d

Training parameters should be set in the Parameters: field and environment variables in Environment Variables:. Additionally, you need to set up the conda environment (and its variables) as described above.

Submitting to the CARLA leaderboard

To submit to the CARLA leaderboard, you need Docker installed on your system (as well as the nvidia-container-toolkit to test the image locally). Create the folder team_code/model_ckpt/transfuser. Copy the model.pth files and config.pickle that you want to evaluate into team_code/model_ckpt/transfuser. If you want to evaluate an ensemble, simply copy multiple .pth files into the folder; the code will load all of them and ensemble the predictions. Edit the environment paths at the top of tools/make_docker.sh and then:

cd tools
./make_docker.sh

The script will create a docker image with the name transfuser-agent. Before submitting, you should test your image locally. To do that, start up a CARLA server on your computer (it will be able to communicate with the docker container via ports). Then start your docker container; an example is provided in run_docker.sh. Inside the docker container, start your agent using:

cd leaderboard
cd scripts
bash run_evaluation.sh

You can stop the evaluation, after confirming that there is no issue, using Ctrl + C.
To submit, follow the instructions on the leaderboard to make an account and install alpha.

alpha login
alpha benchmark:submit  --split 3 transfuser-agent:latest

The command will upload the docker image to the cloud and evaluate it.

Additional Documentation

  • Coordinate systems in CARLA repositories are usually a big mess. In this project, we addressed this by changing all data into a unified coordinate frame. Further information about the coordinate system can be found here.

  • The TransFuser model family has grown quite a lot with different variants, which can be confusing for new community members. The history file explains the different versions and which paper you should cite to refer to them.

  • Building a full autonomous driving stack involves quite some engineering. The documentation explains some of the techniques and design philosophies we used in this project.

  • The codebase can run any experiment presented in the paper. It also supports some additional features that we did not end up using. They are documented here.

Contact

If you have any questions or suggestions, please feel free to open an issue or contact us at [email protected].

Citation

If you find CARLA garage useful, please consider giving us a star 🌟 and citing our paper with the following BibTeX entry.

@InProceedings{Jaeger2023ICCV,
  title={Hidden Biases of End-to-End Driving Models},
  author={Bernhard Jaeger and Kashyap Chitta and Andreas Geiger},
  booktitle={Proc. of the IEEE International Conf. on Computer Vision (ICCV)},
  year={2023}
}

Acknowledgements

Open-source code like this is built on the shoulders of many other open-source repositories. In particular, we would like to thank the following repositories for their contributions:

We also thank the creators of the numerous pip libraries we use. Complex projects like this would not be feasible without your contribution.


carla_garage's Issues

Clarification: Command

In the dataset (measurement.json) and in transfuser_utils.py, "command" is used. There are 6 categories: [0, 1, 2, 3, 4, 5].

Can anyone help answer some clarifying questions?

  1. Does the "command" here correspond to the command given for turns? Similar to End-to-end Driving via Conditional Imitation Learning by Codevilla et al. ?
  2. What are the 6 categories for the command? Which does each of these classes ([0, 1, 2, 3, 4, 5]) correspond to?
  3. Is "target_point" used in conjunction with "command"? (assuming "command" serves a similar function to that in Codevilla CIL paper) If so, why? Is that not redundant information?
  4. How is the "command" labeled/generated in the dataset?

Thanks!

bbox position for red light

Hello, when the car approaches the junction, the red light is far away from the ego vehicle, but the red bbox for the red light is right in front of the ego vehicle. I couldn't find the reason in the code.
Can you please explain why? Thank you!

Training Time

Thank you for your team's work. An issue in the transfuser repository mentioned that the model only needs to be trained for 1 day with 8 2080 Ti GPUs, so I thought that training transfuser++, with its small architectural changes, would take about the same time. However, when I use an A100 (40 GB) to train the NC model, it seems to take 6 days. I want to know the specific training time of transfuser++; it doesn't seem to be mentioned in the paper.

RuntimeError: Trying to resize storage that is not resizable

Hi, I downloaded the dataset using the script provided and was trying to reproduce the results by training the model. While training, an error occurred, and I am not sure what is causing this. Would really appreciate the help.

Root Cause (first observed failure):
[0]:
  time      : 2023-10-12_15:09:48
  host      : scc-204.scc.bu.edu
  rank      : 2 (local_rank: 2)
  exitcode  : 1 (pid: 1566171)
  error_file: /scratch/1977161.1.academic-gpu/torchelastic_fzaatik1/42353467_ccys4nyt/attempt_0/2/error.json
  traceback : Traceback (most recent call last):
    File "/projectnb/rlvn/students/hipatil/miniconda3/envs/garage/lib/python3.7/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 345, in wrapper
      return f(*args, **kwargs)
    File "train.py", line 625, in main
      trainer.train(epoch)
    File "train.py", line 884, in train
      for i, data in enumerate(tqdm(self.dataloader_train, disable=self.rank != 0, ascii=True, desc=f"Epoch: {epoch}")):
    File "/projectnb/rlvn/students/hipatil/miniconda3/envs/garage/lib/python3.7/site-packages/tqdm/std.py", line 1183, in __iter__
      for obj in iterable:
    File "/projectnb/rlvn/students/hipatil/miniconda3/envs/garage/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 681, in __next__
      data = self._next_data()
    File "/projectnb/rlvn/students/hipatil/miniconda3/envs/garage/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 1356, in _next_data
      return self._process_data(data)
    File "/projectnb/rlvn/students/hipatil/miniconda3/envs/garage/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 1402, in _process_data
      data.reraise()
    File "/projectnb/rlvn/students/hipatil/miniconda3/envs/garage/lib/python3.7/site-packages/torch/_utils.py", line 461, in reraise
      raise exception
  RuntimeError: Caught RuntimeError in DataLoader worker process 5.
  Original Traceback (most recent call last):
    File "/projectnb/rlvn/students/hipatil/miniconda3/envs/garage/lib/python3.7/site-packages/torch/utils/data/_utils/worker.py", line 302, in _worker_loop
      data = fetcher.fetch(index)
    File "/projectnb/rlvn/students/hipatil/miniconda3/envs/garage/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 52, in fetch
      return self.collate_fn(data)
    File "/projectnb/rlvn/students/hipatil/miniconda3/envs/garage/lib/python3.7/site-packages/torch/utils/data/_utils/collate.py", line 160, in default_collate
      return elem_type({key: default_collate([d[key] for d in batch]) for key in elem})
    File "/projectnb/rlvn/students/hipatil/miniconda3/envs/garage/lib/python3.7/site-packages/torch/utils/data/_utils/collate.py", line 160, in <dictcomp>
      return elem_type({key: default_collate([d[key] for d in batch]) for key in elem})
    File "/projectnb/rlvn/students/hipatil/miniconda3/envs/garage/lib/python3.7/site-packages/torch/utils/data/_utils/collate.py", line 149, in default_collate
      return default_collate([torch.as_tensor(b) for b in batch])
    File "/projectnb/rlvn/students/hipatil/miniconda3/envs/garage/lib/python3.7/site-packages/torch/utils/data/_utils/collate.py", line 140, in default_collate
      out = elem.new(storage).resize_(len(batch), *list(elem.size()))
  RuntimeError: Trying to resize storage that is not resizable

Is bb_detected_in_front_of_vehicle in sensor_agent.py used?

Hello, I am wondering if bb_detected_in_front_of_vehicle is used in sensor_agent.py. If not, is the agent controlled entirely by the model output, without considering the red light and vehicle bounding boxes detected by the model?
I found some red light infractions and vehicle collisions with my reproduced model.
Thank you.

Missing point cloud in dataset

Hi, I used the script provided to download the dataset and run the training code. There seems to be one point cloud missing in the data though, it's pretty hard for me to pinpoint which one:

$ torchrun --nnodes=1 --nproc_per_node=2 --max_restarts=1 --rdzv_id=42353467 --rdzv_backend=c10d train.py --id train_id_000 --batch_size 8 --setting 02_05_withheld --root_dir /media/ssd/users/yasasa/data2/ --logdir ./ --use_tp 1 --continue_epoch 1 --cpu_cores 4 --num_repetitions 3 --backbone aim --use_semantic 0 --use_depth 0 --use_semantic 0 --use_bev_semantic 0 --detect_boxes 0
WARNING:torch.distributed.run:
*****************************************
Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
*****************************************
Start method of multiprocessing: fork
Start method of multiprocessing: fork
RANK, LOCAL_RANK and WORLD_SIZE in environ: 0/0/2
RANK, LOCAL_RANK and WORLD_SIZE in environ: 1/1/2
Rank: 1 Device: cuda:1 Num GPUs on node: 3 Num CPUs on node: 4 Num workers: 1
Setting:  02_05_withheld
Rank: 0 Device: cuda:0 Num GPUs on node: 3 Num CPUs on node: 4 Num workers: 1
Setting:  02_05_withheld
100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 159/159 [00:11<00:00, 14.00it/s]
Loading 541944 lidars from 159 folders
Total amount of routes: 6919
Crashed routes: 1
Perfect routes: 6538
100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 51/51 [00:04<00:00, 12.68it/s]
0it [00:00, ?it/s]
Loading 0 lidars from 0 folders
Total amount of routes: 0
Crashed routes: 0
Perfect routes: 0
Loading 208536 lidars from 51 folders
Total amount of routes: 2202
Crashed routes: 0
Perfect routes: 2142
Target speed weights:  [0.866605263873406, 7.4527377240841775, 1.2281629310898465, 0.5269622904065803]
Angle weights:  [204.25901201602136, 7.554315623148331, 0.21388916461734406, 5.476446162657503, 207.86684782608697]
0it [00:00, ?it/s]
Loading 0 lidars from 0 folders
Total amount of routes: 0
Crashed routes: 0
Perfect routes: 0
Total trainable parameters:  27949776
Adjusting learning rate of group 0 to 3.0000e-04.
Adjusting learning rate of group 0 to 3.0000e-04.
 86%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–                       | 29105/33871 [2:37:26<25:46,  3.08it/s]
Traceback (most recent call last):
  File "train.py", line 1019, in <module>
    main()
  File "/home/yasasa/miniconda3/envs/garage/lib/python3.7/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 345, in wrapper
    return f(*args, **kwargs)
  File "train.py", line 624, in main
    trainer.train()
  File "train.py", line 883, in train
    for i, data in enumerate(tqdm(self.dataloader_train, disable=self.rank != 0)):
  File "/home/yasasa/miniconda3/envs/garage/lib/python3.7/site-packages/tqdm/std.py", line 1195, in __iter__
    for obj in iterable:
  File "/home/yasasa/miniconda3/envs/garage/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 681, in __next__
    data = self._next_data()
  File "/home/yasasa/miniconda3/envs/garage/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 1376, in _next_data
    return self._process_data(data)
  File "/home/yasasa/miniconda3/envs/garage/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 1402, in _process_data
    data.reraise()
  File "/home/yasasa/miniconda3/envs/garage/lib/python3.7/site-packages/torch/_utils.py", line 461, in reraise
    raise exception
laspy.errors.LaspyException: Caught LaspyException in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/home/yasasa/miniconda3/envs/garage/lib/python3.7/site-packages/torch/utils/data/_utils/worker.py", line 302, in _worker_loop
    data = fetcher.fetch(index)
  File "/home/yasasa/miniconda3/envs/garage/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 49, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/yasasa/miniconda3/envs/garage/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 49, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/yasasa/carla_garage/team_code/data.py", line 365, in __getitem__
    las_object = laspy.read(str(lidars[i], encoding='utf-8'))
  File "/home/yasasa/miniconda3/envs/garage/lib/python3.7/site-packages/laspy/lib.py", line 186, in read_las
    with open_las(source, closefd=closefd, laz_backend=laz_backend) as reader:
  File "/home/yasasa/miniconda3/envs/garage/lib/python3.7/site-packages/laspy/lib.py", line 117, in open_las
    return LasReader(stream, closefd=closefd, laz_backend=laz_backend)
  File "/home/yasasa/miniconda3/envs/garage/lib/python3.7/site-packages/laspy/lasreader.py", line 49, in __init__
    self.header = LasHeader.read_from(source)
  File "/home/yasasa/miniconda3/envs/garage/lib/python3.7/site-packages/laspy/header.py", line 525, in read_from
    stream = io.BytesIO(cls._prefetch_header_data(stream))
  File "/home/yasasa/miniconda3/envs/garage/lib/python3.7/site-packages/laspy/header.py", line 820, in _prefetch_header_data
    raise LaspyException(f"Source is empty")
laspy.errors.LaspyException: Source is empty

About docker configuration.

Hi authors, Thank you for your contribution!
I noticed that you have also been running the CARLA leaderboard benchmark recently. Congratulations on improving your score again. I have a question about the docker configuration:
When I run "make_docker.sh" using the "Dockerfile.master" you provided for the T-PAMI 22 code, it throws an error: ERROR: failed to solve: nvidia/cuda:10.2-cudnn7-devel-ubuntu16.04: docker.io/nvidia/cuda:10.2-cudnn7-devel-ubuntu16.04: not found.
I think it's because the Ubuntu 16.04 image has been removed, but I can't find a solution.

The performance gap between TF++(path + target speed) and TF++ WP(Waypoint).

Hi authors, I would like to discuss with you the two output representations of TF++ and TF++ WP. TF++ presents a novel approach to transforming the output representations of path and target speed into control signals. It solves the problem that the steering value disappears when the vehicle is stopped. I observed that in the Longest6 benchmark, vehicles often stop at corners due to heavy traffic.

In my understanding, the path prediction remains constant, even though the traffic conditions vary. This implies that at each location on the map, the vehicle maintains a fixed steering angle, unaffected by the target speed. However, in practice, a vehicle's steering angle is highly influenced by its speed, particularly at higher speeds. During training and evaluation, the agent encounters diverse traffic conditions, each associated with distinct target speeds.

I noticed that in your paper, you compared the performance of TF++ WP and TF++ on the LAV benchmark and obtained similar results (49 DS and 50 DS), as shown in Table 3 of the paper.
At the same time, I also noticed that in the CARLA leaderboard benchmark, there is a big gap between TF++ and TF++ WP (TF++ DS 52.816, TF++WP DS 61.570, TF++WP Ensemble DS 66.317).

Is it normal for these two methods to have wildly different results on different benchmarks? Also, I am very interested in whether you have evaluated TF++WP on the Longest6 benchmark, and if so, can you tell me the results?

Thank you very much for your exceptional work!

config.pickle file Empty

Hi, first of all, thanks for sharing the code and pre-trained weights of TF++.

I am trying to run the TF++ Sensor Agent with pre-trained weights downloaded from the google drive link given. It seems that the config.pickle file is empty in all the folders. The size of the config.pickle file is 0 bytes. Can you help me with this ?

Thanks in advance !

About data collection

Hi, authors. Thanks for your great work!
I want to collect data using the generate_dataset_slurm.py script. After modifying "code_root" and "carla_root", I ran the script, but it raised an error:

Starting to generate data
Number of jobs: 210
0/5 jobs are running...
Submitting job 0/210: /home/chenzesong/CarlaDataset/hb_dataset_v08_2023_05_10_logs/run_files/job_files/0.sh
/bin/sh: 1: squeue: not found
/bin/sh: 1: sbatch: not found
Traceback (most recent call last):
File "/home/chenzesong/Carla_test/Longest6_2023/carla_garage/generate_dataset_slurm.py", line 375, in
main()
File "/home/chenzesong/Carla_test/Longest6_2023/carla_garage/generate_dataset_slurm.py", line 303, in main
jobid = subprocess.check_output(f'sbatch {job_file}', shell=True).decode('utf-8').strip().rsplit(' ', maxsplit=1)[-1]
File "/home/chenzesong/anaconda3/envs/pytorch/lib/python3.7/subprocess.py", line 411, in check_output
**kwargs).stdout
File "/home/chenzesong/anaconda3/envs/pytorch/lib/python3.7/subprocess.py", line 512, in run
output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command 'sbatch /home/chenzesong/CarlaDataset/hb_dataset_v08_2023_05_10_logs/run_files/job_files/0.sh' returned non-zero exit status 127.

I've tried looking for a solution but haven't found one so far. Can you give me some advice? Thank you very much!

Is it possible to fine-tune or retrain the pretrained models using reinforcement learning? If yes, could you please give some guidelines or references?

I am deeply interested in your work and would like to contribute, because it matches my research field. I successfully ran and evaluated the pretrained models on CARLA 0.9.14 and leaderboard version 2. I was thinking of retraining on top of the pretrained models using RL algorithms such as DQN, PPO, or SAC. I have done some RL-related work before, but never trained on top of a model that was trained with imitation learning.

birdview_map.py issue

Hello, I tried to generate the town12.h5 map. When I ran birdview_map.py, I got the error below.

Traceback (most recent call last):
File "./birds_eye_view/birdview_map.py", line 314, in
dict_masks = MapImage.draw_map_image(world.get_map(), pixels_per_meter)
File "./birds_eye_view/birdview_map.py", line 42, in draw_map_image
road_surface = pygame.Surface((width_in_pixels, width_in_pixels))
pygame.error: Out of memory

Carla version is 0.9.15.

Do you know how to resolve this issue?
Thank you.

Confused about implementation of GlobalRoutePlanner

During the process of debugging the interpolate_trajectory code, I found that there are often some duplicate waypoints (usually corresponding to different RoadOptions) in the interpolated_trace obtained from GlobalRoutePlanner. After reading the implementation of trace_route in GlobalRoutePlanner, I found that this may be because when routes contain multiple sub-routes, the exit_waypoint of the route will be calculated twice. Is this design intentional and helpful for navigation?

Questions about reproducing Interfuser and TCP.

Thank you for your great job!
I noticed in the appendix that you tried to reproduce the results of Interfuser and TCP on the CARLA leaderboard. How did you reproduce them? Did you use their pre-trained weights directly, or use their code to retrain on your dataset?

Why do we need different settings for different benchmarks?

Thanks for such wonderful work and for releasing the corresponding code, datasets, and models.

I do not understand why we need different settings for different benchmarks, as stated in the readme:
'''
Models have inference options that can be set via environment variables. For the longest6 model you need to set export UNCERTAINTY_THRESHOLD=0.33, for the LAV model export STOP_CONTROL=1 and for the leaderboard model export DIRECT=0
'''

Does it mean that we use a different method for each benchmark? Furthermore, does it mean that the method which works well on Longest6 does not work as well on LAV? What are the meaning and the difference between these settings?

Thank you!

RuntimeError: stack expects each tensor to be equal size, but got [256, 256] at entry 0 and [1, 11] at entry 11

Hi, I am trying to reproduce the results by training the model. However, while training, an error occurred, and I am uncertain about the cause. Could you please suggest any possible solutions?

Traceback (most recent call last):
File "train.py", line 1019, in <module>
main()
File "/data/anaconda3/envs/garage/lib/python3.7/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 345, in wrapper
return f(*args, **kwargs)
File "train.py", line 624, in main
trainer.train()
File "train.py", line 883, in train
for i, data in enumerate(tqdm(self.dataloader_train, disable=self.rank != 0)):
File "/data/anaconda3/envs/garage/lib/python3.7/site-packages/tqdm/std.py", line 1183, in __iter__
for obj in iterable:
File "/data/anaconda3/envs/garage/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 681, in __next__
data = self._next_data()
File "/data/anaconda3/envs/garage/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 721, in _next_data
data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
File "/data/anaconda3/envs/garage/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 52, in fetch
return self.collate_fn(data)
File "/data/anaconda3/envs/garage/lib/python3.7/site-packages/torch/utils/data/_utils/collate.py", line 160, in default_collate
return elem_type({key: default_collate([d[key] for d in batch]) for key in elem})
File "/data/anaconda3/envs/garage/lib/python3.7/site-packages/torch/utils/data/_utils/collate.py", line 160, in <dictcomp>
return elem_type({key: default_collate([d[key] for d in batch]) for key in elem})
File "/data/anaconda3/envs/garage/lib/python3.7/site-packages/torch/utils/data/_utils/collate.py", line 149, in default_collate
return default_collate([torch.as_tensor(b) for b in batch])
File "/data/anaconda3/envs/garage/lib/python3.7/site-packages/torch/utils/data/_utils/collate.py", line 141, in default_collate
return torch.stack(batch, 0, out=out)
RuntimeError: stack expects each tensor to be equal size, but got [256, 256] at entry 0 and [1, 11] at entry 11

About reproducing perception PlanT

Hi, love the work from your team.

I would like to re-train perception PlanT from the codebase; however, I found that setting "use_plant = True" in config.py only trains PlanT without the perception part. Is there something that I missed?

Leaderboard 2.0

Is the code compatible with Leaderboard 2.0? The documentation refers to Carla 0.9.10.1, but I see that your team has a Leaderboard 2.0 submission as well.

I'm looking for state-of-the-art AV agents for Carla 0.9.14 to use in our ongoing work on scenario-based testing of AV stacks. It would be great if I can use TF++. Thank you!

How to change camera positions before evaluating

I am trying to evaluate transfuser on a bigger vehicle (carlaCola). If I use the default settings with only the car model changed, I get pretty bad results: the truck wanders off the road and fails to perform basic maneuvers. I think this is because the camera positions were set for a sedan, and I'll have to change them for the truck.

I see some relevant code in scenario_manager_local.py > setup_sensors, but I can't figure out how to change the positions, nor how to visualize what the camera is looking at before going on with it.

And, if someone can comment on whether a model trained using a car autopilot would work on a bigger vehicle or not (I don't see why it wouldn't, given that both can follow a bicycle model), that would be appreciated.

Assert Image_seq_len==1

I notice that image_seq_len is asserted to be 1. I'd like to know whether the authors have tried other values for image_seq_len and what the performance was. I would also like to know: if I want to change image_seq_len, what should I pay attention to?

Question about inference speed

Hi,

I've been running some experiments and noticed something curious. While I measured a model inference speed of around 0.05 seconds on an RTX 4080, there’s an additional latency of about 0.05 seconds during the CARLA world tick. I suspect this is because of the fixed time step set for the CARLA world.

When I increase the framerate of the CARLA simulation to more than 20Hz in leaderboard_evaluator_local.py, the evaluation fails with an error message saying,

"RuntimeError: A sensor took too long to send their data."

This makes me think that all sensors need to send their measurements within 0.05 seconds. I’ve tried adjusting the sensor reading frequency, but that didn’t help.

I’m curious why this requirement exists and would appreciate any insights. Also, I’d love to know if there are any ways to reduce the time taken in the tick function.

Thanks for your help!

# in scenario_manager_local.py -> _tick_scenario() function

        time1 = time.time()
        if self._running and self.get_running_status():
            CarlaDataProvider.get_world().tick(self._timeout)
        time2 = time.time()
        print("Time tick world: ", time2 - time1)

Correct way to test NC conditioned model

I tried running Transfuser++ with use_tp = False and use_discrete_command=True, but the ego-vehicle kept turning right (until it ran into some obstacles) and wouldn't follow the route. Any ideas about this?

Has NC TF been released?

Hello, this is very helpful work. I would like to ask if the published code contains the implementation of the NC conditioned model of TF from the paper. I want to do some extension work. Thank you very much!

Questions about longest6 evaluation


Hello, I had one more question about this if you don't mind.

The 1st question: From my understanding of your answers to previous issues, longest6 sets the vehicle number to the maximum of each town and contains no pedestrians. But in the graph in your paper there is a Ped term. Does this mean you set a fraction of the spawn points to be pedestrians? (If that is true, could you please tell me where this amount is set in the codebase?)

The 2nd question: Comparing Tables 6 and 7 in the paper, I see that the column 'Stop' exists only in Table 7. Does that mean longest6 does not penalize stop sign infractions?

The 3rd question: I wonder if you could release some more detailed statistics for some of your key experiments (such as the .csv file produced by result_parser.py, or the evaluation videos if you saved them)? I am using other simple methods to test on longest6 and I noticed that some of the routes are extremely hard, such as RouteScenario_13, where the agent needs to turn around at a roundabout to reach a target point behind the ego vehicle. I am curious about the performance of your sota method (the 69-point version in the graph) on specific routes.

unable to reproduce correct model

Hello, I trained the model with the config below.

torchrun --nnodes=1 --nproc_per_node=6 --max_restarts=1 --rdzv_id=42353467
--rdzv_backend=c10d ./team_code/train.py --id train_id_002 --batch_size 32
--setting all --root_dir ./data --logdir ./output --use_controller_input_prediction 1
--use_wp_gru 0 --use_discrete_command 1 --use_tp 1 --continue_epoch 1 --cpu_cores 1 --num_repetitions 3 \

I didn't change anything else. The model cannot drive properly in longest6; is there anything I missed in the config?
Thank you.

setup_carla.sh does not find the download files

Hi,

Thank you for an amazing repo! I ran into a problem when installing CARLA with your setup_carla.sh script. I get the following error:

https://carla-releases.s3.eu-west-3.amazonaws.com/Linux/CARLA_0.9.10.1.tar.gz
Resolving carla-releases.s3.eu-west-3.amazonaws.com (carla-releases.s3.eu-west-3.amazonaws.com)... 16.12.18.6, 16.12.20.14
Connecting to carla-releases.s3.eu-west-3.amazonaws.com (carla-releases.s3.eu-west-3.amazonaws.com)|16.12.18.6|:443... connected.
HTTP request sent, awaiting response... 404 Not Found
2024-02-29 08:59:44 ERROR 404: Not Found.

https://carla-releases.s3.eu-west-3.amazonaws.com/Linux/AdditionalMaps_0.9.10.1.tar.gz
Resolving carla-releases.s3.eu-west-3.amazonaws.com (carla-releases.s3.eu-west-3.amazonaws.com)... 16.12.18.6, 16.12.20.14
Connecting to carla-releases.s3.eu-west-3.amazonaws.com (carla-releases.s3.eu-west-3.amazonaws.com)|16.12.18.6|:443... connected.
HTTP request sent, awaiting response... 404 Not Found
2024-02-29 08:59:44 ERROR 404: Not Found

Thank you for all help!

training routes for s1 and s3 are same?

Hello, I noticed that the route waypoints and scenario transforms are the same for s1 and s3 (s1 is control loss, s3 is right turn). Could you please explain why?
Thank you.

carla_garage/leaderboard/data/training/routes/s1/Town01_Scenario1.xml
carla_garage/leaderboard/data/training/routes/s3/Town01_Scenario3.xml

carla_garage/leaderboard/data/training/scenarios/s1/Town01_Scenario1.json
carla_garage/leaderboard/data/training/scenarios/s3/Town01_Scenario3.json

I can't evaluate the pretrained model.

(carla15) (base) officepc@officepc-MS-7D90:~/Desktop/CARLA_0.9.15/carla_garage$ python leaderboard_evaluator_local.py --agent-config pretrained_models/lav/aim_02_05_withheld_0 --agent team_code/sensor_agent.py --routes /home/officepc/Desktop/CARLA_0.9.15/carla_garage/leaderboard/data/lav.xml --scenarios /home/officepc/Desktop/CARLA_0.9.15/carla_garage/leaderboard/data/scenarios/eval_scenarios.json
leaderboard_evaluator_local.py:96: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead.
if LooseVersion(dist.version) < LooseVersion('0.9.10'):
Starting new route.
Load and run scenarios.

========= Preparing RouteScenario_0 (repetition 0) =========

Setting up the agent
Uncertainty weighting?: 1
Direct control prediction?: 1
Reduce target speed value by two m/s.
Use stop sign controller: 0
pretrained_models/lav/aim_02_05_withheld_0/model_0030.pth
0it [00:00, ?it/s]
Loading 0 lidars from 0 folders
Total amount of routes: 0
Crashed routes: 0
Perfect routes: 0
0it [00:00, ?it/s]
Loading 0 lidars from 0 folders
Total amount of routes: 0
Crashed routes: 0
Perfect routes: 0
Loading the world

The scenario could not be loaded:

The CARLA server uses the wrong map!This scenario requires to use map Town02

Traceback (most recent call last):
File "leaderboard_evaluator_local.py", line 339, in _load_and_run_scenario
self._load_and_wait_for_world(args, config.town, config.ego_vehicles)
File "leaderboard_evaluator_local.py", line 233, in _load_and_wait_for_world
"This scenario requires to use map {}".format(town))
Exception: The CARLA server uses the wrong map!This scenario requires to use map Town02

Registering the route statistics

ZeroDivisionError: float division by zero

Hi, thank you sincerely for your contribution! I used this repo to test some cases and got the JSON result and checkpoint files, but when I tried to run result_parser.py, it showed:

$ python ${WORK_DIR}/tools/result_parser.py --xml ${WORK_DIR}/leaderboard/data/lav.xml --results /home/wanzhou/Downloads/data2/wanzhou/carla_garage/results_file --log_dir /home/wanzhou/Downloads/data2/wanzhou/carla_garage/results_dir --visualize_infractions
Traceback (most recent call last):
File "/data2/wanzhou/carla_garage/tools/result_parser.py", line 1071, in
main()
File "/data2/wanzhou/carla_garage/tools/result_parser.py", line 998, in main
route_scenarios, unique_ids, unique_infractions = csv_parser.parse(root)
File "/data2/wanzhou/carla_garage/tools/result_parser.py", line 745, in parse
route_evaluation, total_score_labels, total_score_values = self.aggregate_files(route_matching)
File "/data2/wanzhou/carla_garage/tools/result_parser.py", line 527, in aggregate_files
avg_km_h_speed = total_km_driven / total_driven_hours
ZeroDivisionError: float division by zero

There shouldn't be any problems with the test results, but it seems the data wasn't handled correctly. Any insights?

Question about the "route" in measurements file

Hi, thanks for your great work. I'd like to use the code to save the hd_map data. I notice that the route in the measurements seems to be the map information. Can you give me some details? Thanks a lot!

LAV benchmark

Hi! Thanks for your great work!

I have one question for clarification w.r.t. the "LAV" benchmark. In their original work they seem to use the default scenario file provided by CARLA Leaderboard 1.0. If I interpret your code/docs correctly, you are using the scenario file from the Transfuser paper and only the routes from the LAV paper:

The --scenarios option should be set to eval_scenarios.json for both benchmarks.

Is this correct? Are there any reasons for doing so?

The traffic density seems to be the same as in the original LAV paper:

elif os.getenv('BENCHMARK') == 'lav':

Are there any other differences?

Question about transfuser++ dataset (train & validation)

Hello, thank you for your great work first. I'd like to ask the following question about your latest paper transfuser++:

In Tables 6 and 7 you mention that longest6 is used as training towns and LAV is used as validation. Does that mean that you collected the expert data (for imitation) on the routes of longest6?

Also, in Appendix C I see that 'we test our additions to TransFuser by repeating the experiments on the training towns (Longest6) while training with all data.' I wonder what 'all data' means here, compared to the data you used before.

In short, I am somewhat confused about how the data you used is categorized (by town or by route?). I would be grateful if you could make it clearer to me. :)

Some questions about performance gap

Hi Bernhard Jaeger, thanks for releasing the models (.pth files) and datasets. I think this repo can serve as a baseline for fair comparison.

I want to run the TF++ method on the Longest6 benchmark; the expected result is DS: 72, RC: 95, IS: 0.74 according to Table 6 in the paper.

So I used the .pth file from 'pretrained_models/longest6/tfpp_all_0' to run the evaluation, and the result is DS: 57.58, RC: 85.25, IS: 0.694. When using the .pth file from 'pretrained_models/longest6/tfpp_all_1', the result is DS: 62.68, RC: 90.17, IS: 0.7.

These results are all outside the std reported in Table 6. Could you tell me some details about how I can get the correct result on Longest6?

Understand and Visualize the Waypoint Predictions

I am trying to visualize and understand the waypoint predictions from the TF++ model.

I added the following piece of code to team_code/sensor_agent.py to try and visualize the Waypoints and the corresponding control translations via PID:

# Imports needed for this snippet
import cv2
import numpy as np
import matplotlib.pyplot as plt

# Print Controls
print(f"[Model] Throttle: {throttle} | Steer: {steer} | Brake: {brake}")

# Inits
fig = plt.figure()
canvas = fig.canvas
ax = fig.gca()

# Plotting: scatter the predicted waypoints, colored by their index in the sequence
pts = np.squeeze(self.pred_wp.detach().cpu().numpy())
ax.scatter(pts[:, 0], pts[:, 1], cmap='viridis', c=range(pts.shape[0]), s=100)

# Visualization: render the matplotlib figure to an RGB array and show it with OpenCV
ax.axis('off')
canvas.draw()
image_flat = np.frombuffer(canvas.tostring_rgb(), dtype='uint8')
image = image_flat.reshape(*reversed(canvas.get_width_height()), 3)
raw_rgb = tick_data['rgb'].squeeze().permute(1, 2, 0).cpu().numpy()
cv2.imshow('Image', raw_rgb.astype(np.uint8)[:,:,::-1]) # The RGB Input
cv2.imshow('waypoints', image) # The Waypoints Plot
cv2.waitKey(1)

I am trying to visualize the live waypoints either in a BEV space similar to Figure 2(b) in the paper, or in the RGB camera similar to this Video by Wayve.

However, when I plot the raw waypoint values, they don't correspond to the path ahead (as shown in the two images below, where the road is straight, yet the waypoint plots differ completely).

Please explain the raw waypoint predictions of the model and how to interpret/visualize them. Also, if there is any transformation that needs to be done, please share a code snippet for the same.

Thanks!

Screenshot 1:
Screenshot from 2023-10-18 20-32-31

Screenshot 2:
Screenshot from 2023-10-18 20-32-54

Sensor error with PlanT

Hi,
I am running PlanT through the local_evaluation.sh file, using leaderboard and scenario_runner, on a Town02 route. It runs fine on the first route but gives the following error at the start of the second route:

An exception occurred: 'ServerSideSensor' object is not callable
carla_garage/leaderboard/leaderboard/envs/sensor_interface.py(270)get_data():                                                      
raise SensorReceivedNoData("A sensor took too long to send their data")

Could you please let me know how to resolve this issue?

Clarification: Pretrained Leaderboard Model

Hello, thanks for the awe-inspiring work!

I'd be very grateful if you could kindly clarify whether the pretrained leaderboard model refers to TF++ (all towns) as depicted in Table 12.
I really appreciate any help you can provide.

bev_semantics_augmented/0039.png size is 0

The size of the following file is 0, which will cause an issue during training.
s1_dataset_2023_05_10/Routes_Town04_Scenario1_Repetition2/Town04_Scenario1_route49_05_11_16_31_34/bev_semantics_augmented/0039.png
Any chance to update it?
Thank you.
Related issue - #21
