BirdNet+ is a state-of-the-art 3D object detection framework that relies solely on LiDAR data. It is a clear advance over its predecessor, BirdNet, and was developed at the Intelligent Systems Laboratory, Universidad Carlos III de Madrid.
- Built on the Detectron2 framework in PyTorch.
- Removes the entire post-processing stage, relying only on the network to produce 3D predictions, which improves detection accuracy.
You will need the Detectron2 requirements:
- Linux with Python ≥ 3.6
- PyTorch ≥ 1.3
- torchvision that matches the PyTorch installation. You can install them together at pytorch.org to make sure of this.
- OpenCV
- pycocotools:
pip install cython; pip install -U 'git+https://github.com/cocodataset/cocoapi.git#subdirectory=PythonAPI'
Once the above dependencies and gcc & g++ ≥ 5 are installed, run:
git clone https://github.com/AlejandroBarrera/birdnet2
cd birdnet2 && python -m pip install -e .
Please refer to the Detectron2 installation section for more details.
- Add the birdnet2 top-level directory to your PYTHONPATH:
export DETECTRON_ROOT=/path/to/birdnet2
export PYTHONPATH=$PYTHONPATH:$DETECTRON_ROOT
- Download the pre-trained model from here and place it in a new folder named models inside DETECTRON_ROOT (you can change this path via the OUTPUT_DIR field in the configuration file Base-BirdNetPlus.yaml).
- Run python demo/demo_BirdNetPlus.py for an example of how BirdNet+ works.
This demo uses data from the KITTI Vision Benchmark Suite.
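Before launching the demo, you can sanity-check the environment setup above with a small helper. This is a hypothetical snippet, not part of the repository; `root_on_pythonpath` is an illustrative name.

```python
import os

# Hypothetical helper (not part of birdnet2): check that DETECTRON_ROOT
# appears among the PYTHONPATH entries before running the demo or
# training scripts.
def root_on_pythonpath(environ):
    root = environ.get("DETECTRON_ROOT", "")
    entries = environ.get("PYTHONPATH", "").split(os.pathsep)
    return bool(root) and root in entries

# Report on the current shell environment.
print("DETECTRON_ROOT on PYTHONPATH:", root_on_pythonpath(os.environ))
```

If this prints False, re-run the two export commands from the setup step above.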
- Follow steps 1 and 2 from the Quick Start.
- Download the training and validation splits here.
- To train with KITTI object detection dataset:
- Download the dataset.
- Generate BEV KITTI images. The tool that we used can be found at lidar_bev.
- In DETECTRON_ROOT, arrange everything according to the directory tree below (leave the annotations folder empty for now):
.
|-- datasets
| |-- bv_kitti
| | |-- annotations
| | | |-- {train,val} JSON files
| | |-- lists
| | | |-- {train,val} splits
| | |-- image
| | | |-- BEV KITTI {train,val} images
| | |-- label
| | | |-- KITTI {train,val} labels
| | |-- calib
| | | |-- KITTI calibration files
NOTE: In the current version, the label subfolder must contain the original KITTI annotations (label_2).
- Launch python tools/train_net_BirdNetPlus.py --config_file Base-BirdNetPlus with the required parameters set in the configuration file. The annotations are generated automatically.
- For validation, use python tools/val_net_BirdNetPlus.py instead, with as many of its arguments as you need; please review them carefully. For evaluation, we strongly recommend an offline KITTI evaluator such as eval_kitti after obtaining the evaluation annotations.
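The label subfolder holds original KITTI label_2 annotations, whose public format is a fixed 15-field line per object. As a hedged illustration (not the project's own loader), one line can be parsed like this:

```python
# Sketch of a KITTI label_2 line parser (illustrative only; birdnet2 has its
# own annotation pipeline). The standard 15 fields are: type, truncation,
# occlusion, alpha, 2D bbox (4), 3D dimensions h/w/l, 3D location x/y/z in
# camera coordinates, and rotation_y.
def parse_kitti_label(line):
    f = line.split()
    return {
        "type": f[0],
        "truncated": float(f[1]),
        "occluded": int(f[2]),
        "alpha": float(f[3]),
        "bbox": [float(v) for v in f[4:8]],         # left, top, right, bottom (px)
        "dimensions": [float(v) for v in f[8:11]],  # height, width, length (m)
        "location": [float(v) for v in f[11:14]],   # x, y, z in camera frame (m)
        "rotation_y": float(f[14]),
    }

sample = "Car 0.00 0 -1.58 587.01 173.33 614.12 200.12 1.65 1.67 3.64 -0.65 1.71 46.70 -1.59"
print(parse_kitti_label(sample)["type"])  # prints: Car
```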
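The BEV generation step above relies on the external lidar_bev tool. As a simplified, hypothetical sketch of the underlying idea (not the tool's actual encoding, which uses several channels), LiDAR points can be rasterized into a 2D grid, here storing only the maximum height per cell; all grid parameters below are illustrative assumptions.

```python
# Hypothetical BEV rasterization sketch (not the lidar_bev implementation):
# bin each LiDAR point (x forward, y left, z up) into a top-down grid and
# keep the maximum height seen in each cell.
def points_to_bev(points, x_range=(0.0, 40.0), y_range=(-20.0, 20.0), cell=0.1):
    rows = round((x_range[1] - x_range[0]) / cell)
    cols = round((y_range[1] - y_range[0]) / cell)
    bev = [[0.0] * cols for _ in range(rows)]
    for x, y, z in points:
        if x_range[0] <= x < x_range[1] and y_range[0] <= y < y_range[1]:
            r = int((x - x_range[0]) / cell)
            c = int((y - y_range[0]) / cell)
            bev[r][c] = max(bev[r][c], z)
    return bev

# Two nearby points; the taller one dominates its cell.
bev = points_to_bev([(10.0, 0.0, 1.2), (10.05, 0.02, 0.8)])
```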
If you use BirdNet+ in your research, please cite our work using the following BibTeX entry.
@misc{Barrera2020,
  title = {{BirdNet+: End-to-End 3D Object Detection in LiDAR Bird's Eye View}},
  author = {Barrera, Alejandro and Guindel, Carlos and Beltrán, Jorge and García, Fernando},
  eprint = {2003.04188},
  archivePrefix = {arXiv},
  primaryClass = {cs.CV},
  url = {http://arxiv.org/abs/2003.04188},
  year = {2020}
}