Deep Tracklet Feature Representation via Spatial Uncertainty for Multi-Camera Multi-Object Tracking System
- python==3.8
- pytorch==1.8
- pytorch-lightning==1.4.9
- detectron2
- scipy==1.8.1
- opencv-python==4.6.0.66
- imgaug==0.4.0
- pandas==1.1.5
- motmetrics==1.2.5
- filterpy==1.4.5
- cmake==3.22.5
- lapsolver==1.1.0
- torchreid
- others (install whatever is missing)
conda env create -f environment.yaml
Download the following datasets and put them into the corresponding folders.
MCMOT/mcmt/dataset/
MCMOT/PETS09-S2L1/dataset/temp
MCMOT/nlprmct/dataset/temp
MCMOT/EPFL/dataset/temp
MCMOT/campus/dataset/temp
Note that the annotations of PETS09-S2L1, EPFL and CAMPUS are from here.
After downloading the datasets, run the following commands to generate the data used by the code (pedestrian datasets only):
python generate_*_dataset.py
python normalize_cameraid.py
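The repo's scripts handle dataset generation; as a rough illustration of what camera-ID normalization typically involves, the function below is a hypothetical sketch (not the repo's actual code) that maps heterogeneous camera folder names to consecutive integer IDs:

```python
def normalize_camera_ids(camera_names):
    """Map arbitrary camera folder names to consecutive integer IDs.

    Hypothetical sketch: sorting makes the assignment deterministic
    across runs, so separate scripts agree on the same mapping.
    """
    return {name: idx for idx, name in enumerate(sorted(camera_names))}

# e.g. cameras named inconsistently across datasets
mapping = normalize_camera_ids(["View_001", "View_003", "View_002"])
```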
It should be available here.
To train the object detector, download the pretrained weights:
- p300_coco.ckpt: 300 proposal bounding boxes, trained on COCO
- p300_ch17.ckpt: 300 proposal bounding boxes, trained on CrowdHuman + MOT17

Place them under MCMOT/mot-sprcnn-su/weights/
For the pedestrian ReID model:
- osnet_x1_0_MS_D_C.pth

Place it under MCMOT/mot-sprcnn-su/weights/
For the vehicle ReID models:
- resnet101_ibn_a_2.pth
- resnet101_ibn_a_3.pth
- resnext101_ibn_a_2.pth

Place them under MCMOT/mot-sprcnn-su/reid/reid_model/
Trained weights for object detection:
- version8_epoch=39-step=99639.ckpt: for the CityFlowV2 dataset
- p300_ch17.ckpt: for PETS09-S2L1 and EPFL
- v17e39_pets09.ckpt: for PETS09-S2L1
- v20e3_nlprmct.ckpt: for NLPR-MCT
- v22e23_epfl2.ckpt: for EPFL
- v21e11_campus.ckpt: for CAMPUS

Place them under MCMOT/mot-sprcnn-su/weights/
CityFlowV2 and PETS09-S2L1 are used as examples for the following steps.
cd mot-sprcnn-su/
CityFlowV2:
python prepare_coco_train+valid+detrac.py
PETS09-S2L1:
python prepare_ch+pets09.py
If you only want to run detection inference, run the following command:
python prepare_*_videos.py
where * can be pets09, nlprmct, etc.
CityFlowV2:
python train.py --train_json jsons/vehicle_train.json --valid_json jsons/vehicle_validation.json --num_proposals 300
PETS09-S2L1:
python train.py --train_json jsons/pets09_train.json --valid_json jsons/pets09_validation.json --num_proposals 300
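The --train_json / --valid_json files are detection annotation lists. If you ever need to build your own, a minimal COCO-style skeleton is sketched below; this layout is an assumption on my part, so inspect the files under jsons/ for the schema the repo actually expects:

```python
import json

# Minimal COCO-style annotation skeleton (assumed layout; check the
# repo's jsons/ directory for the authoritative schema).
dataset = {
    "images": [
        {"id": 1, "file_name": "c001/000001.jpg", "width": 1280, "height": 960}
    ],
    "annotations": [
        {"id": 1, "image_id": 1, "category_id": 1,
         "bbox": [100, 150, 40, 80],  # x, y, w, h in pixels
         "area": 40 * 80, "iscrowd": 0}
    ],
    "categories": [{"id": 1, "name": "pedestrian"}],
}

with open("toy_train.json", "w") as f:
    json.dump(dataset, f)
```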
Object detection and ReID feature extraction.
CityFlowV2:
python infer_dets.py --videos videos/aic22_valid --ckpt weights/version8_epoch=39-step=99639.ckpt --dets dets/aic22_valid --score_thresh 0.4 --nms_thresh 0.7 --num_proposals 300
python infer_reid_all.py --videos videos/aic22_valid --dets dets/aic22_valid
PETS09-S2L1:
python infer_dets.py --videos videos/pets09 --ckpt weights/p300_ch17.ckpt --dets dets/pets09 --score_thresh 0.4 --nms_thresh 0.7 --num_proposals 300
python infer_osnet.py --videos videos/pets09 --dets dets/pets09
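The --score_thresh and --nms_thresh flags control detection post-processing. A minimal sketch of that filtering step (plain greedy NMS on [x1, y1, x2, y2] boxes, not the repo's implementation):

```python
def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def filter_dets(boxes, scores, score_thresh=0.4, nms_thresh=0.7):
    """Score thresholding followed by greedy NMS (illustrative sketch).

    Returns the kept indices in descending-score order.
    """
    keep = [i for i, s in enumerate(scores) if s >= score_thresh]
    keep.sort(key=lambda i: scores[i], reverse=True)
    out = []
    for i in keep:
        # keep a box only if it overlaps no already-kept box too much
        if all(iou(boxes[i], boxes[j]) <= nms_thresh for j in out):
            out.append(i)
    return out
```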
CityFlowV2:
python track_sct+feature.py --videos videos/aic22_valid --dets dets/aic22_valid --outs outs/aic22_valid
PETS09-S2L1:
python track_sct+feature.py --videos videos/pets09 --dets dets/pets09 --outs outs/pets09
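track_sct+feature.py produces single-camera tracklets together with appearance features. The core association idea, matching detections to existing tracks by ReID cosine similarity, can be sketched as follows; this is a hypothetical simplification using greedy matching only, whereas the repo additionally uses motion cues and spatial uncertainty:

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def associate(track_feats, det_feats, sim_thresh=0.5):
    """Greedily match detections to tracks by appearance similarity.

    Returns {det_index: track_index}; in a full tracker, unmatched
    detections would spawn new tracks. Sketch only.
    """
    pairs = [(cosine(t, d), ti, di)
             for ti, t in enumerate(track_feats)
             for di, d in enumerate(det_feats)]
    pairs.sort(reverse=True)  # most similar pairs first
    matched_t, matched_d, out = set(), set(), {}
    for sim, ti, di in pairs:
        if sim < sim_thresh:
            break
        if ti not in matched_t and di not in matched_d:
            matched_t.add(ti)
            matched_d.add(di)
            out[di] = ti
    return out
```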
CityFlowV2:
cd mcmt/
PETS09-S2L1:
cd PETS09-S2L1
cd pipeline/
python step1_get_img_features_new.py
python step2_mtsc_post_process_new.py
python step3_multi_camera_tracking_new.py
python step4_merge_results_new.py
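step3 links tracklets across cameras. Conceptually, this clusters tracklets whose aggregated appearance features lie close together; the sketch below uses simple threshold-based union-find over Euclidean distances, which is an illustration of the idea rather than the repo's algorithm:

```python
def cluster_tracklets(feats, dist_thresh=0.3):
    """Union-find clustering of tracklet features by Euclidean distance.

    feats: list of (camera_id, feature_vector). Tracklets from the same
    camera are never merged, mirroring the usual multi-camera tracking
    constraint. Illustrative sketch only.
    """
    parent = list(range(len(feats)))

    def find(i):
        # path-halving find
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(len(feats)):
        for j in range(i + 1, len(feats)):
            cam_i, f_i = feats[i]
            cam_j, f_j = feats[j]
            if cam_i == cam_j:
                continue  # same-camera tracklets stay separate
            dist = sum((a - b) ** 2 for a, b in zip(f_i, f_j)) ** 0.5
            if dist < dist_thresh:
                parent[find(i)] = find(j)
    return [find(i) for i in range(len(feats))]
```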
When evaluating for the first time, set up the evaluation tools first:
cd ../eval/
pip install -r requirement.txt
python group_gt.py
Pedestrian Datasets
source step5_eval_test.sh
CityFlowV2 Validation Set
Run one of the following commands to check the result of each scenario.
source step5_eval_validation.sh
source step5_eval_validation_S02.sh
source step5_eval_validation_S05.sh
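The evaluation scripts report the standard MOT metrics (the repo depends on motmetrics for this). For reference, the headline identity score follows the standard IDF1 definition, shown here as a small worked formula:

```python
def idf1(idtp, idfp, idfn):
    """IDF1 = 2*IDTP / (2*IDTP + IDFP + IDFN).

    idtp: identity true positives; idfp / idfn: identity false
    positives / negatives after the optimal ID assignment.
    """
    return 2 * idtp / (2 * idtp + idfp + idfn)

# e.g. 8 identity-correct detections, 2 spurious, 2 missed
score = idf1(8, 2, 2)
```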