FairMOT
A simple baseline for one-shot multi-object tracking.
Abstract
There has been remarkable progress on object detection and re-identification in recent years, which are the core components of multi-object tracking. However, little attention has been paid to accomplishing the two tasks in a single network to improve inference speed. The initial attempts along this path ended up with degraded results, mainly because the re-identification branch is not appropriately learned. In this work, we study the essential reasons behind the failure and accordingly present a simple baseline that addresses the problems. It remarkably outperforms the state of the art on the MOT challenge datasets at 30 FPS. We hope this baseline can inspire and help evaluate new ideas in this field.
News
(2020.09.10) A new version of FairMOT is released! (73.7 MOTA on MOT17)
Main updates
We pretrain FairMOT on the CrowdHuman dataset using a self-supervised learning approach.
To detect bounding boxes that extend outside the image, we replace the two-channel WH head with a four-channel head that predicts the left, top, right and bottom distances.
Tracking performance
Results on MOT challenge test set
| Dataset | MOTA | IDF1 | IDS  | MT    | ML    | FPS  |
|---------|------|------|------|-------|-------|------|
| 2DMOT15 | 60.6 | 64.7 | 591  | 47.6% | 11.0% | 30.5 |
| MOT16   | 74.9 | 72.8 | 1074 | 44.7% | 15.9% | 25.9 |
| MOT17   | 73.7 | 72.3 | 3303 | 43.2% | 17.3% | 25.9 |
| MOT20   | 61.8 | 67.3 | 5243 | 68.8% | 7.6%  | 13.2 |
All of the results are obtained on the MOT challenge evaluation server under the “private detector” protocol. We rank first among all the trackers on 2DMOT15, MOT16, MOT17 and MOT20. The tracking speed of the entire system can reach up to 30 FPS.
Video demos on MOT challenge test set
Installation
Clone this repo; we will refer to the cloned directory as ${FAIRMOT_ROOT}.
Install dependencies. We use Python 3.7 and PyTorch >= 1.2.0.
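A typical environment setup might look like the following (a minimal sketch, assuming a conda environment and that the repo provides a requirements.txt; pick the PyTorch build that matches your CUDA version):
conda create -n FairMOT python=3.7
conda activate FairMOT
# install a PyTorch build >= 1.2.0 that matches your CUDA version (CUDA 10.0 shown as an example)
conda install pytorch==1.2.0 torchvision==0.4.0 cudatoolkit=10.0 -c pytorch
cd ${FAIRMOT_ROOT}
pip install -r requirements.txt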
We use DCNv2 in our backbone network and more details can be found in their repo.
git clone https://github.com/CharlesShang/DCNv2
cd DCNv2
./make.sh
In order to run the code for demos, you also need to install ffmpeg.
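For example, on Ubuntu it can be installed with the system package manager (a generic suggestion, not specific to this repo):
sudo apt-get install ffmpeg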
Data preparation
CrowdHuman
The CrowdHuman dataset can be downloaded from their official webpage. After downloading, you should prepare the data in the following structure:
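A plausible layout is sketched below, assuming the CrowdHuman images and .odgt annotation files sit under one root together with an (initially empty) labels_with_ids folder for the generated labels; check src/gen_labels_crowd.py for the exact paths it expects:
crowdhuman/
    images/
        train/
        val/
    labels_with_ids/      (filled by gen_labels_crowd.py)
        train/
        val/
    annotation_train.odgt
    annotation_val.odgt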
Then, you can change the paths in src/gen_labels_crowd.py and run:
cd src
python gen_labels_crowd.py
MIX
We use the same training data as JDE in this part and we call it “MIX”. Please refer to their DATA ZOO to download and prepare all the training data including Caltech Pedestrian, CityPersons, CUHK-SYSU, PRW, ETHZ, MOT17 and MOT16.
2DMOT15 and MOT20
2DMOT15 and MOT20 can be downloaded from the official webpage of the MOT challenge. After downloading, you should prepare the data in the following structure:
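(A layout mirroring the CrowdHuman one above, with images/train, images/test and a labels_with_ids folder per dataset, is a reasonable assumption; check the label-generation scripts in src/ for the exact paths they expect.)
Pretrained models and baseline model
DLA-34 COCO pretrained model: DLA-34 official. HRNetV2 ImageNet pretrained model: HRNetV2-W18 official, HRNetV2-W32 official. After downloading, you should put the pretrained models in the following structure: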
Our baseline FairMOT model (DLA-34 backbone) is pretrained on CrowdHuman for 60 epochs with the self-supervised learning approach and then trained on the MIX dataset for 30 epochs. The models can be downloaded here:
crowdhuman_dla34.pth [Google] [Baidu, code: ggzx] [Onedrive].
fairmot_dla34.pth [Google] [Baidu, code: uouv] [Onedrive]. (This is the model with which we obtain 73.7 MOTA on the MOT17 test set.)
After downloading, you should put the baseline model in the following structure:
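The tracking and demo commands below load weights from ../models/ relative to src/, so the models are assumed to live in a models/ folder under ${FAIRMOT_ROOT}:
${FAIRMOT_ROOT}/
    models/
        crowdhuman_dla34.pth
        fairmot_dla34.pth
Training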
Change the dataset root directory ‘root’ in src/lib/cfg/data.json and ‘data_dir’ in src/lib/opts.py
Pretrain on CrowdHuman and train on MIX:
sh experiments/crowdhuman_dla34.sh
sh experiments/mix_ft_ch_dla34.sh
Only train on MIX:
sh experiments/mix_dla34.sh
Only train on MOT17:
sh experiments/mot17_dla34.sh
Finetune on 2DMOT15 using the baseline model:
sh experiments/mot15_ft_mix_dla34.sh
Finetune on MOT20 using the baseline model:
sh experiments/mot20_ft_mix_dla34.sh
For the ablation study, we use MIX and half of MOT17 as training data. You can use different backbones such as ResNet, ResNet-FPN, HRNet and DLA:
sh experiments/mix_mot17_half_dla34.sh
sh experiments/mix_mot17_half_hrnet18.sh
sh experiments/mix_mot17_half_res34.sh
sh experiments/mix_mot17_half_res34fpn.sh
sh experiments/mix_mot17_half_res50.sh
Performance on the test set of MOT17 when using different training data:
| Training Data     | MOTA | IDF1 | IDS  |
|-------------------|------|------|------|
| MOT17             | 69.8 | 69.9 | 3996 |
| MIX               | 72.9 | 73.2 | 3345 |
| CrowdHuman + MIX  | 73.7 | 72.3 | 3303 |
Tracking
The default settings run tracking on the validation dataset from 2DMOT15. Using the baseline model, you can run:
cd src
python track.py mot --load_model ../models/fairmot_dla34.pth --conf_thres 0.6
to see the tracking results (76.5 MOTA and 79.3 IDF1 using the baseline model). You can also set save_images=True in src/track.py to save the visualization results of each frame.
For the ablation study, we evaluate on the other half of the MOT17 training set. You can run:
cd src
python track_half.py mot --load_model ../exp/mot/mix_mot17_half_dla34.pth --conf_thres 0.4 --val_mot17 True
If you use our pretrained model ‘mix_mot17_half_dla34.pth’, you can get 69.1 MOTA and 72.8 IDF1.
To get the txt results of the test set of MOT16 or MOT17, you can run:
cd src
python track.py mot --test_mot17 True --load_model ../models/fairmot_dla34.pth --conf_thres 0.4
python track.py mot --test_mot16 True --load_model ../models/fairmot_dla34.pth --conf_thres 0.4
and send the txt files to the MOT challenge evaluation server to get the results. (You can get the SOTA result of 73+ MOTA on the MOT17 test set using the baseline model ‘fairmot_dla34.pth’.)
To get the SOTA results of 2DMOT15 and MOT20, run the tracking code:
cd src
python track.py mot --test_mot15 True --load_model your_mot15_model.pth --conf_thres 0.3
python track.py mot --test_mot20 True --load_model your_mot20_model.pth --conf_thres 0.3
All test-set results need to be evaluated on the MOT challenge server. You can see the tracking results on the training set by setting --val_motxx True and running the tracking code. We set ‘conf_thres’ to 0.4 for MOT16 and MOT17, and to 0.3 for 2DMOT15 and MOT20.
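For example, to track the MOT17 training sequences with the baseline model (an illustrative command following the flag pattern above):
cd src
python track.py mot --val_mot17 True --load_model ../models/fairmot_dla34.pth --conf_thres 0.4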
Demo
You can run src/demo.py on a raw input video to get a demo video in mp4 format:
cd src
python demo.py mot --load_model ../models/fairmot_dla34.pth --conf_thres 0.4
You can change --input-video and --output-root to run the demo on your own videos.
--conf_thres can be set between 0.3 and 0.7 depending on your videos.
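For example (the input and output paths below are placeholders; replace them with your own):
cd src
python demo.py mot --load_model ../models/fairmot_dla34.pth --conf_thres 0.4 --input-video /path/to/your/video.mp4 --output-root ../demos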
Train on custom dataset
You can train FairMOT on a custom dataset by following the steps below:
Generate one txt label file per image. Each line of the label file represents one object, in the format “class id x_center/img_width y_center/img_height w/img_width h/img_height”. You can modify src/gen_labels_16.py to generate label files for your custom dataset (see the sketch after this list).
Generate files containing the image paths. Example files are in src/data/, and similar code can be found in src/gen_labels_crowd.py.
Create a json file for your custom dataset in src/lib/cfg/. You need to specify the “root” and “train” keys in the json file; you can find some examples in src/lib/cfg/ (see the sketch after this list).
Add --data_cfg ‘../src/lib/cfg/your_dataset.json’ when training.
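A minimal sketch of steps 1 and 3, assuming a 1920x1080 image, class 0, identity 7, and hypothetical paths and file names; check the examples in src/data/ and src/lib/cfg/ for the exact keys and paths the code expects:
# Step 1: one label file per image, one object per line (hypothetical paths; mirror wherever your images live).
# A 100x300 px box centered at (960, 540) in a 1920x1080 image becomes:
#   class=0, id=7, cx=960/1920, cy=540/1080, w=100/1920, h=300/1080
mkdir -p /data/your_dataset/labels_with_ids/train
echo "0 7 0.500000 0.500000 0.052083 0.277778" > /data/your_dataset/labels_with_ids/train/img_000001.txt

# Step 3: a dataset config specifying the "root" and "train" keys (hypothetical values).
cat > src/lib/cfg/your_dataset.json << 'EOF'
{
    "root": "/data/your_dataset",
    "train": {
        "your_dataset": "./data/your_dataset.train"
    }
}
EOF
Step 2 then amounts to listing the corresponding image paths, one per line, in a file such as the hypothetical ./data/your_dataset.train above; the example files in src/data/ show the expected form.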
Acknowledgement
A large part of the code is borrowed from Zhongdao/Towards-Realtime-MOT and xingyizhou/CenterNet. Thanks for their wonderful works.
Citation
@article{zhang2020fair,
  title={FairMOT: On the Fairness of Detection and Re-Identification in Multiple Object Tracking},
  author={Zhang, Yifu and Wang, Chunyu and Wang, Xinggang and Zeng, Wenjun and Liu, Wenyu},
  journal={arXiv preprint arXiv:2004.01888},
  year={2020}
}