# Cut and Learn for Unsupervised Image & Video Object Detection and Instance Segmentation
Cut-and-LEaRn (CutLER) is a simple approach for training object detection and instance segmentation models without human annotations.
It outperforms previous SOTA by 2.7 times for AP50 and 2.6 times for AR on 11 benchmarks.
Unsupervised video instance segmentation (VideoCutLER) is also supported. We demonstrate that video instance segmentation models can be learned without using any human annotations, without relying on natural videos (ImageNet data alone is sufficient), and even without motion estimation! The code is available in this repository.
## Features
- We propose the MaskCut approach to generate pseudo-masks for multiple objects in an image.
- CutLER can learn unsupervised object detectors and instance segmentors solely on ImageNet-1K.
- CutLER exhibits strong robustness to domain shifts when evaluated on 11 different benchmarks across domains such as natural images, video frames, paintings, sketches, etc.
- CutLER can serve as a pretrained model for fully/semi-supervised detection and segmentation tasks.
- We also propose VideoCutLER, a surprisingly simple unsupervised video instance segmentation (UVIS) method that does not rely on optical flow. ImageNet-1K is all we need to train a SOTA UVIS model!
### MaskCut Demo
If you want to run MaskCut locally, we provide `demo.py`, which visualizes the pseudo-masks produced by MaskCut.
Run it with:
```
cd maskcut
python demo.py --img-path imgs/demo2.jpg \
  --N 3 --tau 0.15 --vit-arch base --patch-size 8 \
  [--other-options]
```
We provide a few demo images in `maskcut/imgs/`. If you want to run `demo.py` on CPU, simply add `--cpu` when running the demo script.
For `imgs/demo4.jpg`, you need to use `--N 6` to segment all six instances in the image.
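For example, the following invocation (using the same flags as above) segments all six instances in `imgs/demo4.jpg` on CPU:

```
python demo.py --img-path imgs/demo4.jpg \
  --N 6 --tau 0.15 --vit-arch base --patch-size 8 --cpu
```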
Below are some visualizations of the pseudo-masks on the demo images.
### Generating Annotations for ImageNet-1K with MaskCut
To generate pseudo-masks for ImageNet-1K using MaskCut, first set up the ImageNet-1K dataset according to the instructions in datasets/README.md, then run `maskcut.py` as shown below.
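A minimal sketch of the command, assuming `--dataset-path` and `--out-dir` as argument names for the ImageNet train folder and the output annotation directory (check `maskcut.py -h` for the exact interface); the remaining flags are the ones referenced in this section and in the demo above:

```
cd maskcut
# Process a subset of the 1,000 ImageNet folders per run; --job-index selects which chunk.
python maskcut.py \
  --vit-arch base --patch-size 8 \
  --tau 0.15 --fixed-size 480 --N 3 \
  --num-folder-per-job 50 --job-index 0 \
  --dataset-path /path/to/imagenet/train \
  --out-dir /path/to/save/annotations
```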
As generating pseudo-masks for all 1.3 million images in 1,000 folders takes a significant amount of time, we recommend splitting the work across multiple runs. Each run should generate pseudo-masks for a smaller number of image folders, selected via `--num-folder-per-job` and `--job-index`. Once all runs are completed, you can merge all the resulting json files with the following command:
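A matching sketch of the merge step, with `--base-dir` and `--save-path` as illustrative names for the folder holding the per-run json files and the merged output file:

```
cd maskcut
python merge_jsons.py \
  --base-dir /path/to/save/annotations \
  --num-folder-per-job 50 --fixed-size 480 \
  --tau 0.15 --N 3 \
  --save-path imagenet_train_fixsize480_tau0.15_N3.json
```

The `--num-folder-per-job`, `--fixed-size`, `--tau` and `--N` passed to `merge_jsons.py` should match the ones used to run `maskcut.py`. We also provide a submitit script to launch the pseudo-mask generation process over multiple nodes; after that, you can use `merge_jsons.py` to merge all the resulting json files as described above.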
### Inference Demo for CutLER with Pre-trained Models
Pick a model and its config file from the model zoo, for example, `model_zoo/configs/CutLER-ImageNet/cascade_mask_rcnn_R_50_FPN.yaml`.
We provide `demo.py`, which can run inference with the builtin configs. Run it with:
```
cd cutler
python demo/demo.py --config-file model_zoo/configs/CutLER-ImageNet/cascade_mask_rcnn_R_50_FPN_demo.yaml \
  --input demo/imgs/*.jpg \
  [--other-options] \
  --opts MODEL.WEIGHTS /path/to/cutler_w_cascade_checkpoint
```
The configs are made for training, so you need to point `MODEL.WEIGHTS` to a model from the model zoo for evaluation.
This command will run the inference and show visualizations in an OpenCV window.
For details of the command line arguments, see `demo.py -h` or look at its source code to understand its behavior. Some common arguments are:
* To run __on cpu__, add `MODEL.DEVICE cpu` after `--opts`.
* To save outputs to a directory (for images) or a file (for webcam or video), use `--output`.
Below are some visualizations of the model predictions on the demo images.
<p align="center">
<img src="https://www.gitlink.org.cn/api/caracal/CutLER/raw/docs/cutler-demo.jpg?ref=main" width=100%>
</p>
### Unsupervised Model Learning
Before training the detector, it is necessary to use MaskCut to generate pseudo-masks for all ImageNet data.
You can either use the pre-generated json file directly by downloading it from [here](http://dl.fbaipublicfiles.com/cutler/maskcut/imagenet_train_fixsize480_tau0.15_N3.json) and placing it under "DETECTRON2_DATASETS/imagenet/annotations/", or generate your own pseudo-masks by following the instructions in [MaskCut](#1-maskcut).
We provide a script, `train_net.py`, that trains all the configs provided in CutLER.
To train a model with `train_net.py`, first set up the ImageNet-1K dataset following [datasets/README.md](//caracal/CutLER/tree/main/datasets/README.md), then run:
```
cd cutler
export DETECTRON2_DATASETS=/path/to/DETECTRON2_DATASETS/
python train_net.py --num-gpus 8 \
  --config-file model_zoo/configs/CutLER-ImageNet/cascade_mask_rcnn_R_50_FPN.yaml
```
If you want to train a model using multiple nodes, you may need to adjust [some model parameters](https://arxiv.org/abs/1706.02677) and some SBATCH command options in `tools/train-1node.sh` and `tools/single-node_run.sh`, then run:
```
cd cutler
sbatch tools/train-1node.sh \
  --config-file model_zoo/configs/CutLER-ImageNet/cascade_mask_rcnn_R_50_FPN.yaml \
  MODEL.WEIGHTS /path/to/dino/d2format/model \
  OUTPUT_DIR output/
```
You can also convert a pre-trained DINO model to detectron2's format yourself by following [this link](https://github.com/facebookresearch/moco/tree/main/detection).
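Adapting the conversion script from that repository, the call would look roughly like the following (file names are illustrative, and DINO checkpoints may need their state-dict keys remapped before conversion):

```
# Sketch: convert a ResNet-50 checkpoint to detectron2 format using the
# convert-pretrain-to-detectron2.py script from the MoCo detection folder.
python convert-pretrain-to-detectron2.py dino_resnet50_pretrain.pth dino_RN50_d2format.pkl
```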
### Self-training
We further improve performance by self-training the model on its predictions.
First, get the model's predictions on ImageNet by running:
```
python train_net.py --num-gpus 8 \
  --config-file model_zoo/configs/CutLER-ImageNet/cascade_mask_rcnn_R_50_FPN.yaml \
  --test-dataset imagenet_train --eval-only \
  TEST.DETECTIONS_PER_IMAGE 30 \
  MODEL.WEIGHTS output/model_final.pth \
  OUTPUT_DIR output/
# MODEL.WEIGHTS: checkpoint from the previous stage/round; OUTPUT_DIR: path to save model predictions
```
Second, run the following command to generate the json file for the first round of self-training:
```
python tools/get_self_training_ann.py \
  --new-pred output/inference/coco_instances_results.json \
  --prev-ann DETECTRON2_DATASETS/imagenet/annotations/imagenet_train_fixsize480_tau0.15_N3.json \
  --save-path DETECTRON2_DATASETS/imagenet/annotations/cutler_imagenet1k_train_r1.json \
  --threshold 0.7
# --new-pred: model predictions from the previous step; --prev-ann: the old annotation file;
# --save-path: where to save the new annotation file
```
Finally, place `cutler_imagenet1k_train_r1.json` under `DETECTRON2_DATASETS/imagenet/annotations/`, then launch the self-training process:
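```
python train_net.py --num-gpus 8 \
  --config-file model_zoo/configs/CutLER-ImageNet/cascade_mask_rcnn_R_50_FPN_self_train.yaml \
  --train-dataset imagenet_train_r1 \
  MODEL.WEIGHTS output/model_final.pth \
  OUTPUT_DIR output/self-train-r1/
# MODEL.WEIGHTS: checkpoint from the previous stage/round; OUTPUT_DIR: path to save checkpoints
```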
You can repeat the steps above to perform multiple rounds of self-training and adjust some arguments as needed (e.g., "--threshold" for round 1 and 2 can be set to 0.7 and 0.65, respectively; "--train-dataset" for round 1 and 2 can be set to "imagenet_train_r1" and "imagenet_train_r2", respectively; MODEL.WEIGHTS for round 1 and 2 should point to the previous stage/round checkpoints). Ensure that all annotation files are placed under DETECTRON2_DATASETS/imagenet/annotations/.
Please ensure that "--train-dataset", json file names and locations match the ones specified in "cutler/data/datasets/builtin.py".
Please refer to this [instruction](https://detectron2.readthedocs.io/en/latest/tutorials/datasets.html) for guidance on using custom datasets.
You can also directly download the MODEL.WEIGHTS and annotations used for each round of self-training:
<table><tbody>
<!-- START TABLE -->
<!-- TABLE HEADER -->
<th valign="bottom">Self-training round</th>
<th valign="bottom">MODEL.WEIGHTS</th>
<th valign="bottom">Annotations</th>
<!-- TABLE BODY -->
<!-- ROW: round 1 -->
<tr><td align="center">round 1</td>
<td align="center"><a href="http://dl.fbaipublicfiles.com/cutler/checkpoints/cutler_cascade_r1.pth">cutler_cascade_r1.pth</a></td>
<td align="center"><a href="http://dl.fbaipublicfiles.com/cutler/maskcut/cutler_imagenet1k_train_r1.json">cutler_imagenet1k_train_r1.json</a></td>
</tr>
<!-- ROW: round 2 -->
<tr><td align="center">round 2</td>
<td align="center"><a href="http://dl.fbaipublicfiles.com/cutler/checkpoints/cutler_cascade_r2.pth">cutler_cascade_r2.pth</a></td>
<td align="center"><a href="http://dl.fbaipublicfiles.com/cutler/maskcut/cutler_imagenet1k_train_r2.json">cutler_imagenet1k_train_r2.json</a></td>
</tr>
</tbody></table>
### Unsupervised Zero-shot Evaluation
To evaluate a model's performance on 11 different datasets, please refer to [datasets/README.md](//caracal/CutLER/tree/main/datasets/README.md) for instructions on preparing the datasets. Next, select a model from the model zoo, specify the "model_weights", "config_file" and the path to "DETECTRON2_DATASETS" in `tools/eval.sh`, then run the script.
```
bash tools/eval.sh
```
### Model Zoo
We show zero-shot unsupervised object detection performance (AP50 | AR) on 11 different datasets spanning a variety of domains. ^: CutLER using Mask R-CNN as a detector; *: CutLER using Cascade Mask R-CNN as a detector.
<table><tbody>
<!-- START TABLE -->
<!-- TABLE HEADER -->
<th valign="bottom">Methods</th>
<th valign="bottom">Models</th>
<th valign="bottom">COCO</th>
<th valign="bottom">COCO20K</th>
<th valign="bottom">VOC</th>
<th valign="bottom">LVIS</th>
<th valign="bottom">UVO</th>
<th valign="bottom">Clipart</th>
<th valign="bottom">Comic</th>
<th valign="bottom">Watercolor</th>
<th valign="bottom">KITTI</th>
<th valign="bottom">Objects365</th>
<th valign="bottom">OpenImages</th>
<!-- TABLE BODY -->
<tr><td align="center">Prev. SOTA</td>
<td valign="bottom">-</td>
<td align="center">9.6 | 12.6</td>
<td align="center">9.7 | 12.6</td>
<td align="center">15.9 | 21.3</td>
<td align="center">3.8 | 6.4</td>
<td align="center">10.0 | 14.2</td>
<td align="center">7.9 | 15.1</td>
<td align="center">9.9 | 16.3</td>
<td align="center">6.7 | 16.2</td>
<td align="center">7.7 | 7.1</td>
<td align="center">8.1 | 10.2</td>
<td align="center">9.9 | 14.9</td>
</tr>
<!-- ROW: Box/Mask AP for CutLER -->
<tr><td align="center">CutLER^</td>
<td valign="bottom"><a href="http://dl.fbaipublicfiles.com/cutler/checkpoints/cutler_mrcnn_final.pth">download</a></td>
<td align="center">21.1 | 29.6</td>
<td align="center">21.6 | 30.0</td>
<td align="center">36.6 | 41.0</td>
<td align="center">7.7 | 18.7</td>
<td align="center">29.8 | 38.4</td>
<td align="center">20.9 | 38.5</td>
<td align="center">31.2 | 37.1</td>
<td align="center">37.3 | 39.9</td>
<td align="center">15.3 | 25.4</td>
<td align="center">19.5 | 30.0</td>
<td align="center">17.1 | 26.4</td>
</tr>
<!-- ROW: Box/Mask AP for CutLER -->
<tr><td align="center">CutLER*</td>
<td valign="bottom"><a href="http://dl.fbaipublicfiles.com/cutler/checkpoints/cutler_cascade_final.pth">download</a></td>
<td align="center">21.9 | 32.7</td>
<td align="center">22.4 | 33.1</td>
<td align="center">36.9 | 44.3</td>
<td align="center">8.4 | 21.8</td>
<td align="center">31.7 | 42.8</td>
<td align="center">21.1 | 41.3</td>
<td align="center">30.4 | 38.6</td>
<td align="center">37.5 | 44.6</td>
<td align="center">18.4 | 27.5</td>
<td align="center">21.6 | 34.2</td>
<td align="center">17.3 | 29.6</td>
</tr>
</tbody></table>
## Semi-supervised and Fully-supervised Learning
CutLER can also serve as a pretrained model for training fully supervised object detection and instance segmentation models and improves performance on COCO, including on few-shot benchmarks.
### Training & Evaluation in Command Line
You can find all the semi-supervised and fully-supervised learning configs provided in CutLER under `model_zoo/configs/COCO-Semisupervised`.
To train a model using K% of the labels with `train_net.py`, first set up the COCO dataset according to [datasets/README.md](//caracal/CutLER/tree/main/datasets/README.md) and specify the value of K in the config file, then run:
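```
python train_net.py --num-gpus 8 \
  --config-file model_zoo/configs/COCO-Semisupervised/cascade_mask_rcnn_R_50_FPN_{K}perc.yaml \
  MODEL.WEIGHTS /path/to/cutler_pretrained_model
```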
The configs are made for 8-GPU training. To train on 1 GPU, you may need to [change some parameters](https://arxiv.org/abs/1706.02677), e.g. the number of GPUs (`--num-gpus`), the learning rate (`SOLVER.BASE_LR`) and the batch size (`SOLVER.IMS_PER_BATCH`).
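For reference, a single-GPU run could look roughly like this (the learning rate and batch size below follow the usual linear scaling rule and are illustrative, not tuned values):

```
python train_net.py --num-gpus 1 \
  --config-file model_zoo/configs/COCO-Semisupervised/cascade_mask_rcnn_R_50_FPN_{K}perc.yaml \
  MODEL.WEIGHTS /path/to/cutler_pretrained_model \
  SOLVER.IMS_PER_BATCH 2 SOLVER.BASE_LR 0.0025
```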
### Evaluation
To evaluate a model's performance, use:
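```
python train_net.py \
  --config-file model_zoo/configs/COCO-Semisupervised/cascade_mask_rcnn_R_50_FPN_{K}perc.yaml \
  --eval-only MODEL.WEIGHTS /path/to/checkpoint_file
```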
For more options, see `python train_net.py -h`.
### Model Zoo
We fine-tune a Cascade R-CNN model initialized with CutLER or MoCo-v2 on varying amounts of labeled COCO data, and show results (Box | Mask AP) on the val2017 split below:
<table><tbody>
<!-- START TABLE -->
<!-- TABLE HEADER -->
<th valign="bottom">% of labels</th>
<th valign="bottom">1%</th>
<th valign="bottom">2%</th>
<th valign="bottom">5%</th>
<th valign="bottom">10%</th>
<th valign="bottom">20%</th>
<th valign="bottom">30%</th>
<th valign="bottom">40%</th>
<th valign="bottom">50%</th>
<th valign="bottom">60%</th>
<th valign="bottom">80%</th>
<th valign="bottom">100%</th>
<!-- TABLE BODY -->
<!-- ROW: Box/Mask AP for MoCo-v2 -->
<tr><td align="center">MoCo-v2</td>
<td align="center">11.8 | 10.0</td>
<td align="center">16.2 | 13.8</td>
<td align="center">20.5 | 17.8</td>
<td align="center">26.5 | 23.0</td>
<td align="center">32.5 | 28.2</td>
<td align="center">35.5 | 30.8</td>
<td align="center">37.3 | 32.3</td>
<td align="center">38.7 | 33.6</td>
<td align="center">39.9 | 34.6</td>
<td align="center">41.6 | 36.0</td>
<td align="center">42.8 | 37.0</td>
</tr>
<!-- ROW: Box/Mask AP for CutLER -->
<tr><td align="center">CutLER</td>
<td align="center">16.8 | 14.6</td>
<td align="center">21.6 | 18.9</td>
<td align="center">27.8 | 24.3</td>
<td align="center">32.2 | 28.1</td>
<td align="center">36.6 | 31.7</td>
<td align="center">38.2 | 33.3</td>
<td align="center">39.9 | 34.7</td>
<td align="center">41.5 | 35.9</td>
<td align="center">42.3 | 36.7</td>
<td align="center">43.8 | 37.9</td>
<td align="center">44.7 | 38.5</td>
</tr>
<!-- ROW: Model Downloads -->
<tr><td align="center">Download</td>
<td align="center"><a href="http://dl.fbaipublicfiles.com/cutler/checkpoints/cutler_semi_1perc.pth">model</a></td>
<td align="center"><a href="http://dl.fbaipublicfiles.com/cutler/checkpoints/cutler_semi_2perc.pth">model</a></td>
<td align="center"><a href="http://dl.fbaipublicfiles.com/cutler/checkpoints/cutler_semi_5perc.pth">model</a></td>
<td align="center"><a href="http://dl.fbaipublicfiles.com/cutler/checkpoints/cutler_semi_10perc.pth">model</a></td>
<td align="center"><a href="http://dl.fbaipublicfiles.com/cutler/checkpoints/cutler_semi_20perc.pth">model</a></td>
<td align="center"><a href="http://dl.fbaipublicfiles.com/cutler/checkpoints/cutler_semi_30perc.pth">model</a></td>
<td align="center"><a href="http://dl.fbaipublicfiles.com/cutler/checkpoints/cutler_semi_40perc.pth">model</a></td>
<td align="center"><a href="http://dl.fbaipublicfiles.com/cutler/checkpoints/cutler_semi_50perc.pth">model</a></td>
<td align="center"><a href="http://dl.fbaipublicfiles.com/cutler/checkpoints/cutler_semi_60perc.pth">model</a></td>
<td align="center"><a href="http://dl.fbaipublicfiles.com/cutler/checkpoints/cutler_semi_80perc.pth">model</a></td>
<td align="center"><a href="http://dl.fbaipublicfiles.com/cutler/checkpoints/cutler_fully_100perc.pth">model</a></td>
</tr>
</tbody></table>
Both MoCo-v2 and our CutLER are trained for the 1x schedule using Detectron2, except for extremely low-shot settings with 1% or 2% labels. When training with 1% or 2% labels, we train both MoCo-v2 and our model for 3,600 iterations with a batch size of 16.
## License
The majority of CutLER, Detectron2 and DINO are licensed under the [CC-BY-NC license](LICENSE); however, portions of the project are available under separate license terms: TokenCut, Bilateral Solver and CRF are licensed under the MIT license. If you add other third-party code later, please keep this license information updated and note any component licensed under something other than CC-BY-NC, MIT, or CC0.
## Ethical Considerations
CutLER's wide range of detection capabilities may introduce challenges similar to those of many other visual recognition methods.
Since an image can contain arbitrary instances, the content of the input images may affect the model output.
## How to get support from us?
If you have any general questions, feel free to email us at [Xudong Wang](mailto:xdwang@eecs.berkeley.edu), [Ishan Misra](mailto:imisra@meta.com) and [Rohit Girdhar](mailto:rgirdhar@meta.com). If you have code or implementation-related questions, please feel free to email us or open an issue in this codebase (we recommend opening an issue, as your question may help others).
## Citation
If you find our work inspiring or use our codebase in your research, please consider giving a star ⭐ and a citation.
@inproceedings{wang2023cut,
title={Cut and learn for unsupervised object detection and instance segmentation},
author={Wang, Xudong and Girdhar, Rohit and Yu, Stella X and Misra, Ishan},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={3124--3134},
year={2023}
}
@article{wang2023videocutler,
title={VideoCutLER: Surprisingly Simple Unsupervised Video Instance Segmentation},
author={Wang, Xudong and Misra, Ishan and Zeng, Ziyun and Girdhar, Rohit and Darrell, Trevor},
journal={arXiv preprint arXiv:2308.14710},
year={2023}
}