📘Documentation | 🛠️Installation | 👀Model Zoo | 🆕Update News | 🚀Ongoing Projects | 🤔Reporting Issues
English | 简体中文
Introduction
MMDetection3D is an open source object detection toolbox based on PyTorch, towards the next-generation platform for general 3D detection. It is a part of the OpenMMLab project.
The main branch works with PyTorch 1.8+.
Major features
Support multi-modality/single-modality detectors out of the box
It directly supports multi-modality/single-modality detectors including MVXNet, VoteNet, PointPillars, etc.
Support indoor/outdoor 3D detection out of the box
It directly supports popular indoor and outdoor 3D detection datasets, including ScanNet, SUNRGB-D, Waymo, nuScenes, Lyft, and KITTI. For the nuScenes dataset, we also support the nuImages dataset.
Natural integration with 2D detection
All of the 300+ models, methods from 40+ papers, and modules supported in MMDetection can be trained or used in this codebase.
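As a hedged illustration of this integration, an MMDetection3D model config can reference MMDetection modules through MMEngine's scoped registry (the 'mmdet.' prefix). The snippet below is a simplified, non-validated excerpt in that style, not a complete config; all values are illustrative.

```python
# Simplified sketch of an MMDetection3D config that reuses MMDetection modules
# via MMEngine's scoped registry ('mmdet.' prefix). Values are illustrative only.
model = dict(
    type='FCOSMono3D',        # camera-based 3D detector implemented in MMDetection3D
    backbone=dict(
        type='mmdet.ResNet',  # backbone implemented in MMDetection, reused here
        depth=101,
        num_stages=4,
        out_indices=(0, 1, 2, 3),
        norm_cfg=dict(type='BN', requires_grad=False)),
    neck=dict(
        type='mmdet.FPN',     # FPN neck from MMDetection
        in_channels=[256, 512, 1024, 2048],
        out_channels=256,
        num_outs=5))
```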
High efficiency
It trains faster than other codebases. The main results are summarized in benchmark.md, where we compare the number of samples trained per second (the higher, the better); models that are not supported by other codebases are marked with ✗.
Like MMDetection and MMCV, MMDetection3D can also be used as a library to support different projects on top of it.
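As a rough sketch of such library-style usage, single-sample LiDAR inference can go through the high-level helpers in mmdet3d.apis. The config, checkpoint, and point cloud paths below are placeholders, and exact return values may differ across versions, so treat this as a sketch rather than a definitive recipe; see the user guides for details.

```python
# Minimal sketch: using MMDetection3D as a library for LiDAR-based inference.
# The config/checkpoint/point-cloud paths are placeholders; pick real files from the model zoo.
from mmdet3d.apis import init_model, inference_detector

config_file = 'configs/pointpillars/pointpillars_hv_secfpn_8xb6-160e_kitti-3d-car.py'  # placeholder path
checkpoint_file = 'checkpoints/pointpillars_kitti-3d-car.pth'  # placeholder path

# Build the detector from its config and load trained weights.
model = init_model(config_file, checkpoint_file, device='cuda:0')

# Run inference on a single point cloud file (KITTI-style .bin), path adjusted to your data.
result, data = inference_detector(model, 'demo/data/kitti/000008.bin')
print(result.pred_instances_3d)  # predicted 3D boxes, labels and scores
```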
What’s New
Highlight
In version 1.4, MMDetection3D refactors the Waymo dataset and accelerates the preprocessing, training/testing setup, and evaluation of the Waymo dataset. We also extend the support for camera-based 3D object detection models on Waymo, such as monocular and BEV models. A detailed description of the Waymo data information is provided here.
In addition, version 1.4 provides Waymo-mini to help community users get started with Waymo and use it for quick iterative development.
v1.4.0 was released in 8/1/2024
v1.3.0 was released in 18/10/2023
v1.2.0 was released in 4/7/2023
v1.1.1 was released in 30/5/2023
Installation
Please refer to Installation for installation instructions.
Getting Started
For detailed user guides and advanced guides, please refer to our documentation:
User Guides
Advanced Guides
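For a quick impression of the config-driven workflow covered in the user guides, the sketch below builds a training run directly with MMEngine, which is roughly what the provided tools/train.py script does under the hood; the config path and work directory are placeholders.

```python
# Rough sketch of a config-driven training run with MMEngine (the basis of
# MMDetection3D 1.x). The config path and work_dir are placeholders.
from mmengine.config import Config
from mmengine.runner import Runner

cfg = Config.fromfile('configs/pointpillars/pointpillars_hv_secfpn_8xb6-160e_kitti-3d-car.py')  # placeholder
cfg.work_dir = './work_dirs/pointpillars_kitti'  # where logs and checkpoints will be written

runner = Runner.from_cfg(cfg)  # builds the model, datasets, optimizer and hooks from the config
runner.train()
```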
Overview of Benchmark and Model Zoo
Results and models are available in the model zoo.
Note: All of the 500+ models and methods from 90+ papers in 2D detection supported by MMDetection can be trained or used in this codebase.
FAQ
Please refer to FAQ for frequently asked questions.
Contributing
We appreciate all contributions to improve MMDetection3D. Please refer to CONTRIBUTING.md for the contributing guideline.
Acknowledgement
MMDetection3D is an open source project contributed to by researchers and engineers from various colleges and companies. We appreciate all the contributors as well as the users who give valuable feedback. We hope that the toolbox and benchmark can serve the growing research community by providing a flexible toolkit to reimplement existing methods and develop new 3D detectors.
Citation
If you find this project useful in your research, please consider citing:
@misc{mmdet3d2020,
    title={{MMDetection3D: OpenMMLab} next-generation platform for general {3D} object detection},
    author={MMDetection3D Contributors},
    howpublished = {\url{https://github.com/open-mmlab/mmdetection3d}},
    year={2020}
}
License
This project is released under the Apache 2.0 license.
Projects in OpenMMLab