[Fix] bugfix/avoid-runner-iter-in-vis-hook-test-mode (#3596) by Miguel Méndez

Motivation

The current SegVisualizationHook implements the _after_iter method, which is invoked during the validation and testing pipelines. However, when in test_mode, the implementation attempts to access runner.iter. This attribute is defined in the mmengine codebase and is designed to return train_loop.iter. Accessing this property during testing can be problematic, particularly in scenarios where the model is being evaluated post-training, without initiating a training loop. This can lead to a crash if the implementation tries to build a training dataset for which the annotation file is unavailable at the time of evaluation. Thus, it is crucial to avoid relying on this property in test mode.

Modification

To resolve this issue, the proposal is to replace the _after_iter method with after_val_iter and after_test_iter methods, modifying their behavior accordingly. Specifically, when in testing mode, the implementation should utilize a test_index counter instead of accessing runner.iter. This adjustment will circumvent the issue of accessing train_loop.iter during test mode, ensuring the process does not attempt to access or build a training dataset, thereby preventing potential crashes due to missing annotation files.
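
A minimal sketch of what this change could look like is shown below. It assumes a private _test_index counter and elides the actual visualizer call; class, method, and attribute names are illustrative and may not match the exact code merged in the PR.

from mmengine.hooks import Hook


class SegVisualizationHook(Hook):
    """Illustrative sketch only; the hook merged in mmsegmentation may differ."""

    def __init__(self, draw: bool = False, interval: int = 50):
        self.draw = draw
        self.interval = interval
        self._test_index = 0  # local counter used instead of runner.iter in test mode

    def after_val_iter(self, runner, batch_idx, data_batch, outputs):
        # Validation runs inside the training loop, so runner.iter is available
        # and can be combined with batch_idx to obtain the global step.
        if self.draw and self.every_n_inner_iters(batch_idx, self.interval):
            self._visualize(outputs, step=runner.iter + batch_idx)

    def after_test_iter(self, runner, batch_idx, data_batch, outputs):
        # In test mode, touching runner.iter would lazily build the train loop
        # (and its dataset), so rely on a local counter instead.
        if not self.draw:
            return
        for output in outputs:
            self._test_index += 1
            self._visualize([output], step=self._test_index)

    def _visualize(self, outputs, step):
        # Placeholder for the visualizer call that draws predictions.
        ...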

README.md
 

Documentation: https://mmsegmentation.readthedocs.io/en/latest/

English | 简体中文

Introduction

MMSegmentation is an open source semantic segmentation toolbox based on PyTorch. It is a part of the OpenMMLab project.

The main branch works with PyTorch 1.6+.

🎉 Introducing MMSegmentation v1.0.0 🎉

We are thrilled to announce the official release of MMSegmentation’s latest version! For this new release, the main branch serves as the primary branch, while the development branch is dev-1.x. The stable branch for the previous release remains as the 0.x branch. Please note that the master branch will only be maintained for a limited time before being removed. We encourage you to be mindful of branch selection and updates during use. Thank you for your unwavering support and enthusiasm, and let’s work together to make MMSegmentation even more robust and powerful! 💪

MMSegmentation v1.x brings remarkable improvements over the 0.x release, offering a more flexible and feature-packed experience. To utilize the new features in v1.x, we kindly invite you to consult our detailed 📚 migration guide, which will help you seamlessly transition your projects. Your support is invaluable, and we eagerly await your feedback!

[demo image]

Major features

  • Unified Benchmark

    We provide a unified benchmark toolbox for various semantic segmentation methods.

  • Modular Design

    We decompose the semantic segmentation framework into different components and one can easily construct a customized semantic segmentation framework by combining different modules.

  • Support of multiple methods out of the box

    The toolbox directly supports popular and contemporary semantic segmentation frameworks, e.g. PSPNet, DeepLabV3, PSANet, DeepLabV3+, etc.

  • High efficiency

    The training speed is faster than or comparable to other codebases.

What’s New

v1.2.0 was released on 10/12/2023. From 1.1.0 to 1.2.0, we have added or updated the following features:

Highlights

  • Support for the open-vocabulary semantic segmentation algorithm SAN

  • Support the monocular depth estimation task; please refer to VPD and Adabins for more details.

    [depth estimation demo]

  • Add new projects: open-vocabulary semantic segmentation algorithm CAT-Seg and real-time semantic segmentation algorithm PP-MobileSeg

Installation

Please refer to get_started.md for installation and dataset_prepare.md for dataset preparation.

Get Started

Please see Overview for the general introduction of MMSegmentation.

Please see user guides for the basic usage of MMSegmentation. There are also advanced tutorials for an in-depth understanding of mmseg design and implementation.

A Colab tutorial is also provided. You may preview the notebook here or directly run on Colab.

To migrate from MMSegmentation 0.x, please refer to migration.
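
As a quick taste of the Python API before diving into the user guides or the Colab notebook, here is a minimal single-image inference sketch. The config and checkpoint paths are placeholders, and the exact signatures should be verified against the user guides for your installed version.

from mmseg.apis import init_model, inference_model, show_result_pyplot

# Placeholder paths; substitute a real config and its matching checkpoint.
config_file = 'configs/pspnet/pspnet_r50-d8_4xb2-40k_cityscapes-512x1024.py'
checkpoint_file = 'checkpoints/pspnet_cityscapes.pth'

# Build the model from the config and load the weights.
model = init_model(config_file, checkpoint_file, device='cuda:0')

# Run inference on a single image; the result contains the predicted mask.
result = inference_model(model, 'demo/demo.png')

# Overlay the prediction on the input image and save it to disk.
show_result_pyplot(model, 'demo/demo.png', result, out_file='result.png')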

Tutorial

MMSegmentation Tutorials
  • Get Started
  • MMSeg Basic Tutorial
  • MMSeg Detail Tutorial
  • MMSeg Development Tutorial

Benchmark and model zoo

Results and models are available in the model zoo.

  • Overview
  • Supported backbones
  • Supported methods
  • Supported heads
  • Supported datasets
  • Other

Please refer to FAQ for frequently asked questions.

Projects

Here are some implementations of SOTA models and solutions built on MMSegmentation, which are supported and maintained by community users. These projects demonstrate best practices for research and product development based on MMSegmentation. We welcome and appreciate all contributions to the OpenMMLab ecosystem.

Contributing

We appreciate all contributions to improve MMSegmentation. Please refer to CONTRIBUTING.md for the contributing guideline.

Acknowledgement

MMSegmentation is an open source project that welcomes any contribution and feedback. We wish that the toolbox and benchmark could serve the growing research community by providing a flexible and standardized toolkit to reimplement existing methods and develop new semantic segmentation methods.

Citation

If you find this project useful in your research, please consider citing:

@misc{mmseg2020,
    title={{MMSegmentation}: OpenMMLab Semantic Segmentation Toolbox and Benchmark},
    author={MMSegmentation Contributors},
    howpublished = {\url{https://github.com/open-mmlab/mmsegmentation}},
    year={2020}
}

License

This project is released under the Apache 2.0 license.

OpenMMLab Family

  • MMEngine: OpenMMLab foundational library for training deep learning models.
  • MMCV: OpenMMLab foundational library for computer vision.
  • MMPreTrain: OpenMMLab pre-training toolbox and benchmark.
  • MMagic: OpenMMLab Advanced, Generative and Intelligent Creation toolbox.
  • MMDetection: OpenMMLab detection toolbox and benchmark.
  • MMYOLO: OpenMMLab YOLO series toolbox and benchmark.
  • MMDetection3D: OpenMMLab’s next-generation platform for general 3D object detection.
  • MMRotate: OpenMMLab rotated object detection toolbox and benchmark.
  • MMTracking: OpenMMLab video perception toolbox and benchmark.
  • MMSegmentation: OpenMMLab semantic segmentation toolbox and benchmark.
  • MMOCR: OpenMMLab text detection, recognition, and understanding toolbox.
  • MMPose: OpenMMLab pose estimation toolbox and benchmark.
  • MMHuman3D: OpenMMLab 3D human parametric model toolbox and benchmark.
  • MMFewShot: OpenMMLab fewshot learning toolbox and benchmark.
  • MMAction2: OpenMMLab’s next-generation action understanding toolbox and benchmark.
  • MMFlow: OpenMMLab optical flow toolbox and benchmark.
  • MMDeploy: OpenMMLab Model Deployment Framework.
  • MMRazor: OpenMMLab model compression toolbox and benchmark.
  • MIM: MIM installs OpenMMLab packages.
  • Playground: A central hub for gathering and showcasing amazing projects built upon OpenMMLab.