Introduction
MMEngine is a foundational library for training deep learning models based on PyTorch. It serves as the training engine of all OpenMMLab codebases, which support hundreds of algorithms in various research areas. Moreover, MMEngine is generic enough to be applied to non-OpenMMLab projects. Its highlights are as follows:
Integrates mainstream large-scale model training frameworks
Supports a variety of training strategies
Provides a user-friendly configuration system
Covers mainstream training monitoring platforms
Get Started
Taking the training of a ResNet-50 model on the CIFAR-10 dataset as an example, we will use MMEngine to build a complete, configurable training and validation process in less than 80 lines of code.
Build Models
First, we need to define a model which 1) inherits from BaseModel and 2) accepts an additional argument mode in the forward method, in addition to those arguments related to the dataset.
During training, the value of mode is “loss”, and the forward method should return a dict containing the key “loss”.
During validation, the value of mode is “predict”, and the forward method should return results containing both predictions and labels.
import torch.nn.functional as F
import torchvision
from mmengine.model import BaseModel


class MMResNet50(BaseModel):
    def __init__(self):
        super().__init__()
        self.resnet = torchvision.models.resnet50()

    def forward(self, imgs, labels, mode):
        x = self.resnet(imgs)
        if mode == 'loss':
            return {'loss': F.cross_entropy(x, labels)}
        elif mode == 'predict':
            return x, labels
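As a quick, standalone sanity check (not part of the original example; the input sizes and labels below are arbitrary illustrative values), the two modes can be exercised by calling the model directly:

import torch

model = MMResNet50()
imgs = torch.rand(2, 3, 32, 32)   # a toy batch of two RGB images
labels = torch.tensor([0, 1])     # toy ground-truth labels

loss_dict = model(imgs, labels, mode='loss')        # {'loss': tensor(...)}
scores, gts = model(imgs, labels, mode='predict')   # raw class scores and labels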
Build Datasets
Next, we need to create Datasets and DataLoaders for training and validation.
In this case, we simply use built-in datasets supported in TorchVision.
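The dataset and dataloader code is not reproduced here; a minimal sketch using TorchVision's built-in CIFAR-10 dataset and a plain PyTorch DataLoader could look like the following (the data root, batch size, and normalization statistics are illustrative choices):

import torchvision
import torchvision.transforms as transforms
from torch.utils.data import DataLoader

# Illustrative per-channel normalization statistics for CIFAR-10
norm_cfg = dict(mean=[0.491, 0.482, 0.447], std=[0.202, 0.199, 0.201])

train_dataloader = DataLoader(
    batch_size=32,
    shuffle=True,
    dataset=torchvision.datasets.CIFAR10(
        'data/cifar10',
        train=True,
        download=True,
        transform=transforms.Compose([
            transforms.RandomCrop(32, padding=4),
            transforms.RandomHorizontalFlip(),
            transforms.ToTensor(),
            transforms.Normalize(**norm_cfg)])))

val_dataloader = DataLoader(
    batch_size=32,
    shuffle=False,
    dataset=torchvision.datasets.CIFAR10(
        'data/cifar10',
        train=False,
        download=True,
        transform=transforms.Compose([
            transforms.ToTensor(),
            transforms.Normalize(**norm_cfg)])))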
Build Metrics
To validate and test the model, we need to define a Metric called accuracy to evaluate the model. This metric needs to inherit from BaseMetric and implement the process and compute_metrics methods.
from mmengine.evaluator import BaseMetric


class Accuracy(BaseMetric):
    def process(self, data_batch, data_samples):
        score, gt = data_samples
        # Save the results of a batch to `self.results`
        self.results.append({
            'batch_size': len(gt),
            'correct': (score.argmax(dim=1) == gt).sum().cpu(),
        })

    def compute_metrics(self, results):
        total_correct = sum(item['correct'] for item in results)
        total_size = sum(item['batch_size'] for item in results)
        # Returns a dictionary with the results of the evaluated metrics,
        # where the key is the name of the metric
        return dict(accuracy=100 * total_correct / total_size)
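To make the accumulate-then-compute flow concrete, here is a small standalone illustration (not part of the original example; during actual validation the Runner's evaluator calls process once per batch and compute_metrics once at the end):

import torch

metric = Accuracy()
scores = torch.tensor([[0.1, 0.9], [0.8, 0.2]])  # toy predictions for two samples
gts = torch.tensor([1, 1])                        # toy ground-truth labels
metric.process(data_batch=None, data_samples=(scores, gts))
print(metric.compute_metrics(metric.results))     # accuracy of 50.0 on this toy batch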
Build a Runner
Finally, we can construct a Runner with the previously defined model, dataloaders, and metric, together with some other configs, as shown below.
from torch.optim import SGD
from mmengine.runner import Runner

runner = Runner(
    model=MMResNet50(),
    work_dir='./work_dir',
    train_dataloader=train_dataloader,
    # a wrapper to execute back propagation and gradient update, etc.
    optim_wrapper=dict(optimizer=dict(type=SGD, lr=0.001, momentum=0.9)),
    # set some training configs like epochs
    train_cfg=dict(by_epoch=True, max_epochs=5, val_interval=1),
    val_dataloader=val_dataloader,
    val_cfg=dict(),
    val_evaluator=dict(type=Accuracy),
)
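Launch Training
With the Runner constructed, we can launch the whole training and validation loop with a single call:

runner.train()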
What’s New
v0.10.6 was released on 2025-01-13.
Highlights:
Support artifact_location in MLflowVisBackend (#1505)
Support exclude_frozen_parameters for DeepSpeedEngine._zero3_consolidated_16bit_state_dict (#1517)
Read Changelog for more details.
Installation
Supported PyTorch Versions
Before installing MMEngine, please ensure that PyTorch has been successfully installed following the official guide.
Install MMEngine
Verify the installation
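The install commands and verification snippet are not reproduced here. MMEngine is published on PyPI, so installing the mmengine package with pip is one straightforward route; once installed, importing the package and printing its version is a quick way to verify that everything works, for example:

import mmengine

print(mmengine.__version__)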
Learn More
Tutorials
Advanced tutorials
Examples
Common Usage
Design
Migration guide
Contributing
We appreciate all contributions to improve MMEngine. Please refer to CONTRIBUTING.md for the contributing guideline.
Citation
If you find this project useful in your research, please consider citing it.
License
This project is released under the Apache 2.0 license.
Ecosystem
Projects in OpenMMLab