DAIBench

DAIBench (DiDi Cloud AI Benchmark) aims to provide a suite of GPU benchmarks for AI production environments, spanning different types of GPU servers and cloud environments. It is meant to give users effective and credible test results that lay a solid data foundation and technical reference for later stages such as hardware selection, software and library optimization, business model improvement, and link stress testing.
Supported Features

- Evaluation from hardware to application
- Adaptation of existing foreign benchmarks (e.g., DeepBench)
- Coverage of virtualized cloud environments and business-specific scenarios
General Structure
DAIBench takes existing GPU performance testing tools into account and divides its metrics into a hardware layer, a framework (operator) layer, and a model (algorithm) layer.
For each layer, DAIBench currently supports the following tests:
| Layer | Supported Tests |
|---|---|
| Hardware layer | Metrics of the hardware itself: compute metrics such as peak throughput (TFLOPS/TOPS), plus I/O metrics such as memory access bandwidth and PCIe communication bandwidth. |
| Framework/operator layer | Performance of commonly used operators (convolution, Softmax, matrix multiplication, etc.) on mainstream AI frameworks. |
| Model layer | End-to-end evaluation of models selected from a range of production tasks. |
Getting started
Hardware Layer
```bash
cd <test_folder>
bash install.sh
bash run.sh
```
For GPU tests, please install a suitable NVIDIA driver and CUDA toolkit first.
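To confirm that a working driver and CUDA toolkit are visible before running, the standard NVIDIA tools can be used (these are generic checks, not part of DAIBench):

```bash
nvidia-smi        # reports the installed driver version and the GPUs it can see
nvcc --version    # reports the installed CUDA toolkit version
```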
Operator Layer
The current operator layer is based on DeepBench.
To run GEMM, convolution, recurrent op and sparse GEMM benchmarks:
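The commands below are a minimal sketch, assuming the DeepBench kernels have already been built and that the binaries live under DeepBench's usual code/nvidia/bin directory; the binary names follow DeepBench's conventions and may differ from what this repository's scripts actually invoke:

```bash
# Assumed location of the compiled DeepBench binaries; adjust to your build.
cd DeepBench/code/nvidia/bin

./gemm_bench      # dense GEMM benchmark
./conv_bench      # convolution benchmark
./rnn_bench       # recurrent-op benchmark
./sparse_bench    # sparse GEMM benchmark
```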
To execute the NCCL single All-Reduce benchmark:
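As a sketch, DeepBench's single-node NCCL binary takes the number of GPUs to use as its argument (the binary name and argument follow DeepBench's documentation and are not defined by DAIBench):

```bash
# Run the NCCL all-reduce benchmark on a single machine across <num_gpus> GPUs.
./nccl_single_all_reduce <num_gpus>
```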
The NCCL MPI All-Reduce benchmark can be run using mpirun as shown below:
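A hedged sketch of the MPI launch, again using the DeepBench binary name; one rank per GPU is the usual setup:

```bash
# Launch the MPI-based NCCL all-reduce with <num_ranks> ranks (one rank per GPU).
mpirun -np <num_ranks> ./nccl_mpi_all_reduce
```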
num_ranks cannot be greater than the number of GPUs in the system.
Model Layer
docker and nvidia-docker are required for model testing. To run a specific model, please read the Readme.md in its folder.

General test procedure:
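The exact steps live in each model folder's Readme.md; the outline below is only an assumption, mirroring the install/run pattern used by the hardware-layer tests:

```bash
# Hypothetical outline; always follow the Readme.md inside the model folder.
cd <model_folder>
cat Readme.md        # model-specific prerequisites and run instructions
bash install.sh      # assumed to build or pull the model's docker image
bash run.sh          # assumed to launch the benchmark via docker/nvidia-docker
```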
Developer guide

See the wiki for guidelines.

Contributing
Contributions are welcome; please create issues or send pull requests. See the Contributing Guide for guidelines.

License
DAIBench is licensed under the Apache License 2.0.