Improvements for: Groupwise scaling along M for FP8 gemm (#2095)
- fix blockwise fp8 kernels
- wip, < 128 not working
- fix < 128
- reduce diff
- review comments
- support partial n blocks
- fix build errors

Signed-off-by: Lucas Wilkinson <lwilkinson@neuralmagic.com>
CUTLASS 3.8.0
CUTLASS 3.8.0 - January 2025
CUTLASS is a collection of CUDA C++ template abstractions for implementing high-performance matrix-matrix multiplication (GEMM) and related computations at all levels and scales within CUDA. It incorporates strategies for hierarchical decomposition and data movement similar to those used to implement cuBLAS and cuDNN. CUTLASS decomposes these "moving parts" into reusable, modular software components abstracted by C++ template classes. Primitives for different levels of a conceptual parallelization hierarchy can be specialized and tuned via custom tiling sizes, data types, and other algorithmic policies. The resulting flexibility simplifies their use as building blocks within custom kernels and applications.
To support a wide variety of applications, CUTLASS provides extensive support for mixed-precision computations, providing specialized data-movement and multiply-accumulate abstractions for FP64, FP32, TF32, FP16, BF16, FP32 emulation via tensor core instruction, 8b floating point types (e5m2 and e4m3), block scaled data types (NVIDIA NVFP4 and OCP standard MXFP4, MXFP6, MXFP8), narrow integer types (4 and 8b signed and unsigned integers), and binary 1b data types (where architectures allow for the native support of such data types). CUTLASS demonstrates optimal matrix multiply operations targeting the programmable, high-throughput Tensor Cores implemented by NVIDIA’s Volta, Turing, Ampere, Ada, Hopper, and Blackwell architectures.
In addition to GEMMs, CUTLASS implements high-performance convolution via the implicit GEMM algorithm. Implicit GEMM is the formulation of a convolution operation as a GEMM thereby taking advantage of CUTLASS’s modular GEMM pipeline. This allows CUTLASS to build convolutions by reusing highly-optimized GEMM components.
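As a concrete sketch of that formulation, the snippet below computes the standard implicit-GEMM dimension mapping for a forward convolution. This is illustrative arithmetic, not CUTLASS API; the variable names follow common convolution notation, and unit stride with no padding is assumed.

```cpp
#include <cstdio>

int main() {
  // Forward convolution: NHWC activations, KRSC filters
  // (unit stride, no padding -- assumptions for this sketch).
  int N = 8, H = 224, W = 224, C = 128;  // activations
  int K = 128, R = 3, S = 3;             // filters
  int P = H - R + 1, Q = W - S + 1;      // output height/width

  // The convolution is computed as a single GEMM of these dimensions,
  // with the im2col matrix formed implicitly during the mainloop.
  int gemm_m = N * P * Q;  // one GEMM row per output pixel
  int gemm_n = K;          // one GEMM column per output channel
  int gemm_k = C * R * S;  // reduction over the filter footprint
  std::printf("implicit GEMM: %d x %d x %d\n", gemm_m, gemm_n, gemm_k);
  return 0;
}
```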
See the Quick Start Guide to get started quickly.
See the functionality docs for a more comprehensive list of kernel-level features, data types, instructions, and the minimum CUDA toolkit versions supported by CUTLASS on each GPU architecture.
What’s New in CUTLASS 3.8
CUTLASS 3.8 is the first release that supports the NVIDIA Blackwell SM100 architecture. For a background on Blackwell’s new features, please consult the PTX documentation for CUDA 12.8.
Highlights include:
- Support for the Blackwell SM100 tensor memory, `tmem`, across CuTe as a first-class data locale.
- `tmem->rmem`, `rmem->tmem`, and `smem->tmem` data movement instructions as copy atoms in CuTe.
- A `make_tmem_copy()` utility method to ease creation of tiled copies for `tmem` copy atoms.
- Epilogue support for `tmem` and the full set of EVT fusions.
- A profiler option, `use-cuda-graphs`, to reduce overheads when benchmarking launch-bound kernels.
- Grouped GEMM support in the profiler (see `./cutlass_profiler --operation=GroupedGemm --help` for details).
See the CHANGELOG for details of all past releases and updates.
Performance
CUTLASS primitives are very efficient. When used to construct device-wide GEMM kernels, they exhibit nearly optimal utilization of peak theoretical throughput. The figure below shows CUTLASS 3.8’s performance as a % of theoretical peak utilization on various input and output data types when run on NVIDIA Blackwell SM100 architecture GPU.
The two figures below show the continual CUTLASS performance improvements on an NVIDIA H100 (NVIDIA Hopper architecture) since CUTLASS 3.1. CUTLASS 3.5.1 was compiled with the CUDA 12.5u1 Toolkit. Tensor Core operations are implemented using CUDA’s mma and wgmma instructions.
CuTe
CUTLASS 3.0 introduced a new core library, CuTe, to describe and manipulate tensors of threads and data. CuTe is a collection of C++ CUDA template abstractions for defining and operating on hierarchically multidimensional layouts of threads and data. CuTe provides `Layout` and `Tensor` objects that compactly package the type, shape, memory space, and layout of data, while performing the complicated indexing for the user. This lets programmers focus on the logical descriptions of their algorithms while CuTe does the mechanical bookkeeping for them. With these tools, we can quickly design, implement, and modify all dense linear algebra operations.

The core abstractions of CuTe are hierarchically multidimensional layouts which can be composed with data arrays to represent tensors. The representation of layouts is powerful enough to represent nearly everything we need to implement efficient dense linear algebra. Layouts can also be combined and manipulated via functional composition, on which we build a large set of common operations such as tiling and partitioning.
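A minimal sketch of these abstractions, using the public CuTe API (the shape and strides here are arbitrary examples):

```cpp
#include <cute/tensor.hpp>

int main() {
  using namespace cute;

  float data[32];

  // A static 4x8 row-major layout: shape (4,8), strides (8,1).
  auto layout = make_layout(make_shape(Int<4>{}, Int<8>{}),
                            make_stride(Int<8>{}, Int<1>{}));

  // Compose the layout with a pointer to form a Tensor; CuTe now
  // performs all index arithmetic on the user's behalf.
  auto t = make_tensor(&data[0], layout);
  t(2, 3) = 1.0f;  // offset 2*8 + 3 = 19, computed by the layout

  return 0;
}
```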
CUTLASS 3.0 and beyond adopts CuTe throughout the GEMM hierarchy in its templates. This greatly simplifies the design and improves code composability and readability. More documentation specific to CuTe can be found in its dedicated documentation directory.
Compatibility
Minimum requirements:
CUTLASS requires a C++17 host compiler and performs best when built with the CUDA 12.8 Toolkit. It is also compatible with CUDA 11.4, CUDA 11.5, CUDA 11.6, CUDA 11.7, CUDA 11.8, and all other CUDA 12.x versions.
Operating Systems
We have tested the following environments.
Note: GCC 8.5.0 has known regressions regarding fold expressions and overloaded operators. Using GCC 7.5.0 or (preferred) GCC >= 9 is recommended.
Note: CUTLASS 3.x builds are known to be broken on Windows platforms for all CUDA toolkits. The CUTLASS team is working on a fix.
Hardware
CUTLASS runs successfully on the following NVIDIA GPUs, and it is expected to be efficient on Volta, Turing, Ampere, Ada, and Hopper architecture based NVIDIA GPUs.
Target Architecture
In general, PTX code generated for one target architecture can be run on future architectures (i.e., it is forward compatible). However, CUDA 12.0 introduced the concept of "architecture-accelerated features" whose PTX does not have forward compatibility guarantees. Several Hopper and Blackwell PTX instructions fall under this category of architecture-accelerated features, and thus require a `sm_90a` or `sm_100a` target architecture (note the "a" appended). For more details on this and other architecture-accelerated instructions, please refer to the CUDA Documentation.

The target architecture information is passed on to CUTLASS via the CMake flag `CUTLASS_NVCC_ARCHS`. In order to maximize performance on Hopper GH100, users are required to build CUTLASS with `90a` as the target architecture. If a user accidentally builds a kernel that uses SM90a features (e.g., Hopper Tensor Core instructions) with the SM90 target (note the lack of "a"), under either CUDA Toolkit 12 or 11.8, the kernel is expected to fail with a runtime error.
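For illustration, the difference at build time is only the suffix on the value passed to CMake (`90a` enables the architecture-accelerated PTX; `90` does not):

```bash
cmake .. -DCUTLASS_NVCC_ARCHS=90a   # SM90a: Hopper arch-accelerated instructions available
cmake .. -DCUTLASS_NVCC_ARCHS=90    # SM90: forward-compatible PTX only
```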
Note: The NVIDIA Blackwell SM100 architecture used in datacenter products has a different compute capability than the one underpinning the NVIDIA Blackwell GeForce RTX 50 series GPUs. As a result, kernels compiled for the Blackwell SM100 architecture with arch-conditional features (using `sm_100a`) are not compatible with RTX 50 series GPUs.

Please refer to the functionality documentation for details on which kernels require which target architectures.
Documentation
CUTLASS is described in the following documents and the accompanying Doxygen documentation.
Resources
We have also described the structure of an efficient GEMM in our talk at the GPU Technology Conference 2018.
Building CUTLASS
CUTLASS is a header-only template library and does not need to be built to be used by other projects. Client applications should target CUTLASS's `include/` directory in their include paths.

CUTLASS unit tests, examples, and utilities can be built with CMake. The minimum version of CMake is given in the Quickstart guide. Make sure the `CUDACXX` environment variable points to NVCC in the CUDA Toolkit installed on your system.

Create a build directory within the CUTLASS project, then run CMake. By default CUTLASS will build kernels for CUDA architecture versions 5.0, 6.0, 6.1, 7.0, 7.5, 8.0, 8.6, 8.9, and 9.0. To reduce compile time you can specify the architectures to build CUTLASS for by changing the CMake configuration setting `CUTLASS_NVCC_ARCHS`.

From the `build/` directory, compile and run the CUTLASS unit tests by building the target `test_unit` with make.

The unit tests are organized as several binaries mirroring the top-level namespaces of CUTLASS, and they may be executed in parallel via make's `-j` command line argument.

All tests should pass on supported platforms, though the exact number of tests may vary over time.
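A representative end-to-end flow (the CUDA install path and the architecture value are assumptions for illustration):

```bash
export CUDACXX=/usr/local/cuda/bin/nvcc   # point CMake at NVCC
mkdir build && cd build
cmake .. -DCUTLASS_NVCC_ARCHS=80          # build only for Ampere to reduce compile time
make test_unit -j                         # build and run the unit tests in parallel
```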
Project Structure
CUTLASS is arranged as a header-only library along with Utilities, Tools, Examples, and unit tests. Doxygen documentation provides a complete list of files, classes, and template concepts defined in the CUTLASS project.
A detailed explanation of the source code organization may be found in the CUTLASS documentation, but several main components are summarized below.
CUTLASS Template Library
CUTLASS SDK Examples
CUTLASS SDK examples apply CUTLASS templates to implement basic computations.
Tools
Test
The `test/unit/` directory consists of unit tests implemented with Google Test that demonstrate basic usage of Core API components and complete tests of the CUTLASS GEMM computations.

Instructions for building and running the unit tests are described in the Quickstart guide.
Performance Profiling
The `tools/profiler/` directory contains a command-line utility for launching each of the GEMM kernels. It can be built as follows:
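A representative invocation from the `build/` directory (the parallelism level is illustrative):

```bash
make cutlass_profiler -j16
```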
Building all GEMM and Convolution kernels (long build times)

By default, only one tile size is instantiated for each data type, math instruction, and layout. To instantiate all of them, set the following variable when running CMake from an empty `build/` directory.
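For example (the architecture value is illustrative):

```bash
cmake .. -DCUTLASS_NVCC_ARCHS=90a -DCUTLASS_LIBRARY_KERNELS=all
```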
Beware: this results in tens of thousands of kernels and long build times. It also produces a large binary, and on some platforms the linker will fail to build the library. It is therefore highly recommended to generate only a subset of kernels, as demonstrated in the sub-section below.

Building a subset of GEMM and Convolution kernels (reduced build times)
To compile strictly one kernel or a small set of kernels, a comma-delimited list of kernel names with wildcard characters may be used to reduce the set of kernels. The following examples show building exactly one or a subset of kernels for NVIDIA Ampere and Turing architectures:
Building a subset of Tensor Core GEMM kernels
To compile a subset of Tensor Core GEMM kernels with FP32 accumulation and FP16 input targeting NVIDIA Ampere and Turing architecture, use the below cmake command line:
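A representative configuration; the kernel-name filter follows the CUTLASS kernel naming scheme and is illustrative:

```bash
cmake .. -DCUTLASS_NVCC_ARCHS='75;80' \
  -DCUTLASS_LIBRARY_KERNELS='cutlass_tensorop_s*gemm_f16_*_nt_align8'
```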
Example command line for profiling a subset of Tensor Core GEMM kernels is as follows:
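For example (the kernel filter and problem sizes are illustrative):

```bash
./tools/profiler/cutlass_profiler \
  --kernels='cutlass_tensorop_s*gemm_f16_*_nt_align8' \
  --m=3456 --n=4096 --k=4096
```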
Building one CUDA Core GEMM kernel
To compile one SGEMM kernel targeting NVIDIA Ampere and Turing architecture, use the below cmake command line:
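A representative configuration (the exact kernel name is illustrative of the naming scheme):

```bash
cmake .. -DCUTLASS_NVCC_ARCHS='75;80' \
  -DCUTLASS_LIBRARY_KERNELS=cutlass_simt_sgemm_128x128_8x2_nn_align1
```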
Example command line for profiling single SGEMM CUDA kernel is as follows:
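For example (the problem sizes are illustrative):

```bash
./tools/profiler/cutlass_profiler --kernels=sgemm --m=3456 --n=4096 --k=4096
```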
Building a subset of Tensor Core Convolution kernels
To compile a subset of Tensor core convolution kernels implementing forward propagation (fprop) with FP32 accumulation and FP16 input targeting NVIDIA Ampere and Turing architecture, use the below cmake command line:
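A representative configuration; the kernel-name filter is illustrative:

```bash
cmake .. -DCUTLASS_NVCC_ARCHS='75;80' \
  -DCUTLASS_LIBRARY_KERNELS='cutlass_tensorop_s*fprop_optimized_f16'
```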
Example command line for profiling a subset of Tensor Core convolution kernels is as follows:
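For example (the kernel filter and convolution problem sizes are illustrative):

```bash
./tools/profiler/cutlass_profiler \
  --kernels='cutlass_tensorop_s*fprop_optimized_f16' \
  --n=8 --h=224 --w=224 --c=128 --k=128 --r=3 --s=3
```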
Building one Convolution CUDA kernel
To compile and run one CUDA Core convolution kernel implementing forward propagation (fprop) with FP32 accumulation and FP32 input targeting NVIDIA Ampere and Turing architecture, use the below cmake command line:
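A representative configuration (the exact kernel name is illustrative of the naming scheme):

```bash
cmake .. -DCUTLASS_NVCC_ARCHS='75;80' \
  -DCUTLASS_LIBRARY_KERNELS=cutlass_simt_sfprop_optimized_128x128_8x2_nhwc
```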
Example command line for profiling one CUDA Core convolution kernel:
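For example (the kernel name and convolution problem sizes are illustrative):

```bash
./tools/profiler/cutlass_profiler \
  --kernels=cutlass_simt_sfprop_optimized_128x128_8x2_nhwc \
  --n=8 --h=224 --w=224 --c=128 --k=128 --r=3 --s=3
```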
More Details on Compiling CUTLASS Kernels and CUTLASS Profiler
About
CUTLASS is released by NVIDIA Corporation as Open Source software under the 3-clause “New” BSD license.
Contributors
The official list of CUTLASS developers and contributors is available here: CONTRIBUTORS.
Copyright
Copyright (c) 2017 - 2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved. SPDX-License-Identifier: BSD-3-Clause