cuBLASLt Grouped GEMM
cuBLASLt exposes grouped GEMM through the library's usual descriptor-based workflow: create a matmul descriptor once, then submit all of the independent problems in a single call:
cublasLtMatmulDesc_t matmulDesc;
// FP32 compute, with an FP32 scale type to match the host-side float alpha/beta below.
cublasLtMatmulDescCreate(&matmulDesc, CUBLAS_COMPUTE_32F, CUDA_R_32F);
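// Hypothetical setup, not part of the original snippet: the names, shapes, and
// FP16 element type here are illustrative assumptions. Each group is an
// independent GEMM with its own problem size, so each gets its own buffers.
// (sizeof(__half) assumes <cuda_fp16.h> is included.)
int m[] = {64, 128, 32}, n[] = {64, 96, 256}, k[] = {32, 64, 128};
int groupCount = 3;
void *A_ptrs[3], *B_ptrs[3], *C_ptrs[3];
for (int g = 0; g < groupCount; ++g) {
    cudaMalloc(&A_ptrs[g], sizeof(__half) * m[g] * k[g]);
    cudaMalloc(&B_ptrs[g], sizeof(__half) * k[g] * n[g]);
    cudaMalloc(&C_ptrs[g], sizeof(__half) * m[g] * n[g]);
}
// The per-group plans (groupPlans below) are built separately; that setup is
// omitted here -- see the Developer Guide for the exact types in your CUDA version.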
float alpha = 1.0f, beta = 0.0f;
// One call launches every group; C_ptrs is passed twice because C serves as
// both the accumulator input and the output D.
cublasLtMatmulGrouped(handle, nullptr, matmulDesc, &alpha, &beta,
                      (void**)A_ptrs, (void**)B_ptrs, (void**)C_ptrs, (void**)C_ptrs,
                      groupCount, groupPlans);

cuBLASLt Grouped GEMM represents a paradigm shift for batched linear algebra on GPUs. It acknowledges that real-world workloads are irregular, heterogeneous, and dynamic. By moving the complexity of scheduling and fusion into the library, it lets developers write clean, expressive code that still achieves near-peak hardware performance.

If you're building a transformer-based model, a recommender system, or any application that requires many small, independent matrix multiplications, Grouped GEMM should be your default choice. As NVIDIA continues to optimize cuBLASLt for Hopper and future architectures, the performance gap between irregular and regular workloads will only shrink further. For implementation details, refer to the NVIDIA cuBLASLt Developer Guide (CUDA 12.x and later).
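For a concrete sense of what the single grouped call replaces, consider the pre-grouped baseline: a host-side loop issuing one cublasLtMatmul per problem, paying kernel-launch and layout-setup costs for every small GEMM. The sketch below reuses the assumed names from above (m, n, k, A_ptrs, and so on) and omits error checking; it is an illustration, not code from the Developer Guide.

for (int g = 0; g < groupCount; ++g) {
    // Column-major FP16 layouts, rebuilt per group because each problem has
    // its own m/n/k (leading dimension = number of rows here).
    cublasLtMatrixLayout_t Adesc, Bdesc, Cdesc;
    cublasLtMatrixLayoutCreate(&Adesc, CUDA_R_16F, m[g], k[g], m[g]);
    cublasLtMatrixLayoutCreate(&Bdesc, CUDA_R_16F, k[g], n[g], k[g]);
    cublasLtMatrixLayoutCreate(&Cdesc, CUDA_R_16F, m[g], n[g], m[g]);
    // One kernel launch per small problem: launch overhead dominates at these sizes.
    cublasLtMatmul(handle, matmulDesc, &alpha,
                   A_ptrs[g], Adesc, B_ptrs[g], Bdesc, &beta,
                   C_ptrs[g], Cdesc, C_ptrs[g], Cdesc,
                   nullptr /* let the heuristic pick an algorithm */,
                   nullptr, 0 /* no workspace */, 0 /* default stream */);
    cublasLtMatrixLayoutDestroy(Adesc);
    cublasLtMatrixLayoutDestroy(Bdesc);
    cublasLtMatrixLayoutDestroy(Cdesc);
}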