MiLo: Efficient Quantized MoE Inference with Mixture of Low-Rank Compensators

¹University of Illinois Urbana-Champaign  ²Work done while interning at UIUC  *Equal Contributors

News

  • 🎉 2025-4-4 The arXiv version is available at arXiv:2504.02658.
  • 🎉 2025-4-1 MiLo has successfully passed the Artifact Evaluation of MLSys 2025.
  • 🚀 2025-3-29 MiLo is open-sourced at link.
  • 🎉 2025-2-13 MiLo is accepted by MLSys 2025.

Abstract

A critical approach for efficiently deploying Mixture-of-Experts (MoE) models with massive parameters is quantization. However, state-of-the-art MoE models suffer from non-negligible accuracy loss with extreme quantization, such as under 4 bits. To address this, we introduce MiLo, a novel method that augments highly quantized MoEs with a mixture of low-rank compensators. These compensators consume only a small amount of additional memory but significantly recover accuracy loss from extreme quantization. MiLo also identifies that MoE models exhibit distinctive characteristics across weights due to their hybrid dense-sparse architectures, and employs adaptive rank selection policies along with iterative optimizations to close the accuracy gap. MiLo does not rely on calibration data, allowing it to generalize to different MoE models and datasets without overfitting to a calibration set. To avoid the hardware inefficiencies of extreme quantization, such as 3-bit, MiLo develops Tensor Core-friendly 3-bit kernels, enabling measured latency speedups on 3-bit quantized MoE models. Our evaluation shows that MiLo outperforms existing methods on SoTA MoE models across various tasks.

Method

  • MiLo introduces Low-rank Compensators (LoRC) to compensate for quantization error with minimal additional memory overhead. Quantization and LoRC are made complementary through an iterative optimization process that requires no calibration dataset, yielding a fast and efficient algorithm. MiLo also supports a bespoke LoRC rank-selection strategy, which makes it adaptive to memory constraints while maintaining high quantization quality (see the iterative-optimization sketch after this list).
    Figure: MiLo.
  • MiLo explores the rank sensitivity of LoRC from two perspectives, model structure and weight statistics, to guide rank selection. Specifically, MiLo finds that dense layers, including shared experts and self-attention layers, deserve higher ranks. MiLo also observes a positive correlation between the kurtosis of a weight and its quantization error, which naturally leads to assigning a higher rank to weights with high kurtosis (see the rank-selection sketch after this list).
    Figure: Kurtosis value and quantization error.
  • MiLo also introduces an efficient INT3 CUDA kernel to accelerate inference, filling a gap in this area. The kernel incorporates memory-efficient bit-packing, efficient INT3-to-FP16 dequantization, and an optimized matrix-multiplication pipeline (see the bit-packing sketch after this list).
    Figure: MiLo kernel.
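The iterative interplay between quantization and the low-rank compensators can be sketched as an alternating loop: quantize the weight with the current compensator subtracted, then refit the compensator to the remaining quantization residual via a truncated SVD. The PyTorch sketch below illustrates that loop under simple assumptions; the per-tensor uniform quantizer and the names uniform_quantize and iterative_lorc are illustrative stand-ins, not MiLo's actual grouped quantizer or API.

import torch

def uniform_quantize(w: torch.Tensor, n_bits: int = 3) -> torch.Tensor:
    # Simple per-tensor asymmetric uniform quantizer, used only for illustration;
    # returns the dequantized (fake-quantized) tensor.
    qmax = 2 ** n_bits - 1
    w_min, w_max = w.min(), w.max()
    scale = (w_max - w_min).clamp(min=1e-8) / qmax
    q = torch.clamp(torch.round((w - w_min) / scale), 0, qmax)
    return q * scale + w_min

def iterative_lorc(w: torch.Tensor, rank: int, n_bits: int = 3, iters: int = 5):
    # Alternate between (1) quantizing the weight minus the current low-rank
    # compensator and (2) refitting the compensator to the quantization residual.
    # The objective is plain weight reconstruction, so no calibration data is needed.
    L = torch.zeros_like(w)
    for _ in range(iters):
        w_q = uniform_quantize(w - L, n_bits)      # 3-bit approximation of the remaining part
        residual = w - w_q                         # what quantization failed to capture
        U, S, Vh = torch.linalg.svd(residual, full_matrices=False)
        A = U[:, :rank] * S[:rank]                 # rank-r factors, kept in FP16 at inference
        B = Vh[:rank, :]
        L = A @ B                                  # updated low-rank compensator
    return w_q, A, B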
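The rank-selection policy described above can be approximated by a small heuristic: give larger ranks to dense components (self-attention, shared experts) and to weights with high kurtosis. The sketch below is a hypothetical illustration; the layer-name matching, threshold, and rank values (pick_rank, excess_kurtosis, base_rank=16, high_rank=64) are assumptions, not the paper's tuned policy.

import torch

def excess_kurtosis(w: torch.Tensor) -> float:
    # Fourth standardized moment minus 3; heavy-tailed (outlier-prone) weights score high.
    x = w.flatten().float()
    x = x - x.mean()
    var = x.pow(2).mean().clamp(min=1e-12)
    return (x.pow(4).mean() / var.pow(2) - 3.0).item()

def pick_rank(layer_name: str, w: torch.Tensor,
              base_rank: int = 16, high_rank: int = 64,
              kurtosis_threshold: float = 3.0) -> int:
    # Heuristic policy: dense components (self-attention, shared experts) and
    # heavy-tailed weight matrices get a larger compensator rank; routed-expert
    # weights fall back to the smaller base rank.
    if "self_attn" in layer_name or "shared_expert" in layer_name:
        return high_rank
    if excess_kurtosis(w) > kurtosis_threshold:
        return high_rank
    return base_rank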
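The bit-packing layout and INT3-to-FP16 dequantization that the CUDA kernel performs in registers can be illustrated in plain PyTorch. The sketch below assumes ten 3-bit codes per 32-bit word and a per-tensor scale and zero point; the layout, names (pack_int3, unpack_int3_to_fp16), and granularity are illustrative assumptions, and the real kernel fuses this dequantization with the Tensor Core matrix multiplication rather than materializing FP16 weights.

import torch

VALS_PER_WORD = 10  # 10 x 3 bits = 30 bits, packed into one 32-bit word (2 bits unused)

def pack_int3(codes: torch.Tensor) -> torch.Tensor:
    # Pack a 1-D tensor of 3-bit codes (integer values 0..7) into int32 words.
    assert codes.numel() % VALS_PER_WORD == 0, "pad the codes to a multiple of 10 first"
    codes = codes.to(torch.int32).view(-1, VALS_PER_WORD)
    packed = torch.zeros(codes.shape[0], dtype=torch.int32)
    for i in range(VALS_PER_WORD):
        packed |= codes[:, i] << (3 * i)
    return packed

def unpack_int3_to_fp16(packed: torch.Tensor, scale: float, zero: float) -> torch.Tensor:
    # Recover the 3-bit codes with shifts and masks, then dequantize to FP16.
    codes = torch.stack([(packed >> (3 * i)) & 0x7 for i in range(VALS_PER_WORD)], dim=1)
    return (codes.reshape(-1).to(torch.float16) - zero) * scale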

Evaluation

We apply MiLo to popular open-source MoE models, including Mixtral-8x7B and DeepSeek-MoE, and evaluate performance on a wide range of benchmarks. Our evaluation shows that MiLo preserves 87% of WikiText-2 perplexity and 97% of zero-shot accuracy at a 22% compression ratio. MiLo also outperforms other quantization methods in both quantization quality and quantization speed, thanks to the adaptive LoRC and its optimization-based algorithm.


BibTeX

@article{huang2025milo,
  title={MiLo: Efficient Quantized MoE Inference with Mixture of Low-Rank Compensators},
  author={Huang, Beichen and Yuan, Yueming and Shao, Zelei and Zhang, Minjia},
  journal={arXiv preprint arXiv:2504.02658},
  year={2025}
}