Quantization is a critical approach for efficiently deploying Mixture-of-Experts (MoE) models with massive parameter counts. However, state-of-the-art MoE models suffer non-negligible accuracy loss under extreme quantization, such as below 4 bits. To address this, we introduce MiLo, a novel method that augments highly quantized MoEs with a mixture of low-rank compensators. These compensators consume only a small amount of additional memory yet significantly recover the accuracy lost to extreme quantization. MiLo also identifies that MoE models exhibit distinctive characteristics across their weights due to their hybrid dense-sparse architectures, and employs adaptive rank selection policies along with iterative optimization to close the accuracy gap. MiLo does not rely on calibration data, allowing it to generalize to different MoE models and datasets without overfitting to a calibration set. To avoid the hardware inefficiencies of extreme quantization, such as 3-bit weights, MiLo includes Tensor Core-friendly 3-bit kernels that deliver measured latency speedups on 3-bit quantized MoE models. Our evaluation shows that MiLo outperforms existing methods on state-of-the-art MoE models across various tasks.
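To make the core idea concrete, below is a minimal sketch of low-rank compensation for a quantized weight, i.e. approximating W as Q(W) + A·B. This is an illustrative assumption, not MiLo's actual algorithm: the simple round-to-nearest 3-bit quantizer, the fixed rank, and the one-shot SVD of the residual are placeholders for MiLo's adaptive rank selection and iterative optimization.

```python
# Minimal sketch (assumed, not MiLo's implementation): augment a quantized
# weight with a low-rank compensator so that W ~= Q(W) + A @ B.
import torch

def quantize_rtn(w: torch.Tensor, bits: int = 3) -> torch.Tensor:
    """Per-output-channel round-to-nearest quantization, returned dequantized."""
    qmax = 2 ** (bits - 1) - 1                       # e.g. 3 for signed 3-bit
    scale = w.abs().amax(dim=1, keepdim=True) / qmax
    q = torch.clamp(torch.round(w / scale), -qmax - 1, qmax)
    return q * scale                                  # dequantized weights Q(W)

def low_rank_compensator(w: torch.Tensor, w_q: torch.Tensor, rank: int = 16):
    """One-shot truncated SVD of the quantization residual W - Q(W)."""
    residual = w - w_q
    U, S, Vh = torch.linalg.svd(residual, full_matrices=False)
    A = U[:, :rank] * S[:rank]                        # (out_features, rank)
    B = Vh[:rank, :]                                  # (rank, in_features)
    return A, B

# Usage: compensated forward pass y = x @ (Q(W) + A @ B)^T
w = torch.randn(4096, 4096)
w_q = quantize_rtn(w, bits=3)
A, B = low_rank_compensator(w, w_q, rank=16)
x = torch.randn(1, 4096)
y = x @ (w_q + A @ B).T
```

In practice the compensator adds only the A and B factors (rank « hidden size) on top of the packed low-bit weights, which is why its memory overhead stays small relative to the accuracy it recovers.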
We apply MiLo to popular open-source MoE models, including Mixtral-8x7B and DeepSeek-MoE, and evaluate performance on a wide range of benchmarks. Our evaluation shows that MiLo preserves 87% of WikiText-2 perplexity and 97% of zero-shot accuracy at a 22% compression ratio. MiLo also outperforms other quantization methods in both quantization quality and quantization speed, owing to its adaptive low-rank compensators (LoRC) and optimization-based algorithm.
@article{huang2025milo,
  title={MiLo: Efficient Quantized MoE Inference with Mixture of Low-Rank Compensators},
  author={Huang, Beichen and Yuan, Yueming and Shao, Zelei and Zhang, Minjia},
  journal={arXiv preprint arXiv:2504.02658},
  year={2025}
}