Abstract
Mixture-of-Experts (MoE) has emerged as an effective and efficient scaling mechanism for large language models (LLMs) and vision-language models (VLMs). By expanding a single feed-forward network into multiple expert branches, MoE increases model capacity while maintaining efficiency through sparse activation. However, despite this sparsity, the need to preload all experts into memory and activate multiple experts per input introduces significant computational and memory overhead. The expert module becomes the dominant contributor to model size and inference cost, posing a major challenge for deployment. To address this, we propose MC# (Mixture-Compressor-sharp), a unified framework that combines static quantization and dynamic expert pruning by leveraging the significance of both experts and tokens to achieve aggressive compression of MoE-LLMs/VLMs. To reduce storage and loading overhead, we introduce Pre-Loading Mixed-Precision Quantization (PMQ), which formulates adaptive bit allocation as a linear programming problem. The objective function jointly considers expert importance and quantization error, producing a Pareto-optimal trade-off between model size and performance. To reduce runtime computation, we further introduce Online Top-any Pruning (OTP), which models expert activation per token as a learnable distribution via Gumbel-Softmax sampling. During inference, OTP dynamically selects a subset of experts for each token, allowing fine-grained control over activation. By combining PMQ's static bit-width optimization with OTP's dynamic routing, MC# achieves extreme compression with minimal accuracy degradation. On DeepSeek-VL2, MC# achieves a 6.2× weight reduction at an average of 2.57 bits, with only a 1.7% drop across five multimodal benchmarks compared to the 16-bit baseline. Moreover, OTP further reduces expert activation by 20% with less than 1% performance loss, demonstrating strong potential for efficient deployment of MoE-based models.
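As a rough illustration of the bit-allocation idea behind PMQ, the sketch below poses mixed-precision assignment as a small integer linear program solved with `scipy.optimize.milp`: each expert picks one bit-width, the objective weights per-expert quantization error by expert importance, and the average bit-width is capped by a budget. The expert count, candidate bit-widths, importance scores, and error estimates are illustrative stand-ins, not the paper's actual formulation or values.

```python
# Hypothetical sketch of PMQ-style bit allocation as an integer linear program.
# All numbers below (experts, bit-widths, importance, errors) are made up.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

n_experts, bits = 8, np.array([2, 3, 4])        # assumed candidate bit-widths
rng = np.random.default_rng(0)
importance = rng.random(n_experts)              # stand-in expert importance
err = rng.random((n_experts, 1)) / bits         # stand-in error, decreasing with bits

# Decision variable x[e, b] = 1 iff expert e is quantized to bits[b].
cost = (importance[:, None] * err).ravel()      # importance-weighted error to minimize

# Constraint 1: each expert gets exactly one bit-width.
one_hot = np.zeros((n_experts, n_experts * len(bits)))
for e in range(n_experts):
    one_hot[e, e * len(bits):(e + 1) * len(bits)] = 1

# Constraint 2: total bits <= average budget (e.g. 2.57 bits per expert).
bit_cost = np.tile(bits, n_experts).astype(float)

res = milp(
    c=cost,
    constraints=[
        LinearConstraint(one_hot, lb=1, ub=1),
        LinearConstraint(bit_cost[None, :], ub=2.57 * n_experts),
    ],
    integrality=np.ones(cost.size),
    bounds=Bounds(0, 1),
)
assignment = res.x.reshape(n_experts, len(bits)).argmax(axis=1)
print("bits per expert:", bits[assignment])
```

Under this toy budget the solver spends extra bits only where importance-weighted error justifies them, which is the Pareto trade-off between size and performance the abstract describes.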
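Similarly, a minimal sketch of OTP-style "top-any" gating is shown next: a small gate predicts per-expert on/off logits for each token, trains with straight-through Gumbel-Softmax sampling, and at inference keeps a variable-size subset of experts per token. The gate architecture and decision rule are assumptions for illustration, not the paper's exact design.

```python
# Hypothetical sketch of per-token "top-any" expert gating with a
# Gumbel-Softmax relaxation; the gate design here is an assumption.
import torch
import torch.nn.functional as F

class TopAnyGate(torch.nn.Module):
    def __init__(self, d_model: int, n_experts: int, tau: float = 1.0):
        super().__init__()
        # Predict independent {off, on} logits per expert from the token.
        self.proj = torch.nn.Linear(d_model, n_experts * 2)
        self.tau = tau

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (tokens, d_model) -> logits: (tokens, n_experts, 2)
        logits = self.proj(x).view(x.shape[0], -1, 2)
        if self.training:
            # Differentiable hard samples via straight-through Gumbel-Softmax.
            mask = F.gumbel_softmax(logits, tau=self.tau, hard=True)[..., 1]
        else:
            # At inference, activate any expert whose "on" logit wins.
            mask = (logits[..., 1] > logits[..., 0]).float()
        return mask  # (tokens, n_experts) binary activation mask
```

The returned mask multiplies the expert outputs, so tokens that switch off experts skip their computation; this is the fine-grained, per-token activation control the abstract attributes to OTP.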
| Original language | English |
|---|---|
| Journal | IEEE Transactions on Pattern Analysis and Machine Intelligence |
| State | Accepted/In press - 2026 |
Keywords
- Mixture-of-Experts
- model compression
- multimodal large language model
- pruning
- quantization