Communication-Efficient and Personalized Federated LLM Fine-Tuning via Tri-Matrix Adaptation
Published in: To be published, 2025
This paper presents a tri-matrix adaptation framework for communication-efficient, personalized federated fine-tuning of LLMs. Key contributions include:
- A tri-matrix parameter decomposition that reduces communication costs by 60%
- A personalized adaptation module that enhances model performance while preserving data privacy
- An efficient cross-device collaborative training framework supporting large-scale distributed deployment
- Validation in vertical domains such as healthcare and finance
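To illustrate why a factorized adapter cuts federated communication, the sketch below counts the parameters exchanged per round for a hypothetical three-matrix decomposition of a weight update, ΔW ≈ A·B·C, versus sending the full weight matrix. The layer size, rank, and factorization shape here are illustrative assumptions, not the paper's exact design.

```python
import numpy as np

# Hypothetical tri-matrix adapter: the frozen base weight W is augmented by a
# low-rank update delta_W = A @ B @ C. Only the small factors A, B, C are
# trained and communicated each federated round; W never leaves the client.
d_out, d_in, r = 4096, 4096, 8   # assumed layer dimensions and adapter rank

W = np.zeros((d_out, d_in))           # frozen base weight (stays local)
A = np.random.randn(d_out, r) * 0.01  # tall factor, shared with the server
B = np.random.randn(r, r) * 0.01      # tiny middle factor (e.g. personalized)
C = np.random.randn(r, d_in) * 0.01   # wide factor, shared with the server

delta_W = A @ B @ C                   # effective low-rank weight update
assert delta_W.shape == W.shape

full_params = W.size
adapter_params = A.size + B.size + C.size
print(f"communicated per round: {adapter_params:,} of {full_params:,} "
      f"({100 * adapter_params / full_params:.2f}%)")
```

With these assumed shapes the three factors hold well under 1% of the full matrix's parameters, which is the kind of reduction that makes per-round upload/download feasible on bandwidth-limited clients.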
Results show that the framework significantly reduces communication overhead in federated learning while maintaining model performance, providing an effective solution for privacy-sensitive scenarios.
Recommended citation: Yongle Li, Bo Liu, Sheng Huang, Zheng Zhang, Xiaotong Yuan, Richang Hong. (2024). "Communication-Efficient and Personalized Federated LLM Fine-Tuning via Tri-Matrix Adaptation".
Download Paper