📝 Publications

REMEDY: Recipe Merging Dynamics in Large Vision-Language Models.
Didi Zhu, Yibing Song, Tao Shen, Ziyu Zhao, Jinluan Yang, Min Zhang, Chao Wu
- First exploration of the LoRA fusion problem in Multimodal Large Language Models (MLLMs).
- Proposed a dynamic fusion scheme that enhances the zero-shot generalization capability of MLLMs.

Model Tailor: Mitigating Catastrophic Forgetting in Multi-modal Large Language Models.
Didi Zhu, Zhongyisun Sun, Zexi Li, Tao Shen, Ke Yan, Shouhong Ding, Chao Wu, Kun Kuang
- Presented the first comprehensive study of catastrophic forgetting in MLLMs such as InstructBLIP and LLaVA.
- Addressed the issue with a training-free model grafting technique.

Neural Collapse Anchored Prompt Tuning for Generalizable Vision-Language Models.
Didi Zhu, Zexi Li, Min Zhang, Junkun Yuan, Jiashuo Liu, Kun Kuang, Chao Wu
- The first exploration of large vision-language models through the lens of neural collapse in deep learning theory.
- Tackled class imbalance in generalization tasks for large vision-language models by leveraging neural collapse theory.

Universal Domain Adaptation via Compressive Attention Matching.
Didi Zhu, Yinchuan Li, Junkun Yuan, Zexi Li, Kun Kuang, Chao Wu
- Addressed inconsistent source-target label spaces in Universal Domain Adaptation by directly exploiting the self-attention mechanism in ViT.

Generalized Universal Domain Adaptation with Generative Flow Networks.
Didi Zhu, Yinchuan Li, Yunfeng Shao, Jianye Hao, Fei Wu, Kun Kuang, Jun Xiao, Chao Wu
- Introduced Generalized Universal Domain Adaptation (GUDA), a comprehensive problem setting that unifies all Domain Adaptation sub-problems involving label heterogeneity.
- Implemented an exploration-aware active learning strategy based on Generative Flow Networks to effectively address GUDA.
ZeroFlow: Overcoming Catastrophic Forgetting is Easier than You Think, Tao Feng, Wei Li, Didi Zhu, Hangjie Yuan, Wendi Zheng, Dan Zhang, Jie Tang. ICML 2025
Learn from Downstream and Be Yourself in Multimodal Large Language Model Fine-Tuning, Wenke Huang, Jian Liang, Zekun Shi, Didi Zhu, Guancheng Wan, He Li, Bo Du, Dacheng Tao, Mang Ye. ICML 2025
Be Confident: Uncovering Overfitting in MLLM Multi-Task Tuning, Wenke Huang, Jian Liang, Guancheng Wan, Didi Zhu, He Li, Jiawei Shao, Mang Ye, Bo Du, Dacheng Tao. ICML 2025
ERICT: Enhancing Robustness by Identifying Concept Tokens in Zero-Shot Vision Language Models, Xinpeng Dong, Min Zhang, Didi Zhu, Ye Jun Jian, Zhang Keli, Aimin Zhou, Fei Wu, Kun Kuang. ICLR 2025
Mitigating the Backdoor Effect for Multi-Task Model Merging via Safety-Aware Subspace, Jinluan Yang, Anke Tang, Didi Zhu, Zhengyu Chen, Li Shen, Fei Wu. ICLR 2025
Merging LoRAs like Playing LEGO: Pushing the Modularity of LoRA to Extremes through Rank-Wise Clustering, Ziyu Zhao, Tao Shen, Didi Zhu, Zexi Li, Jing Su, Xuwu Wang, Kun Kuang, Fei Wu. NeurIPS 2024 Workshop
Improving Group Connectivity for Generalization of Federated Deep Learning, Zexi Li, Jie Lin, Zhiqi Li, Didi Zhu, Rui Ye, Tao Shen, Tao Lin, Chao Wu. KDD 2023
Quantitatively Measuring and Contrastively Exploring Heterogeneity for Domain Generalization, Yunze Tong, Junkun Yuan, Min Zhang, Didi Zhu, Keli Zhang, Fei Wu, Kun Kuang. IEEE Transactions on Big Data
Towards Effective Clustered Federated Learning: A Peer-to-peer Framework with Adaptive Neighbor Matching, Zexi Li, Jiaxun Lu, Shuang Luo, Didi Zhu, Yunfeng Shao, Yinchuan Li, Zhimeng Zhang, Yongheng Wang, Chao Wu. IJCAI 2022 Workshop
Ensemble Federated Adversarial Training with Non-IID Data, Shuang Luo, Didi Zhu, Zexi Li, Chao Wu.