FedGenAI-IJCAI 2025: International Workshop on Federated Learning with Generative AI, in Conjunction with IJCAI 2025
Link: https://federated-learning.org/FedGenAI-ijcai-2025/
[Call for Papers]
Generative AI (GenAI), particularly large language models (LLMs) like ChatGPT, has demonstrated transformative potential across diverse domains. However, deploying these models in real-world applications presents critical challenges in distributed model management, including data privacy, efficiency, and scalability. Training foundation models (FMs) is inherently data- and resource-intensive, traditionally relying on centralized methods that conflict with privacy regulations and real-world constraints. In decentralized settings, organizations must navigate fragmented training data, high computational demands, and stringent regulatory frameworks (e.g., GDPR) that limit data sharing.

Federated Learning (FL) offers a compelling solution by enabling collaborative learning across distributed data sources while preserving privacy. As GenAI continues to reshape AI applications, FL is becoming increasingly essential for ensuring secure, scalable, and decentralized AI development. By allowing data owners to collaboratively train models without sharing raw data, Federated Generative AI (FedGenAI) bridges the gap between the power of foundation models and the need for privacy-preserving, distributed learning. Advancements in FL methodologies tailored for GenAI can unlock new opportunities for efficient model training, personalized adaptation, and responsible AI deployment while mitigating privacy risks and computational constraints.

Foundation models like GPT-4, with their vast knowledge and emergent capabilities, have achieved remarkable success in natural language processing and computer vision. However, fully leveraging their potential in decentralized environments requires addressing challenges such as limited computing resources, data privacy concerns, model heterogeneity, and proprietary ownership.
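The collaborative-training-without-raw-data-sharing mechanism described above can be illustrated with a minimal federated averaging (FedAvg) sketch: clients train locally and send only model parameters to a server, which aggregates them weighted by local dataset size. The linear model, synthetic data, and hyperparameters below are illustrative assumptions for exposition, not part of this call.

```python
# Minimal FedAvg sketch: raw data never leaves the clients; only model
# parameters are exchanged. Model, data, and hyperparameters are illustrative.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: gradient descent on a linear model (MSE)."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # MSE gradient
        w -= lr * grad
    return w

def fedavg_round(global_w, client_data):
    """Server step: aggregate client updates weighted by local dataset size."""
    updates, sizes = [], []
    for X, y in client_data:
        updates.append(local_update(global_w, X, y))
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=np.array(sizes, dtype=float))

# Two clients hold disjoint samples from the same relation y = 2x.
rng = np.random.default_rng(0)
clients = [(X := rng.normal(size=(50, 1)), 2.0 * X[:, 0]) for _ in range(2)]

w = np.zeros(1)
for _ in range(20):                # communication rounds
    w = fedavg_round(w, clients)   # only parameters cross the network
print(round(float(w[0]), 2))       # converges toward the true weight 2.0
```

In a real FedGenAI setting the "model" would be a foundation model (or an adapter/prompt over one), and the aggregation would typically be combined with secure aggregation or differential privacy, but the communication pattern is the same.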
Federated Transfer Learning (FTL)—the integration of FL and transfer learning—offers promising solutions by enabling efficient model adaptation without compromising data privacy. The concept of FTL-FM, which applies FTL to foundation models, has gained significant traction in both academia and industry. As the intersection of federated learning and generative AI (FedGenAI) remains underexplored, this workshop aims to fill that gap. We invite original research contributions, position papers, and work-in-progress reports to advance our understanding of privacy-preserving, scalable, and decentralized generative AI. By bringing together researchers, students, and industry professionals, FedGenAI provides a unique platform to discuss the latest advancements, share insights, and shape the future of collaborative, privacy-conscious AI development.

This workshop aims to bring together academic researchers and industry practitioners to address open issues in the interdisciplinary research area of FedGenAI. For industry participants, we intend to create a forum to communicate problems that are practically relevant. For academic participants, we hope to make it easier for them to become productive in this area. The workshop will focus on the theme of combining FL with GenAI to open up opportunities for addressing new challenges.
The topics of interest include but are not limited to the following:

Theory and algorithmic foundations:
- Impact of heterogeneity in FL of GenAI
- Multi-stage model training (e.g., base model + fine-tuning)
- Optimization advances in FL (e.g., beyond first-order and local methods)
- Prompt tuning and design in federated settings
- Self-supervised learning in federated settings
- Federated in-context learning
- Federated neuro-symbolic learning

Leveraging foundation models to improve federated learning:
- Adaptive aggregation strategies for FL in heterogeneous environments
- GenAI-enhanced FL knowledge distillation
- Overcoming data interoperability challenges using GenAI
- Personalization of FL with GenAI

Federated learning for training and tuning foundation models:
- Fairness, bias, and interpretability challenges in FL with foundation models
- Federated transfer learning with GenAI
- FL-empowered multi-agent foundation model systems
- FL techniques for training large-scale foundation models
- Hardware for FL with foundation models
- Optimization algorithms for federated training of foundation models
- Privacy-preserving mechanisms in FL with foundation models
- Resource-efficient FL with foundation models
- Security and robustness considerations in FL with foundation models
- Systems and infrastructure for FL with foundation models
- Vertical federated learning with GenAI
- Vulnerabilities of FL with GenAI

[Submission Instructions]
Each submission can be up to 7 pages of content plus up to 2 additional pages of references and acknowledgements. Submitted papers must be written in English and in PDF format according to the IJCAI'25 template (https://www.ijcai.org/authors_kit). All submitted papers will undergo double-blind peer review for novelty, technical quality, and impact. Submissions must not contain author details. Submissions will be accepted via the EasyChair submission website.
Based on the requirements of IJCAI'25, at least one author of each accepted paper must travel to the IJCAI venue in person. In addition, submitting the same paper to more than one IJCAI workshop is forbidden.

EasyChair submission site: https://easychair.org/conferences/?conf=fedgenai-ijcai-25
For enquiries, please email: fedgenai-ijcai-25@easychair.org

[Co-Chairs]
- Jindong Wang (William & Mary)
- Xiaohu Wu (BUPT)
- Lingjuan Lyu (Sony AI)
- Dimitrios Dimitriadis (Amazon)
- Xiaoxiao Li (UBC)
- Han Yu (NTU)