Multimodal KDD 2023: International Workshop on Multimodal Learning - 2023 Theme: Multimodal Learning with Foundation Models (Jointly with SIGKDD'23)
Link: https://multimodal-kdd-2023.github.io/
Call For Papers | |||||||||||||||
[Call for Papers] Multimodal KDD 2023 - International Workshop on Multimodal Learning
2023 Theme: Multimodal Learning with Foundation Models (Jointly with SIGKDD'23)
Webpage: https://multimodal-kdd-2023.github.io/

Our workshop will provide a platform to discuss the latest advances and trends in theory, methodologies, and applications in the field of multimodal learning. This year's theme is the use of foundation models. Foundation models such as BERT, T5, LLaMA, and GPT-4, trained on massive data collections, have significantly revolutionized the field of natural language processing (NLP). Using such foundation models to solve NLP tasks represents a fundamental paradigm shift, especially given their ability to integrate knowledge from other domains such as computer vision (DALL-E, CLIP), retrieval, knowledge graphs, and more. Moreover, foundation models have brought fundamental changes to the multimodal problem setting, especially when integrating text or images with graphs, time series, and other forms of structured data. The workshop therefore focuses on utilizing these foundation models and integrating multiple modalities. Although the workshop may also include discussions and papers on general multimodal learning problems, greater emphasis will be given to work that utilizes recently developed foundation models. Our goal is to explore and showcase innovative ways in which multimodal learning and data fusion can be employed, with particular emphasis on leveraging the capabilities of foundation models for these purposes.

The workshop topics include, but are not limited to:
- Multimodal data generation
- Multimodal data preprocessing and feature engineering
- Multimodal data fusion
- Multimodal self-supervised and/or unsupervised learning
- Multimodal learning with noisy data
- Representation learning for multimodal data
- Multimodal transfer learning
- Multimodal zero-shot learning with foundation models
- Biases in multimodal learning
- Explainable multimodal learning
- Multimodal generative AI
- Trustworthy multimodal learning
- Large-scale multimodal learning
- Responsible multimodal learning
- Applications of multimodal learning (e.g., finance, healthcare, social media, climate)

Important Dates:
- June 9th, 2023: Paper Submission
- July 10th, 2023: Paper Acceptance Notification
- July 24th, 2023: Camera-Ready Submission
- August 7th, 2023: Workshop Date

Submission Guidelines: https://multimodal-kdd-2023.github.io/#guidelines

Organizers: Yuan Ling (Amazon), Fanyou Wu (Amazon), Shujing Dong (Amazon), Yarong Feng (Amazon), George Karypis (Univ. of Minnesota / Amazon), Chandan K. Reddy (Virginia Tech / Amazon)

Thank you,
Multimodal KDD 2023
kdd2023-ws-multimodal@amazon.com