Multimodal Pretraining, Adaptation, and Generation for Recommendation: A Survey
May 28, 2024: Our survey was accepted to KDD '24, together with our tutorial Multimodal Pretraining, Adaptation, and Generation for Recommendation: A Tutorial.
Qijiong LIU*, Jieming ZHU*#, Yanting YANG, Quanyu DAI, Zhaocheng DU, Xiao-Ming WU, Zhou ZHAO, Rui ZHANG, Zhenhua DONG
*Equal contribution. Correspondence to: Jieming Zhu.
[Paper]
Abstract
Personalized recommendation serves as a ubiquitous channel for users to discover information or items tailored to their interests. However, traditional recommendation models primarily rely on unique IDs and categorical features for user-item matching, potentially overlooking the nuanced essence of raw item contents across multiple modalities such as text, image, audio, and video. This underutilization of multimodal data poses a limitation to recommender systems, especially in multimedia services like news, music, and short-video platforms. The recent advancements in pretrained multimodal models offer new opportunities and challenges in developing content-aware recommender systems. This survey seeks to provide a comprehensive exploration of the latest advancements and future trajectories in multimodal pretraining, adaptation, and generation techniques, as well as their applications to recommender systems. Furthermore, we discuss open challenges and opportunities for future research in this domain. We hope that this survey, along with our tutorial materials, will inspire further research efforts to advance this evolving landscape.
Citation
@inproceedings{liu2024multimodal,
  title     = {Multimodal Pretraining, Adaptation, and Generation for Recommendation: A Survey},
  author    = {Liu, Qijiong and Zhu, Jieming and Yang, Yanting and Dai, Quanyu and Du, Zhaocheng and Wu, Xiao-Ming and Zhao, Zhou and Zhang, Rui and Dong, Zhenhua},
  booktitle = {Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining},
  year      = {2024}
}