Multimodal Pretraining, Adaptation, and Generation for Recommendation: A Survey
Discrete Semantic Tokenization for Deep CTR Prediction
Lightweight Modality Adaptation to Sequential Recommendation via Correlation Supervision
Learning Category Trees for ID-Based Recommendation: Exploring the Power of Differentiable Vector Quantization
ONCE: Boosting Content-based Recommendation with Both Open- and Closed-source Large Language Models

Aug. 25th, 2024: Featured at the Workshop on Generative AI for Recommender Systems and Personalization!

Oct. 20th, 2023: Accepted by WSDM ’24. Our first version (i.e., GENRE) discussed the use of closed-source LLMs (e.g., GPT-3.5) in recommender systems, while this version (i.e., ONCE) further combines open-source LLMs (e.g., LLaMA) with closed-source LLMs.

Qijiong LIU, Nuo CHEN, Tetsuya SAKAI, and Xiao-Ming WU#
[Code] [Paper]

Benchmarking News Recommendation in the Era of Green AI

Mar. 6th, 2024: Accepted by TheWebConf ’24. This is a revised and condensed version of our previous work, Only Encode Once: Making Content-based News Recommender Greener. While the core ideas and results remain consistent, the presentation and scope have been modified for brevity and clarity. For full details and extended discussions, please refer to the original long paper here.

Qijiong LIU*, Jieming ZHU*, Quanyu DAI, Xiao-Ming WU#
*Equal contribution (co-first authors).
[Code] [Paper]

A First Look at LLM-powered Generative News Recommendation