Integrating Large Language Models into Recommender Systems: Toward Context-Aware Personalization, Dynamic Profiling, and Explainability
PhD Dissertation Proposal by: Danial Ebrat
Presenter: Danial Ebrat
Date: Wednesday, November 12, 2025
Time: 11:00 AM
Location: Essex Hall, Room 122
Abstract: Recommender systems play a critical role in personalizing digital experiences, yet they continue to face challenges related to data sparsity, dynamic user preferences, and model interpretability. The recent emergence of large language models (LLMs) provides new opportunities to address these limitations by capturing semantic context, generating adaptive representations, and facilitating human-understandable explanations. This proposal explores the integration of LLMs into classical and hybrid recommender system paradigms to create more context-aware, dynamic, and explainable recommendation pipelines. Specifically, it investigates methods for constructing evolving user profiles through textual understanding, designing vector-based embeddings that unify user and item semantics, and developing customized loss functions that better align latent representations with user intent. The proposed work also examines mechanisms for improving the transparency and explainability of recommendation outcomes, thereby supporting user trust and responsible model deployment. By bridging traditional recommendation techniques with LLM-driven contextualization, this proposal aims to advance the next generation of intelligent, adaptive, and interpretable recommender systems.
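To make the embedding-alignment idea concrete, the Python sketch below shows one plausible way LLM-derived textual representations of users and items could be projected into a shared vector space and trained with a contrastive loss. It is an illustrative sketch only, not the architecture proposed in this work: the encoder stub encode_texts, the AlignmentHead module, and contrastive_alignment_loss are hypothetical names introduced here, and a real pipeline would replace the stub with an actual LLM or sentence-embedding model.

# Illustrative sketch (not the proposal's implementation): aligning LLM-derived
# user-profile embeddings with item embeddings via a contrastive objective.
import torch
import torch.nn.functional as F

def encode_texts(texts: list[str], dim: int = 384) -> torch.Tensor:
    """Placeholder for a frozen LLM text encoder; returns deterministic pseudo-embeddings."""
    gen = torch.Generator().manual_seed(0)
    # In practice this would call a real embedding model instead of random vectors.
    return torch.randn(len(texts), dim, generator=gen)

class AlignmentHead(torch.nn.Module):
    """Projection heads mapping user and item text embeddings into a shared space."""
    def __init__(self, dim: int = 384, proj: int = 128):
        super().__init__()
        self.user_proj = torch.nn.Linear(dim, proj)
        self.item_proj = torch.nn.Linear(dim, proj)

    def forward(self, user_emb: torch.Tensor, item_emb: torch.Tensor):
        u = F.normalize(self.user_proj(user_emb), dim=-1)
        v = F.normalize(self.item_proj(item_emb), dim=-1)
        return u, v

def contrastive_alignment_loss(u: torch.Tensor, v: torch.Tensor, temperature: float = 0.07):
    """InfoNCE-style loss: each user's positive item sits on the diagonal of the score matrix."""
    logits = u @ v.t() / temperature
    targets = torch.arange(u.size(0))
    return F.cross_entropy(logits, targets)

# Toy usage: textual user profiles paired with one positively rated item each.
user_profiles = [
    "Enjoys character-driven science fiction novels and long-form podcasts about space.",
    "Prefers quick weeknight vegetarian recipes and budget kitchen gear.",
]
item_texts = [
    "A slow-burn science fiction novel exploring a generation ship's politics.",
    "A 20-minute one-pan vegetarian curry recipe for beginners.",
]

user_emb = encode_texts(user_profiles)
item_emb = encode_texts(item_texts)

head = AlignmentHead()
u, v = head(user_emb, item_emb)
loss = contrastive_alignment_loss(u, v)
loss.backward()  # gradients reach only the projection heads; the encoder stays frozen
print(f"alignment loss: {loss.item():.4f}")

The diagonal-positive contrastive objective is only one possible choice; the customized loss functions described above could, for instance, weight user-item pairs differently, but that is speculation beyond what the abstract states.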
Keywords: Recommender Systems, User Simulation, Large Language Models, Vector Embeddings, Explainability
External Reader: Dr. Esraa Abdelhalim
Internal Reader: Dr. Jianguo Lu
Internal Reader: Dr. Ikjot Saini
Advisor(s): Dr. Luis Rueda