Understanding and Scaling Collaborative Filtering Optimization from the Perspective of Matrix Rank

D Loveland, X Wu, T Zhao, D Koutra, N Shah… - arXiv preprint arXiv:2410.23300, 2024 - arxiv.org
Collaborative Filtering (CF) methods dominate real-world recommender systems given their ability to learn high-quality, sparse ID-embedding tables that effectively capture user preferences. These tables scale linearly with the number of users and items, and are trained to ensure high similarity between embeddings of interacted user-item pairs, while maintaining low similarity for non-interacted pairs. Despite their high performance, encouraging dispersion for non-interacted pairs necessitates expensive regularization (e.g., negative sampling), hurting runtime and scalability. Existing research tends to address these challenges by simplifying the learning process, either by reducing model complexity or sampling data, trading performance for runtime. In this work, we move beyond model-level modifications and study the properties of the embedding tables under different learning strategies. Through theoretical analysis, we find that the singular values of the embedding tables are intrinsically linked to different CF loss functions. These findings are empirically validated on real-world datasets, demonstrating the practical benefits of higher stable rank, a continuous version of matrix rank which encodes the distribution of singular values. Based on these insights, we propose an efficient warm-start strategy that regularizes the stable rank of the user and item embeddings. We show that stable rank regularization during early training phases can promote higher-quality embeddings, resulting in training speed improvements of up to 66%. Additionally, stable rank regularization can act as a proxy for negative sampling, allowing for performance gains of up to 21% over loss functions with small negative sampling ratios. Overall, our analysis unifies current CF methods under a new perspective, their optimization of stable rank, motivating a flexible regularization method.
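
To make the central quantity concrete, below is a minimal sketch of the stable rank computation and a warm-start-style penalty on the user and item embedding tables. Stable rank is defined in the standard way as the squared Frobenius norm divided by the squared spectral norm (i.e., the sum of squared singular values over the largest squared singular value). The use of PyTorch, the penalty weight, and the helper names (stable_rank, stable_rank_penalty) are illustrative assumptions, not the paper's exact implementation.

import torch

def stable_rank(embeddings: torch.Tensor) -> torch.Tensor:
    # Stable rank: ||A||_F^2 / ||A||_2^2, a continuous relaxation of matrix
    # rank that reflects how evenly the singular values are distributed.
    frob_sq = (embeddings ** 2).sum()                      # sum of squared singular values
    spec = torch.linalg.matrix_norm(embeddings, ord=2)     # largest singular value
    return frob_sq / (spec ** 2 + 1e-12)

def stable_rank_penalty(user_emb: torch.Tensor,
                        item_emb: torch.Tensor,
                        weight: float = 0.1) -> torch.Tensor:
    # Hypothetical regularizer: negated so that minimizing the penalty
    # pushes the stable rank of both embedding tables higher. In a
    # warm-start scheme this term would be added to the CF loss only
    # during early training epochs.
    return -weight * (stable_rank(user_emb) + stable_rank(item_emb))

In this sketch, higher stable rank corresponds to embedding tables whose singular values are more evenly spread, which the abstract identifies as the property linked to better-quality embeddings and to the dispersion normally enforced by negative sampling.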