LPFS: Learnable Polarizing Feature Selection for Click-Through Rate Prediction

Y Guo, Z Liu, J Tan, C Liao, S Yang, L Yuan, D Kong, Z Chen, J Liu
arXiv preprint arXiv:2206.00267, 2022 - arxiv.org
In industry, feature selection is a standard but necessary step to search for an optimal set of informative feature fields for efficient and effective training of deep Click-Through Rate (CTR) models. Most previous works measure the importance of feature fields by their corresponding continuous weights in the model, then remove the feature fields with small weight values. However, removing many features whose weights are small but not exactly zero inevitably hurts model performance and is unfriendly to hot-start model training. There is also no theoretical guarantee that the magnitude of a weight represents importance, so these methods can lead to sub-optimal results. To tackle this problem, we propose a novel Learnable Polarizing Feature Selection (LPFS) method using a smoothed-ℓ0 function from the literature. Furthermore, we extend LPFS to LPFS++ with our newly designed smoothed-ℓ0-like function, which selects a more informative subset of features. LPFS and LPFS++ act as gates inserted at the input of the deep network to control the active and inactive state of each feature. When training finishes, some gates are exactly zero while others are around one. This property is particularly favored by practical hot-start training in industry, because removing the features corresponding to exact-zero gates does not change the model's output, so performance is identical before and after pruning. Experiments show that our methods outperform others by a clear margin, and have achieved strong A/B test results at KuaiShou Technology.
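As a rough illustration of the gating idea described above (a sketch, not the paper's exact formulation: the specific surrogate g(w) = w² / (w² + ε), the field count, and the embedding shapes below are assumptions), each feature field's embedding is scaled by a learnable gate that can train to exactly zero:

```python
import numpy as np

def smoothed_l0_gate(w, eps=1e-2):
    """One common smoothed-l0 surrogate: g(w) = w^2 / (w^2 + eps).
    g(0) is exactly 0, and g(w) -> 1 as |w| grows, polarizing gates
    toward {0, 1}."""
    return w * w / (w * w + eps)

# Hypothetical learned per-field gate parameters for 4 feature fields.
w = np.array([0.0, 0.05, 1.0, -0.8])
gates = smoothed_l0_gate(w)  # field 0 is exactly gated off

# Each field's embedding is multiplied by its gate before entering the
# deep network; fields whose gates reach exactly zero can be removed
# without changing the model's output.
emb = np.random.randn(4, 8)          # (num_fields, embedding_dim)
gated = emb * gates[:, None]
```

Because `gates[0]` is exactly zero (not merely small), dropping field 0 and hot-starting from the remaining weights leaves predictions unchanged, which is the property the abstract highlights.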