ContraBERT: Enhancing Code Pre-Trained Models via Contrastive Learning
2023 IEEE/ACM 45th International Conference on Software …, 2023 (ieeexplore.ieee.org)
Large-scale pre-trained models such as CodeBERT and GraphCodeBERT have earned widespread attention from both academia and industry. Owing to their superior ability in code representation, they have been further applied to multiple downstream tasks such as clone detection, code search, and code translation. However, it has also been observed that these state-of-the-art pre-trained models are susceptible to adversarial attacks: their performance drops significantly under simple perturbations such as renaming variable names. This weakness may be inherited by their downstream models and thereby amplified at an unprecedented scale. To this end, we propose an approach named ContraBERT that aims to improve the robustness of pre-trained models via contrastive learning. Specifically, we design nine kinds of simple and complex data augmentation operators on the programming language (PL) and natural language (NL) data to construct different variants. Furthermore, we continue to train the existing pre-trained models with the masked language modeling (MLM) and contrastive pre-training tasks on the original samples together with their augmented variants to enhance the robustness of the model. The extensive experiments demonstrate that ContraBERT can effectively improve the robustness of the existing pre-trained models. Further study also confirms that these robustness-enhanced models provide improvements over the original models on four popular downstream tasks.
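To make the idea concrete, below is a minimal sketch of the kind of contrastive objective the abstract describes: an original code sample is paired with an augmented variant (e.g., one produced by a variable-renaming operator) and pulled toward it while being pushed away from the other samples in the batch. The function names, the identifier-renaming augmentation, and the InfoNCE-style formulation are illustrative assumptions of ours, not the paper's exact implementation or operator set.

```python
import torch
import torch.nn.functional as F

def rename_variables(code_tokens, var_map):
    """Toy PL augmentation (hypothetical): rename identifiers via var_map.

    Stands in for one of the paper's nine augmentation operators, such as
    variable renaming; the real operators also cover NL (docstring) data.
    """
    return [var_map.get(tok, tok) for tok in code_tokens]

def contrastive_loss(orig_emb, aug_emb, temperature=0.07):
    """InfoNCE-style loss between originals and their augmented variants.

    orig_emb, aug_emb: [batch, dim] pooled encoder representations.
    The positive pair for each sample is its own variant (the diagonal);
    every other sample in the batch serves as a negative.
    """
    orig = F.normalize(orig_emb, dim=-1)
    aug = F.normalize(aug_emb, dim=-1)
    logits = orig @ aug.t() / temperature                    # [batch, batch] similarities
    labels = torch.arange(orig.size(0), device=logits.device)  # positives on the diagonal
    return F.cross_entropy(logits, labels)
```

In continued pre-training, a loss of this form would be combined with the MLM objective (e.g., `total_loss = mlm_loss + contrastive_loss(...)`), so the encoder learns representations that are both predictive of masked tokens and invariant to the augmentations.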