Jul 13, 2024 · We dissected weakness distributions in both larger and smaller models, applying an extensive methodology that encompasses model-specific as well ...
Jul 17, 2024 · The model has a tendency to output nearly identical solutions, repeating them up to 18 times until reaching the maximum output length defined by ...
We assess the quality of generated code using match-based and execution-based metrics, then conduct thematic analysis to develop a taxonomy of nine types of ...
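The snippet above distinguishes match-based from execution-based metrics. A minimal sketch of that contrast, using a toy `add` task and helper names (`exact_match`, `passes_tests`) invented here for illustration, not taken from the paper:

```python
# Hedged sketch: two common ways to score generated code.
# Match-based metrics compare the generated text to a reference solution;
# execution-based metrics run the code against tests and check behavior.

def exact_match(generated: str, reference: str) -> bool:
    """Match-based metric: compare whitespace-normalized source text."""
    norm = lambda s: " ".join(s.split())
    return norm(generated) == norm(reference)

def passes_tests(generated: str, test_snippet: str) -> bool:
    """Execution-based metric: run the code, then run a test against it."""
    env: dict = {}
    try:
        exec(generated, env)      # define the candidate function
        exec(test_snippet, env)   # raises AssertionError on failure
        return True
    except Exception:
        return False

reference = "def add(a, b):\n    return a + b"
candidate = "def add(a, b):\n    return b + a"   # different text, same behavior
test = "assert add(2, 3) == 5"

print(exact_match(candidate, reference))   # False: the texts differ
print(passes_tests(candidate, test))       # True: the behavior is correct
```

The example illustrates why the two metric families can disagree: a solution that is textually unlike the reference can still be functionally correct, which is exactly the gap execution-based evaluation closes.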
Code generation, the task of producing source code from prompts, has seen significant advancements with the advent of pre-trained large language models ...
To bridge this gap, we conduct a systematic study on analyzing the weaknesses based on three state-of-the-art LLMs across three widely-used code generation ...
Publications (24): Deep learning for code generation: a survey · August 2024 · Huangzhao Zhang; Uncovering Weaknesses in Neural Code Generation · July ...
Uncovering Weaknesses in Neural Code Generation. 2024. People: Lian, Xiaoli · Wang, Shuaisong · Ma, Jieping · Liu, Fang · Tan, Xin · Zhang, Li · Shi, Lin ...
We first show that existing models are vulnerable to data-poisoning-based backdoor attacks. We then introduce a simple yet effective attack on neural code ...