In the past couple of years, we have witnessed the emergence of novel large language models (LLMs) reaching breakthrough performance on NLP tasks. These include ChatGPT and Galactica, AI assistants that can produce long, good-quality content that can serve as a seed for authors' work. Because of their recent emergence, the norms around the use of such technology are not yet fully established. Hence, it is important to acknowledge its use and elaborate on how it has been employed.
Specifically, we define three levels of AI assistance usage: insignificant, quotation, and substantial. We group the different use cases according to these three categories and define CEUR-WS's stance on each.
Insignificant. Activities like i) paraphrasing and refining the manuscript content (using Grammarly or other spell checkers) and ii) smart composition (via predictive keyboards) are widely accepted and do not need any acknowledgement. Tool support is allowed if the changes do not add or remove content but rather correct or improve the grammar and spelling of the human-authored text. Paraphrasing and grammar correction should not be applied to whole paragraphs or pages, but rather interactively, sentence by sentence. Do not use generative AI to generate or transcribe the paper's text.
Quotation. AI-generated content such as text and code can be included if it is rendered with quotation mark-up and the AI source of the content is indicated in a caption of the generated content. Such quotations must be clearly distinguishable from the human-authored elements of the paper.
Substantial. Using AI assistants to generate text content, including the elaboration of ideas, algorithms, definitions of terms, and summaries of other papers, is not acceptable. Such generated content would be derived by the AI tool from existing work. Potential issues with this practice concern originality, plagiarism, ownership, and authorship, whose consequences and impact are not yet clear.
Regardless of the cases above, CEUR-WS publishes original work by named authors (human beings), and thus contributions from AI assistants can only be stated in the acknowledgements and/or through suitable references in the original research papers. We require that all authors and workshop editors adhere to these guidelines. Violations will lead to the removal of the published paper or the whole volume, analogous to our procedures for dealing with plagiarism. As a consequence, AI assistants or any other computer programs cannot be named as authors of a paper.
As this technology is under active development, we plan to review this policy continuously in the upcoming months.
This policy section is partly inspired by the "ACL 2023 Policy on AI Writing Assistance" and the "ACM Policy on Authorship".
Related documents: