Seven Years Later: Lessons Learned in Automated Assessment

Authors: Bruno Pereira Cipriano, Pedro Alves



Author Details

Bruno Pereira Cipriano
  • COPELABS, Lusófona University, Lisbon, Portugal
Pedro Alves
  • COPELABS, Lusófona University, Lisbon, Portugal

Cite As

Bruno Pereira Cipriano and Pedro Alves. Seven Years Later: Lessons Learned in Automated Assessment. In 5th International Computer Programming Education Conference (ICPEC 2024). Open Access Series in Informatics (OASIcs), Volume 122, pp. 3:1-3:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)
https://doi.org/10.4230/OASIcs.ICPEC.2024.3

Abstract

Automatic assessment tools (AATs) are software systems used in teaching environments to automatically evaluate code written by students. We have been using such a system since 2017, in multiple courses and across multiple types of evaluation. This paper presents a set of lessons learned from that experience. These recommendations should help teachers and instructors who use, or plan to use, AATs to create assessments that give students useful feedback for improving their work, while reducing the likelihood of unfair evaluations.
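The core mechanism behind the AATs the abstract describes — teacher-written tests that run against student code and turn pass/fail results into a grade — can be sketched in plain Java. This is an illustrative minimal sketch, not the paper's actual tool: the class name `AutoGrader` and the `max` method standing in for a student submission are hypothetical.

```java
// Minimal sketch of automated assessment: teacher-defined checks call the
// student's code, and the grade is the fraction of checks that pass.
public class AutoGrader {

    // Stands in for a method submitted by a student.
    static int max(int a, int b) {
        return a > b ? a : b;
    }

    public static void main(String[] args) {
        int passed = 0;
        final int total = 3;

        // Teacher-written checks, analogous to unit tests in a real AAT.
        if (max(1, 2) == 2) passed++;
        if (max(5, 3) == 5) passed++;
        if (max(-1, -2) == -1) passed++;

        System.out.printf("Passed %d/%d tests (grade: %.0f%%)%n",
                          passed, total, 100.0 * passed / total);
    }
}
```

Real AATs (such as those based on JUnit) replace the hand-rolled checks with a proper test framework and report per-test feedback to the student rather than only an aggregate score.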

ACM Subject Classification
  • Applied computing → Computer-assisted instruction
Keywords
  • learning to program
  • automatic assessment tools
  • unit testing
  • feedback
  • large language models

