Garcia-Dominguez, Antonio (orcid.org/0000-0002-4744-9150) and Beaumont, Tony (2025) Experiences with test-driven elaborated feedback for teaching introductory programming. Computer Science Education. ISSN: 1744-5175
Abstract
**Background and Context:** Computer Science students learning how to code need timely and effective feedback. Delivering such feedback can be challenging in large cohorts with diverse starting points and personal circumstances, given the available resources.

**Objective:** Automated test-driven feedback specific to each assignment can provide rapid feedback on common mistakes, leaving more time for instructors to provide individualized assistance.

**Method:** We developed AutoFeedback, an open-source system combining instructor-designed test suites and feedback templates to deliver immediate, task-specific elaborated feedback on interim work during practical laboratories. AutoFeedback was used in two editions of the first-year programming module at Aston University. Data on student engagement, performance, and reception were collected and analysed through descriptive statistics, Mann-Whitney and Kendall tests, and bottom-up thematic analysis.

**Findings:** Students engaged more consistently, liked receiving immediate feedback and a clear indication of progress, and requested its integration into other modules. Statistical tests showed that interim work improved with further submission attempts in response to feedback. Some students did not know how to respond to occasionally terse feedback, or had tests fail due to irrelevant differences, or pass despite mistakes.

**Implications:** Automated test suites can provide interim feedback, in addition to grading, by driving test results and program outputs through feedback templates tailored by instructors. To be effective, instructors must monitor the student experience, refining tests and feedback templates to address common mistakes and clarify explanations. These refinements must be communicated to students so that they feel supported.
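To make the mechanism concrete, below is a minimal, hypothetical sketch of the core idea: an instructor-designed unit test whose assertion message acts as an elaborated-feedback template, so that a failing test explains the likely mistake rather than only reporting a mismatch. This is not AutoFeedback's actual API; the `Temperature` exercise, method names, and messages are invented for illustration, using plain JUnit 5.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Hypothetical student submission (normally provided by the student).
class Temperature {
    static double celsiusToFahrenheit(double celsius) {
        return celsius * 9.0 / 5.0 + 32.0;
    }
}

// Instructor-designed test suite: each assertion message plays the role
// of an elaborated-feedback template, explaining the likely mistake
// instead of only reporting "expected X but was Y".
class TemperatureTest {

    @Test
    void convertsFreezingPoint() {
        assertEquals(32.0, Temperature.celsiusToFahrenheit(0.0), 0.001,
                "0 °C should convert to 32 °F. Check that you add 32 "
                + "after multiplying by 9/5: adding it first is a "
                + "common mistake.");
    }

    @Test
    void convertsBoilingPoint() {
        assertEquals(212.0, Temperature.celsiusToFahrenheit(100.0), 0.001,
                "100 °C should convert to 212 °F. If you got 180, you "
                + "probably forgot the +32 offset.");
    }
}
```

In a system like AutoFeedback, tests of this kind would run automatically on each interim submission, with the instructor-authored messages surfaced to the student immediately, alongside richer feedback derived from test results and program outputs.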
Metadata
| Item Type: | Article |
|---|---|
| Authors/Creators: | Garcia-Dominguez, Antonio (orcid.org/0000-0002-4744-9150); Beaumont, Tony |
| Copyright, Publisher and Additional Information: | Publisher Copyright: © 2025 Informa UK Limited, trading as Taylor & Francis Group. This is an author-produced version of the published paper. Uploaded in accordance with the University’s Research Publications and Open Access policy. |
| Keywords: | containerization, static analysis, test-driven development, testing education, unit testing |
| Dates: | Published online: 2025 |
| Institution: | The University of York |
| Academic Units: | The University of York > Faculty of Sciences (York) > Computer Science (York) |
| Date Deposited: | 23 Oct 2025 15:00 |
| Last Modified: | 23 Oct 2025 15:00 |
| Published Version: | https://doi.org/10.1080/08993408.2025.2554497 |
| Status: | Published online |
| Refereed: | Yes |
| Identification Number: | 10.1080/08993408.2025.2554497 |
| Related URLs: | |
| Open Archives Initiative ID (OAI ID): | oai:eprints.whiterose.ac.uk:233445 |
Download
Filename: AutoFeedback_experiences_paper_v2_.pdf
Description: AutoFeedback-experiences-paper
Licence: CC-BY 2.5