The Marker’s Apprentice was a marking system used at the University of Nottingham, developed by my supervisor (Dr Colin Higgins) and me. It assessed several thousand student coursework solutions in both the UK and China. It allowed staff to manage courseworks and students to submit their solutions electronically via a web page. The web front end communicated with a back-end server over Java RMI, which ran a pre-defined set of marking ‘tools’. Each tool had its own configuration and measured one aspect of a submission: functional correctness, code style (using PMD), or structure (using custom PMD rulesets). The system produced highly customised results that explained how to fix each mistake and, more importantly, why it was a problem, with links to further reading for any areas in which the student lost marks. For some exercises, resubmissions were allowed, giving students an instant feedback loop: submit, learn from the results, and submit again.
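The system’s actual rulesets are not reproduced here, but a custom PMD ruleset of the kind described is simply an XML file that selects and configures individual rules. A minimal illustrative sketch (the rule choices and thresholds below are my own examples, not the ones the system used):

```xml
<?xml version="1.0"?>
<ruleset name="coursework-style"
         xmlns="http://pmd.sourceforge.net/ruleset/2.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://pmd.sourceforge.net/ruleset/2.0.0
                             https://pmd.sourceforge.io/ruleset_2_0_0.xsd">

    <description>Example style and structure checks for student submissions</description>

    <!-- Reuse a built-in style rule as-is -->
    <rule ref="category/java/codestyle.xml/MethodNamingConventions"/>

    <!-- Reuse a built-in design rule, tightening its threshold -->
    <rule ref="category/java/design.xml/CyclomaticComplexity">
        <properties>
            <property name="methodReportLevel" value="10"/>
        </properties>
    </rule>
</ruleset>
```

Each marking tool could then point at a ruleset like this in its configuration, so the same PMD engine could check different properties for different exercises.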

My supervisor and I wrote several publications on how it worked and its effectiveness; you can read some of them here:

Computer Science Education: ‘Static analysis of programming exercises: Fairness, usefulness and a method for application’. Link:

Nutbrown, S., Higgins, C. and Beesley, S. 2016. Measuring the impact of high quality instant feedback on learning. Practitioner Research in Higher Education Journal, Special Assessment Issue, 10(1).

Beesley, S., Nutbrown, S. and Higgins, C. 2015. Preconceptions surrounding automated assessment - a study of staff and students. Fifth International Assessment in Higher Education Conference, June 2015.

Beesley, S., Nutbrown, S. and Higgins, C. 2014. A framework for facilitating feedback best practices based on a study of staff and students. In HEA STEM Annual Learning and Teaching Conference 2014: Enhancing the STEM Student Journey.

For this work, as well as for my contribution to teaching, I received a Highly Commended PGR Teaching Award in 2015.