Boud, D., & Molloy, E. (2012). Rethinking models of feedback for learning: The challenge of design. Assessment & Evaluation in Higher Education, 38(6), 698-712.
Carless, D., & Boud, D. (2018). The development of student feedback literacy: Enabling uptake of feedback. Assessment & Evaluation in Higher Education, 43(8), 1315-1325. doi:10.1080/02602938.2018.1463354
Goh, R. (2020, February 5). Engaging students in feedback. Retrieved from MOE Singapore intranet website OPAL.
Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81-112.
Lipnevich, A. A., Berg, D. A. G., & Smith, J. K. (2017). Toward a model of student response to feedback. In G. T. L. Brown, & L. R. Harris (Eds.), The handbook of human and social conditions in assessment (pp. 169-185). New York: Routledge.
Sadler, D. R. (1989). Formative assessment and the design of instructional systems. Instructional Science, 18(2), 119-144.
Tan, K. H. K. (2013). A framework for assessment for learning: Implications for feedback practices within and beyond the gap. ISRN Education, 1-6.
Tan, K. H. K. (2014). Assessment feedback practices for enhancing learning. In W. S. Leong, Y. S. Cheng, & K. H. K. Tan (Eds.), Assessment and learning in schools (pp. 129-140). Singapore: Pearson Education South Asia Pte Ltd.
Tan, K. H. K., & Wong, H. M. (2018). Assessment feedback in primary schools in Singapore and beyond. In J. K. Smith, & A. A. Lipnevich (Eds.), The Cambridge handbook of instructional feedback (pp. 123-144). Cambridge: Cambridge University Press.
Carless, D., & Boud, D. (2018). The development of student feedback literacy: Enabling uptake of feedback. Assessment & Evaluation in Higher Education, 43(8), 1315-1325. doi:10.1080/02602938.2018.1463354
Student feedback literacy denotes the understandings, capacities and dispositions needed to make sense of information and use it to enhance work or learning strategies. In this conceptual paper, student responses to feedback are reviewed and a number of barriers to student uptake of feedback are discussed. Four inter-related features are proposed as a framework underpinning students’ feedback literacy: appreciating feedback; making judgments; managing affect; and taking action. Two well-established learning activities, peer feedback and analysing exemplars, are discussed to illustrate how this framework can be operationalized. Some ways in which these two enabling activities can be re-focused more explicitly towards developing students’ feedback literacy are elaborated. Teachers are identified as playing important facilitating roles in promoting student feedback literacy through curriculum design, guidance and coaching. The implications and conclusion summarise recommendations for teaching and set out an agenda for further research.
Lipnevich, A., & Smith, J. (Eds.). (2018). The Cambridge handbook of instructional feedback. Cambridge: Cambridge University Press.
This book brings together leading scholars from around the world to provide their most influential thinking on instructional feedback. The chapters range from academic, in-depth reviews of the research on instructional feedback to a case study on how feedback altered the life-course of one author. Furthermore, it covers critical subject areas - including mathematics, science, music, and even animal training - and addresses learners at various developmental levels. The affective, non-cognitive aspects of feedback are also targeted, such as how learners react emotionally to receiving feedback. The exploration of the theoretical underpinnings of how feedback changes the course of instruction leads to practical advice on how to give such feedback effectively in a variety of diverse contexts. Anyone interested in researching instructional feedback, or providing it in their class or course, will discover why, when, and where instructional feedback is effective and how best to provide it.
Tan, K., & Wong, H. M. (2018). Assessment feedback in primary schools in Singapore and beyond. In A. Lipnevich & J. Smith (Eds.), The Cambridge handbook of instructional feedback (pp. 123-144). Cambridge: Cambridge University Press. doi:10.1017/9781316832134.008
This chapter examined assessment feedback practices in primary schools against the backdrop of an instructional context of preparing young learners for high-stakes examinations. Faced with such pressures, there is a temptation for teachers to focus on imposing corrective feedback on students to address deficiencies, rather than to enable improvement. Such imposition also leaves little room for students to clarify the intent and meaning of feedback, rendering the feedback information as an extra burden or correction to be dealt with in addition to assessment tasks. A trifecta of methods is recommended for assessment feedback to be practiced, rather than merely provided, in a more constructive fashion for primary students to enhance learning. These methods create a framework of clear and appropriate standards, assessment tasks designed to generate and apply feedback, and a practice of using feedback to engage in dialogue with students on understanding and using feedback information for their learning. In addition, we argue that it is vitally important for students to be supported to become active agents and beneficiaries of assessment feedback. A combination of enlightened teachers and empowered students engaged in dialogue on student learning can then ensue.
Munshi, C., & Deneen, C.C. (2018). Technology-Enhanced Feedback. In A. Lipnevich & J. Smith (Eds.), The Cambridge handbook of instructional feedback (pp. 335-356). Cambridge: Cambridge University Press. doi:10.1017/9781316832134.017
The increasing use of technology has influenced the enactment of assessment. Technology Enhanced Feedback (TEF) emphasises the role of feedback within assessment processes that leverage technology. The integration of TEF systems into curricula has resulted in a variety of practices that may be discussed in terms of their success and applicability. Despite the purported merits of TEF systems, sustained adoption is rare. This chapter explores issues in TEF through a systematic review of the literature and makes suggestions on ways forward. Some historical context on TEF is first provided. This includes a discussion of the impetus for TEF, how it has progressed into modern use, and several unresolved issues. The chapter then presents the methodology for the review process, focusing on issues of scope, evaluation, and analysis. The results of the review are then discussed in terms of core, emergent themes, current trends and issues in TEF research and practice, as well as their relationships to one another. Key points include the implications of who is driving the dialogue around TEF, what is known to work well in TEF, and where there seem to be persistent and significant gaps in research and practice. This chapter concludes with suggestions for researchers and practitioners wishing to explore and utilize TEF, and in doing so move forward the connected fields of research and practice.
Van der Kleij, F., Adie, L., & Cumming, J. (2017). Using video technology to enable student voice in assessment feedback. British Journal of Educational Technology, 48(5), 1092-1105.
Students’ voices have been remarkably absent in feedback research, yet research shows that the way students engage with feedback strongly influences its effect on learning. Feedback research has mainly focused on aspects of the feedback message between a sender and receiver, with little consideration of the positioning of students in this process. This article (a) provides an overview of the literature about feedback in education and the role of the student in these processes and (b) provides findings from a pilot project that explored the use of video technology as a self‐reflection tool for six teachers and six students to capture assessment interactions and give students a voice in feedback conversations. The pilot employed iPads to facilitate video‐aided self‐reflection on feedback practices. The results suggest that not only is video a powerful tool for teacher reflection on their feedback practices, it can also provide better understanding of the student perspective in feedback conversations. Importantly, involving students themselves in video‐stimulated recall of feedback conversations has the potential to contribute to students’ self‐reflection on their involvement in the feedback process, encouraging them to make their voices heard and participate in feedback as a dialogic practice.
Denton, P., & McIlroy, D. (2017). Response of students to statement bank feedback: the impact of assessment literacy on performances in summative tasks. Assessment & Evaluation in Higher Education, 1-10.
Efficiency gains arising from the use of electronic marking tools that allow tutors to select comments from a statement bank are well documented, but how students use this type of feedback remains under explored. Natural science students (N = 161) were emailed feedback reports on a spreadsheet assessment that included an invitation to reply placed at different positions. Outcomes suggest that students either read feedback completely, or not at all. Although mean marks for repliers (M = 75.5%, N = 39) and non-repliers (M = 57.2%, N = 68) were significantly different (p < .01), these two groups possessed equivalent attendance records and similar submission rates and performances in a contemporaneous formatively assessed laboratory report. Notably, average marks for a follow-up summative laboratory report, using the same assessment criteria as the formative task, were 10% higher for students who replied to the original invite. It is concluded that the repliers represent a group of assessment literate students, and that statement bank feedback can foster learning: a simple ‘fire’ analogy for feedback is advanced that advocates high-quality information on progress (fuel) and a curricular atmosphere conducive to learning (oxygen). However, only if students are assessment literate (ignition) will feedback illuminate.
O’Donovan, B., Rust, C., & Price, M. (2016). A scholarly approach to solving the feedback dilemma in practice. Assessment & Evaluation in Higher Education, 41(6), 938-949.
It is clear from the literature that feedback is potentially the most potent part of the assessment cycle when it comes to improving further student learning. However, for some time, there has been a growing amount of research evidence that much feedback practice does not fulfil this potential to influence future student learning because it fails in a host of different ways. This dilemma of the disjuncture between theory and practice has been increasingly highlighted by the UK National Student Survey results. This paper uses a model of the assessment process cycle to frame understandings drawn from the literature, and argues that the problem with much current practice resides largely in a failure to effectively engage students with feedback. The paper goes on to explore how best to effectively engage students with assessment feedback, with evidenced examples of feedback strategies that have successfully overcome this problem.
Crimmins, G., Nash, G., Oprescu, F., Liebergreen, M., Turley, J., Bond, R., & Dayton, J. (2016). A written, reflective and dialogic strategy for assessment feedback that can enhance student/teacher relationships. Assessment & Evaluation in Higher Education, 41(1), 141-153.
In response to the shortcomings of current assessment feedback practice, this paper presents the results of a study designed to examine students’ and teachers’ experience of engaging in a written, reflective and dialogic feedback (WRDF) strategy. The strategy was designed to enhance the learning experience of students undertaking a large first-year core course at a regional Australian university in semester 2, 2012. The evaluation consisted of three components: student surveys pre- and post-WRDF; a student focus group post-WRDF; and a teacher survey post-WRDF. Participating students’ and teachers’ perceptions of the WRDF assessment feedback suggested that students value feedback highly, and show a preference for feedback combining written, reflective and dialogic processes. The research findings suggest that the WRDF framework can be utilized to address the immediate, practical problem of students’ and teachers’ dissatisfaction with the practice of assessment feedback. Thus, WRDF may be used to nurture teacher/student relationships and enhance the learning process. Although a relatively intensive process, the WRDF strategy can serve an integral role in enhancing feedback practices and supporting students.
Lipnevich, A., McCallen, L., Miles, K., & Smith, J. (2014). Mind the gap! Students' use of exemplars and detailed rubrics as formative assessment. Instructional Science, 42(4), 539-559. Retrieved from http://www.jstor.org/stable/43575435
The current study examined efficient modes for providing standardized feedback to improve performance on an assignment for a second year college class involving writing a brief research proposal. Two forms of standardized feedback (detailed rubric and proposal exemplars) were utilized in an experimental design with undergraduate students (N = 100) at three urban college campuses. Students completed a draft of a proposal as part of their course requirements and were then randomly assigned to receive a detailed rubric, proposal exemplars, or a rubric and proposal exemplars for use in revising their work. Analyses of students’ writing from first draft to second draft indicated that all three conditions led to improvements in writing that were significant and strong in terms of effect size, with the stand-alone detailed rubric leading to the greatest improvement. Follow-up focus groups with students indicated that a stand-alone rubric potentially engages greater mindfulness on the part of the student. Practical implications are discussed.
Murtagh, L. (2014). The motivational paradox of feedback: Teacher and student perceptions. The Curriculum Journal, 25(4), 516-541.
The notion that future performance can be affected by information about previous performance is often expressed in terms of ‘closing the gap’. Feedback has long been recognised as a mechanism through which teaching and learning may be influenced. The current wave of support in the United Kingdom for assessment for learning echoes these sentiments. This paper examines the feedback strategies employed by two experienced literacy practitioners in England. Using data gathered from field observations, interviews and documentary sources, the paper presents evidence of espoused practice associated with feedback, demonstrating that whilst teachers may claim that they make effective use of some feedback strategies to support pupils’ learning and motivation, this is not supported by empirical data. The paper also identifies that whilst some teachers aim to mark every piece of pupils’ written work for perceived motivational benefits, such a strategy can undermine pupils’ intrinsic motivation and lead to a culture of over-dependency, whereby the locus of control with regard to feedback lies solely with the teacher. The paper concludes by exploring some possible implications for practice with regard to the provision of written feedback in particular.
Lipnevich, A. A., McCallen, L., & Smith, J. K. (2013). School leaders’ perspectives on the effectiveness of feedback messages. Assessment Matters, 5, 74-94.
Feedback on students’ written assignments has been deemed critical for improvement. Although teachers’ and students’ views on feedback have been examined, school leaders’ perceptions of what constitutes effective feedback remain unclear. This study investigates school leaders’ perceived quality of feedback that a teacher may provide, with teacher responses formulated based on Hattie and Timperley’s (2007) typology of feedback. We randomly assigned school leaders (n = 103) to five experimental conditions based on Hattie and Timperley’s types of feedback (task-level, process-level, self-regulation-level, person-level/praise, and person-level/criticism), and asked them to rate the quality of the feedback. The results revealed that school leaders rated task-level feedback as most effective, followed by person-level/criticism feedback. Person-level/praise was deemed least effective in improving the quality of students’ writing. Theoretical and practical implications are discussed.
Lipnevich, A. A., & Smith, J. K. (2009). Effects of differential feedback on students’ examination performance. Journal of Experimental Psychology: Applied, 15(4), 319.
The effects of feedback on performance and factors associated with it were examined in a large introductory psychology course. The experiment involved college students (N = 464) working on an essay examination under 3 conditions: no feedback, detailed feedback that was perceived by participants to be provided by the course instructor, and detailed feedback that was perceived by participants to be computer generated. Additionally, these conditions were crossed with factors of grade (receiving a numerical grade or not) and praise (receiving a statement of praise or not). The task under consideration was a single-question essay examination administered at the beginning of the course. Detailed feedback on the essay, specific to the individual’s work, was found to be strongly related to student improvement in essay scores, with the influence of grades and praise being more complex. Generally, receipt of a tentative grade depressed performance, although this effect was ameliorated if accompanied by a statement of praise. Overall, detailed, descriptive feedback was found to be most effective when given alone, unaccompanied by grades or praise. It was also found that the perceived source of the feedback (the computer or the instructor) had little impact on the results. These findings are consistent with the research literature showing that descriptive feedback, which conveys information on how one performs the task and details ways to overcome difficulties, is far more effective than evaluative feedback, which simply informs students about how well they did.
Lipnevich, A. A., & Smith, J. K. (2009). "I really need feedback to learn": students’ perspectives on the effectiveness of the differential feedback messages. Educational Assessment, Evaluation and Accountability, 21(4), 347.
The current study examined students’ perceptions of the effects of different forms of instructional feedback on their performance, motivation, and emotion. Forty-nine students attending an eastern US university participated in focus group discussions. The groups explored students’ reactions to grades, praise, and computer versus instructor provided feedback, as well as students’ views of the ideal feedback. Students named detailed comments as the most important and useful form of feedback. Grades were deemed to be unnecessary if the goal of an activity was to learn. Students proposed that low grades elicit negative affect and damage the students’ sense of self-efficacy, and high grades decrease motivation and lessen students’ perceived need to improve. Praise was reported to positively affect emotion, but not to be directly conducive to learning.