School Improvement Through Performance Feedback [1 ed.] 9781134381104, 9789026519338


SCHOOL IMPROVEMENT THROUGH PERFORMANCE FEEDBACK

CONTEXTS OF LEARNING Classrooms, Schools and Society Managing Editors: Bert Creemers, GION, Groningen, The Netherlands. David Reynolds, School of Education, University of Exeter, Exeter, UK. Sam Stringfield, Center for the Social Organization of Schools, Johns Hopkins University, USA.

SCHOOL IMPROVEMENT THROUGH PERFORMANCE FEEDBACK

EDITED BY

ADRIE J. VISSCHER, University of Twente, The Netherlands

AND

ROBERT COE, University of Durham, UK

Routledge, Taylor & Francis Group
LONDON AND NEW YORK

Library of Congress Cataloging-in-Publication Data

Applied for

Copyright © 2002 Routledge

All rights reserved. No part of this publication or the information contained herein may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, by photocopying, recording or otherwise, without written prior permission from the publishers.

Although all care is taken to ensure the integrity and quality of this publication and the information herein, no responsibility is assumed by the publishers nor the author for any damage to property or persons as a result of operation or use of this publication and/or the information contained herein.

Published by Routledge
2 Park Square, Milton Park, Abingdon, Oxon OX14 4RN
270 Madison Ave, New York, NY 10016
711 Third Avenue, New York, NY 10017

Routledge is an imprint of the Taylor & Francis Group, an informa business

Transferred to Digital Printing 2008

ISBN10: 90-265-1933-8 (hbk)
ISBN10: 0-415-43223-5 (pbk)
ISBN13: 978-90-265-1933-8 (hbk)
ISBN13: 978-0-415-43223-8 (pbk)
ISSN 1384-1181

Publisher's Note The publisher has gone to great lengths to ensure the quality of this reprint but points out that some imperfections in the original may be apparent

Contents

Introduction xi
  The School Performance Feedback Concept xi
  The Origins of School Performance Feedback Systems xii
  Structure of the Book xv
  References xviii

PART 1  Theoretical Introduction

Chapter 1  Evidence on the Role and Impact of Performance Feedback in Schools 3
  1.1 Introduction 3
    1.1.1 A Complex Picture 4
    1.1.2 Towards a Conceptualisation of 'Feedback' 5
    1.1.3 Development of Theory 7
    1.1.4 Contexts for Performance 10
  1.2 Relevant Research on Feedback Effects 11
    1.2.1 Research on Feedback to Learners 11
    1.2.2 Research on Feedback in Organisational Settings 12
    1.2.3 Specific Research on Feedback to Teachers/Schools 13
  1.3 Conclusions and Discussion 17
    1.3.1 Summary of Evidence about Feedback Effects 17
    1.3.2 Implications for Practice 20
  References 23

Chapter 2  A Typology of Indicators 27
  2.1 Introduction 27
  2.2 Rationale for a Typology of Indicators 28
  2.3 A Typology of Indicators for Education 29
  2.4 Cross-classifications and Links 32
  2.5 Unit of Analysis - Another Dimension 33
  2.6 There's Many a Slip Twixt Cup and Lip 33
  2.7 Time 34
  2.8 Innovations or Inventions 34
  2.9 Indicators and Evidence-based Practice and Policies 35
  2.10 The Need for an 'Evidence-based' Kitemark 36
  2.11 Indicators are not enough - Experiments are needed, urgently 37
  References 37

Chapter 3  A Framework for Studying School Performance Feedback Systems 41
  3.1 Introduction 41
  3.2 The Factors that Matter 42
  3.3 Conclusion 65
  Acknowledgement 67
  References 68

PART 2  Evidence on School Performance Feedback

Chapter 4  The ABC+ Model for School Diagnosis, Feedback, and Improvement 75
  4.1 Introduction 75
  4.2 Country (and State) Specific History and Context 76
    4.2.1 An Overview of Education Accountability in the USA 76
    4.2.2 Education Accountability in Louisiana: Tracking the National Model 78
    4.2.3 Achieving Outcomes through a Focus on Process: The ABC+ Model 80
  4.3 Evolution and Features of the ABC+ Model 82
    4.3.1 Development of the ABC+ Model 82
    4.3.2 A Description of the ABC+ Diagnostic and Feedback System 83
    4.3.3 General Considerations Regarding the Feedback System 87
    4.3.4 Assumptions of the ABC+ Model 89
  4.4 Goals and Descriptions of the Research Process used in Three Applications of the ABC+ Model 90
    4.4.1 Goals and Description of the SEAP-process 91
    4.4.2 Goals and Description of the SAM Process 94
    4.4.3 Goals and Description of the East Baton Rouge Title I Project 97
  4.5 Features of Three Applications of the ABC+ Model 98
    4.5.1 Performance Dimensions Covered 98
    4.5.2 Features of the SEAP Analysis Model 98
    4.5.3 Features of SAM 100
    4.5.4 Features of the EBR Title I Project 101
  4.6 Case Studies from One Application of the ABC+ Model: the EBR Title I Project 102
  4.7 Feedback Given, Assistance Offered, and Effects from Three Applications of the ABC+ Model 104
  4.8 Recommendations 109
  Acknowledgement 110
  Acronyms Used in the Text of This Chapter 111
  References 112

Chapter 5  Using School Effectiveness as a Knowledge Base for Self-evaluation in Dutch Schools: the ZEBO-project 115
  5.1 Introduction 115
  5.2 The Dutch Context 117
  5.3 The Development and Rationale of the ZEBO Project 119
  5.4 ZEBO-PI: Content, Procedures, Feedback and Support 122
    5.4.1 Content of ZEBO-PI 122
    5.4.2 Procedure 127
    5.4.3 Feedback and Support 127
  5.5 The Use of Feedback 130
    5.5.1 Content and Clearness of the School Report, Recognisability of the Results 132
    5.5.2 Content and Clearness of the Classroom Report, Recognisability of the Results 132
    5.5.3 Validation of Results 133
    5.5.4 Dissemination and Discussion of the Results 133
    5.5.5 Comparing with Results of Other Evaluations 134
    5.5.6 Using ZEBO-PI for Improvement 134
    5.5.7 Support 135
    5.5.8 Conditions for Successful Self-evaluation 135
    5.5.9 Suggestions for Improvement and Extending ZEBO-PI Instruments 136
  5.6 Lessons Learnt and Recommendations for Self-evaluation and School Improvement 136
  References 141

Chapter 6  Jolts and Reactions: Two Decades of Feeding Back Information on Schools' Performance 143
  6.1 Introduction 143
  6.2 The Case Study LEAs 145
  6.3 The Current Performance Frameworks 146
  6.4 The Development of More Sophisticated Value-added Frameworks for Comparison and Interpretation 148
  6.5 Challenging Self-Images 149
  6.6 Types of Support Offered to Schools 150
  6.7 Characteristics of Usage in Schools 152
  6.8 Case Studies of Schools 153
    6.8.1 Attending to Some Basics 153
    6.8.2 Building for a Broader Agenda 156
  6.9 Cultures for the Use of Information about Performance 157
  6.10 Direct Impact on Performance 159
  6.11 Conclusions 160
  Acknowledgements 161
  References 161

Chapter 7  Performance Feedback to Schools of Students' Year 12 Assessments: The VCE Data Project 163
  7.1 Introduction 163
  7.2 History and Context 164
  7.3 Project Description, Goals and Parties Involved 167
    7.3.1 Basic Premise of Improvement 168
    7.3.2 Context and Rationale for the Project 169
    7.3.3 'Value-added' and 'Ability'-adjusted Measures 171
    7.3.4 Responses to and Management of the Information 178
  7.4 Lessons Learnt From the Project 181
  7.5 Concluding Comments 182
  References 185

Chapter 8  Performance Indicators in Primary Schools 191
  8.1 Introduction 191
  8.2 Historical Context 192
  8.3 Project Goals 193
  8.4 Project Rationale 194
  8.5 Historical Development 195
  8.6 Features of the Project 197
    8.6.1 Data Collected 197
    8.6.2 Outcome Indicators 198
    8.6.3 Performance Standards Used 199
    8.6.4 Units and Methods of Analysis 199
    8.6.5 Data Collection, Analysis and Distribution 200
    8.6.6 Presentation of Feedback 201
    8.6.7 Confidentiality 206
  8.7 Support for Schools 207
  8.8 The Use of Data by Schools 208
  8.9 The Relationship between PIPS and its Users 210
  8.10 Empirical Evidence for a Positive Impact 212
  8.11 Challenges 216
  References 218

PART 3  Conclusions, Reflections and Recommendations

Chapter 9  Drawing up the Balance Sheet for School Performance Feedback Systems 221
  9.1 Introduction 221
  9.2 Application of the Visscher Framework 222
    9.2.1 Some General Conclusions on the Comparison with the Framework 242
  9.3 Discussion 243
  9.4 Directions for Future Research 250
  References 253

Index 255

Introduction Robert Coe* & Adrie J. Visscher** * University of Durham, England **University of Twente, The Netherlands

In this introduction to the book the central concept of 'school performance feedback' will be defined first. Thereafter, attention will be paid to when these systems came into existence and for what reasons. Finally, the structure of the book will be presented, including information on the content of each of the chapters.

The School Performance Feedback Concept

This book is about school performance feedback systems (SPFSs): information systems external to schools that provide them with confidential information on their performance and functioning as a basis for school self-evaluation. Such systems have become widespread in education in many parts of the world. They share the goal of seeking to maintain and improve the quality of schools, and arise out of a belief in the power of feedback to promote learning and change, often accompanied by a sense of disillusionment at the lack of impact of other models of school improvement.

This definition requires some explanation. The need for systems to be external excludes the informal, self-generated feedback that all schools will have, and requires that the feedback be explicitly defined and collected - part of a system, rather than naturally occurring.

The word confidential separates SPFSs from systems of public school performance accountability and from systems that support school choice, which have rather different aims. Confidential information might also be provided to local education authorities, school districts, or to governing bodies and school boards, but the focus should be on school self-evaluation rather than on public judgement. It is acknowledged, however, that these three kinds of aims (accountability, school improvement, support of school choice) are in practice sometimes hard to separate, and that some systems try to serve both accountability/school-choice goals and school improvement goals, despite the tensions inherent between them.

The phrase information on the school's performance or functioning must be interpreted broadly. School performance here is likely to mean some kind of contextualised measure for fair comparison, adjusted to take account of factors beyond the control of the school. In the context of students' academic achievements this is what has come to be known as 'value added'. However, it is important to note that 'performance' may equally relate to other, non-academic outcomes of schooling (e.g. behavioural and affective), and may also include absolute as well as adjusted performance measures. Information on the 'functioning' of schools relates to organisational and school process measures such as the resources spent, the subject matter taught, the instructional methods used, etc.

A final essential component of the definition is that the feedback should provide a basis for self-evaluation. This is a requirement more about its aims than its actual use, since the latter will be known only after the SPFS has been implemented. However, the implication is that the feedback should not simply have the potential to be used for self-assessment, but that such judgements, once made, should lead to some kind of action, e.g. closer investigation of where and why the school underperforms and, thereafter, the development of a school improvement policy.

The Origins of School Performance Feedback Systems

In schools, as in other organisations, a variety of forms of informal and self-generated performance feedback has always existed, the most important being student achievement scores. A number of factors seem to have contributed to the growth of more formal school performance feedback systems in many countries over the last twenty or so years.

In many western countries in the 1980s and 1990s the rise of a political climate of public sector accountability can be observed. Although education was not the first political sector in which the government and taxpayers wanted information on 'how their money had been spent', this principle soon made its entrance there too. The pressure to evaluate and report on the performance of publicly funded educational institutions in England, for example, is reflected in the publication of league tables ranking schools according to students' achievements, and in the creation of a formalised inspection regime (Ofsted, the Office for Standards in Education). Although neither of these is a SPFS in the sense defined above, it is arguable that these kinds of initiatives helped to create a climate in which school performance feedback might be seen as more salient than previously.

Related to the accountability trend is the trend towards decentralisation in the administration of educational systems. Because schools have become freer to make local decisions on what happens within their organisations, they are more likely to seek the kind of information they can use for school quality control, i.e. some sort of SPFS. Publicly available school performance indicators tended to be very global and did not provide a basis for detecting and solving the cause(s) of underperformance; the latter required more detailed information. Moreover, there is some evidence (e.g. Murdoch & Coe, 1997) that schools' perceptions of the unfairness of public judgements of their effectiveness (cf. Visscher, 2001, for an overview of the drawbacks of public school performance indicators) were often a factor in their choice to implement a confidential value added school monitoring system. The published school performance information included the average raw achievement of a school's students, which did not adjust for relevant features of the student intake (e.g. the intake achievement levels of a school's student population). Schools wanted more accurate and fairer data on their own performance - among other things, to be sure about their performance and about whether improvement was really needed. Value added school performance information could often also be used as a defence towards parents and other stakeholders.

Another development that may have contributed to the attempt to improve schools by feeding back information on how they 'are doing' is the progress made in the twin research fields of school effectiveness and school improvement. The former line of research has resulted in a knowledge base (Scheerens & Bosker, 1997) that can be utilised in developing systems to monitor the quality of schools (see for example the chapter on the ZEBO-project). In several countries researchers saw opportunities to apply the scientific progress made over the years in order to provide high quality school feedback for school improvement (e.g. the body of knowledge on how and where schools differ in performance and school processes, how school performance can be assessed accurately, which school characteristics prove to be associated with school effectiveness, and which features of evaluative data promote their utilisation). The other area, research on school improvement, may have influenced the development of SPFSs too, as scientific activity there showed that educational change initiatives imposed upon schools were often not very successful. If schools themselves were convinced that something needed to be changed, then 'ownership' of the innovation and success were much more probable. Receiving information on how your school is doing in comparison with similar schools may be a powerful way to make you aware - and determined - that something needs to be changed in your organisation.

The increase in feeding back performance indicators to schools has also been influenced by the development of multi-level and value-added data-analysis models, which enable the computation of more reliable and valid information on school functioning. The availability of computerised systems for information processing has made a significant contribution to the logistics of school performance feedback (cf. Visscher, Wild & Fung, 2001).

Some authors are rather pessimistic about whether the kind of correlational analysis carried out in school effectiveness research will provide a basis for improving schools that differ in performance and that operate in differing environments. In the perspective of these authors, schools are seen as differing strongly. In Chapter 3 of this book Visscher refers to various school improvement experts (Dalin, 1998; McLaughlin, 1998; Miles, 1998) who stress the local variability of schools, implying that general, centrally developed policies and reform strategies will not lead to educational change in all schools. These authors think that schools differ so much with respect to their performance levels (and the underlying reasons for them), their innovation capacities and their contextual characteristics, that change efforts should take much more account of what is called the 'power of site or place'. Smith (1998) goes a step further. He states that as practitioners know their educational practice best, they should state the goals and changes to be worked on and, after extensive training, try to accomplish those. Adaptation to the user-context can then be achieved. A SPFS may be a valuable tool within this perspective on school improvement, providing timely, high-quality information on how a school 'is doing' as a basis for practitioner-led improvement actions. It may help practitioners to find problems in their schools as well as to solve them, before it is too late. An important additional effect may be that practitioners gain a better insight into how their school works (enlightenment) and into which interventions work best in their situation.

Related to the pessimism of the school improvement authors is the view of Glass (1979), who regards 'education' as a very complex, highly
uncertain and unpredictable system of which we possess only incomplete knowledge. We should not try to find eternal truths about which of several approaches works well in particular circumstances as a basis for planning and manipulating education at a large distance from the teaching-learning process in schools. What should be done instead is diligent monitoring of the system, while services are highly decentralised and the actors are flexible and can choose the options they consider best (instead of precisely implementing a universal approach developed somewhere at a higher level).

Support for gradual, local interventions may also be found in the work of Dahl and Lindblom (1963), who advocate the political theory of pluralism. Although their work focuses not on schools but on societies, it translates to the world of schools quite well. In the view of Dahl and Lindblom, goal-consensus among citizens is not the reality; rather, citizens compete with each other in pursuit of their own goals. The authors therefore argue for defining goals and values in a concrete context rather than on the basis of abstract goals. Those who have to decide simply do not possess enough information and know-how about the system to be controlled to take solid decisions. They recommend working on a trial and error basis: try to solve manageable, short-term problems incrementally by making testable interventions. This leads to continuous adaptation and, hopefully, improvement, and is also likely to be more effective than taking big steps forward that usually do not work.

Structure of the Book

This book consists of three parts. In the first part (Chapters 1 to 3) school performance feedback is put in perspective: by conceptualising it, by presenting the evidence we have on how feedback works, by presenting reflections on the indicators that may be fed back, and by presenting a framework of the variables that may influence the usage and effects of SPFSs.

In Chapter 1, Robert Coe summarises the evidence from the psychological, organisational and educational literature on the effects of performance feedback. Although little of this evidence comes directly from school contexts, it includes valuable information that can be translated to our topic. For example, certain characteristics of the feedback contents, of the way it is given, of the nature of the task about which information is fed back, and of the context in which the feedback arises and is used prove to influence whether feedback enhances or depresses future performance.

In Chapter 2, Carol Fitz-Gibbon presents a typology of school quality indicators - a system for classifying and analysing the types of information that can be collected, and hence fed back to schools. A three-dimensional classification is proposed, with the domains monitored, the timing of data collection and the unit of analysis forming its axes. Fitz-Gibbon clearly illustrates the applications of the typology and goes on to warn of the difficulties of interpreting evidence from indicator systems, and of the need for experimental studies to determine the effects of interventions in educational systems.

In Chapter 3, Adrie Visscher provides a framework for the analysis of the usage of school performance feedback systems and their effects. The framework recognises the importance of the nature of the environment in which schools operate, which differs between them, but also identifies three main classes of variables that influence the way a SPFS will be used and hence its intended and unintended effects. These classes are the organisational features of the school, the characteristics of the implementation process, and the nature of the SPFS itself (the last of which is in turn influenced by the process of its design). Visscher draws on an enormous range of school improvement, educational management and other literature, and identifies some 35 variables within these classes that he supposes to be important in understanding the usage and effects of SPFSs.

The second part of the book (Chapters 4 to 8) contains descriptions and analyses of a series of school performance feedback systems from around the world. Many of them are apparently very successful in terms of their popularity with schools and administrators (which is not self-evident at all for innovations introduced into educational practice!). The editors asked the authors of these chapters to address the following topics in the analysis of their SPFSs:
• The basic idea(s) about what leads to school improvement on which their SPFS-project is based, the project goals and the parties involved.
• The SPFS-design strategy followed.
• The SPFS-features: the domains monitored, the units of analysis, the procedures for data collection, analysis and dissemination, and the way information is presented.
• The support schools receive in interpreting and using the SPFS-information.
• Empirical evidence on the usage of the fed back information in schools.
• The match between the nature of performance feedback and the nature of schools as organisations.
• Evidence on the effects of school performance feedback.
• The problems experienced in attempting to improve schools via performance feedback, and the factors that seem decisive for success.

In Chapter 4, Charles Teddlie, Susan Kochan and Dianne Taylor present an approach to school performance feedback that has been developed in
Louisiana, USA. Their ABC+ Model incorporates school process data into a school accountability system so that it can be used by schools for feedback, diagnosis and improvement. The authors describe the development, implementation and effects of three different applications of the Louisiana model, and conclude with some interesting lessons learnt from that process.

Chapter 5 contains an account by Maria Hendriks, Simone Doolaard and Roel Bosker of the ZEBO-project (a Dutch acronym for 'self-evaluation in primary education') in the Netherlands, focusing particularly on the measurement of school process indicators within that project. ZEBO is a SPFS designed specifically to support primary schools in quality assurance and is particularly noteworthy for its concern with the psychometric qualities of the school performance measurements. Also of interest is that the school performance indicators measured are based on the school features shown by school effectiveness research to be associated with school effectiveness.

In Chapter 6, John Gray provides an interesting account of his involvement in SPFSs in the United Kingdom over a number of years. Drawing on case studies of schools in two Local Education Authorities, he describes how the focus has changed from a concern with understanding school effectiveness, through producing national systems of accountability, to developing the use of such systems for school improvement. Alongside this change of focus has been a corresponding change in the kinds of data collected and the models used for their analysis. Gray points to some valuable lessons that have been learnt by the schools and LEAs in the acceptance and use of feedback and in their responses to it. He also identifies factors associated with schools' improvement and some potential pitfalls in the use of performance feedback.

In Chapter 7, Ken Rowe, Ross Turner and Kerry Lane describe a project conducted in the state of Victoria, Australia, to promote school improvement through performance feedback, using specific, contextualised data to help schools monitor their own effectiveness. The chapter includes detailed responses from schools to the data, and the overwhelmingly positive nature of these responses is testimony to the quality of the feedback and, just as importantly, to the care taken in its presentation to schools.

Chapter 8, by Peter Tymms and Stephen Albone, presents information on the development and characteristics of the Performance Indicators in Primary Schools (PIPS) project in the United Kingdom. PIPS is part of a suite of projects from Durham University that are unique in the way they have evolved in response to demand from schools. Of particular interest in this chapter are the lengths to which the authors have gone to systematically evaluate the effects of involvement in the project for the schools, and to apply the same preference for data over opinion in their judgements about
PIPS that they encourage the schools who use it to apply to their judgements about their own effectiveness.

Chapter 9 makes up the third and final part of the book, in which Adrie Visscher and Robert Coe marshal and reflect on the evidence about the implementation, use and impact of systems of school performance feedback presented in Part 2. The perspectives presented by Coe, Fitz-Gibbon and Visscher in Chapters 1, 2 and 3 respectively are used to structure this analytical enterprise. The authors of Chapter 9 also formulate recommendations for the design of school performance feedback systems, and for the ways in which schools should be supported in using information for school improvement, based on the lessons learnt from what has been tried and evaluated so far. Finally, the editors formulate directions for future research.

References

Dahl, R. & Lindblom, C. (1963). Politics, economics, and welfare: planning and politico-economic systems resolved into basic social processes. New York: Harper.
Dalin, P. (1998). Developing the twenty-first century school: a challenge to reformers. In A. Hargreaves, A. Lieberman, M. Fullan & D. Hopkins (eds.), International Handbook of Educational Change (vol. 5, pp. 1059-1073). Dordrecht/Boston/London: Kluwer Academic Publishers.
Glass, G.V. (1979). Policy for the unpredictable (uncertainty research and policy). Educational Researcher, October, 12-14.
McLaughlin, M.W. (1998). Listening and learning from the field: tales of policy implementation and situated practice. In A. Hargreaves, A. Lieberman, M. Fullan & D. Hopkins (eds.), International Handbook of Educational Change (vol. 5, pp. 70-84). Dordrecht/Boston/London: Kluwer Academic Publishers.
Miles, M.B. (1998). Finding keys to school change: a 40-year odyssey. In A. Hargreaves, A. Lieberman, M. Fullan & D. Hopkins (eds.), International Handbook of Educational Change (vol. 5, pp. 37-39). Dordrecht/Boston/London: Kluwer Academic Publishers.
Murdoch, K. & Coe, R. (1997). Working with ALIS: a study of how schools and colleges are using a value added and attitude indicator system. Durham: School of Education, University of Durham, United Kingdom.
Scheerens, J. & Bosker, R.J. (1997). The foundations of educational effectiveness. Oxford: Elsevier Science Ltd.
Visscher, A.J. (2001, in press). Public school performance indicators: problems and recommendations. Studies in Educational Evaluation.

Visscher, A.J., Wild, P., & Fung, A. (eds.) (2001). Information Technology in Educational Management: synthesis of experience, research and future perspectives on computer-assisted school information systems. Dordrecht/Boston/London: Kluwer Academic Publishers.


PART 1
Theoretical Introduction


1
Evidence on the Role and Impact of Performance Feedback in Schools

Robert Coe
University of Durham, England

1.1 Introduction

There can be few statements in social science more likely to gain popular agreement than the claim that giving feedback can improve a person's performance on a task - and few topics that have been the subject of more research. The use of performance feedback in schools and other organisations is becoming more widespread every year and often presented as needing no justification. However, the evidence about feedback effects is mixed, complex and not well understood. Research results indicate that feedback can be beneficial to future performance, but it can also do harm. Moreover, the relative lack of evidence derived specifically from school contexts makes it hard to predict confidently what the effects will be in any particular case.

This chapter sets out to review what is known about feedback effects, drawing on empirical evidence and theoretical understandings from education, psychology and organisational behaviour. After a brief consideration of the complexity of feedback research, it will attempt to clarify what is meant by 'feedback'. A number of the theories that have been proposed to account for the mechanisms of feedback effects will be examined. The conceptualisation of 'performance feedback' also requires that different kinds of 'performance' be defined, and research from three particular contexts will be examined: feedback in learning, feedback on performance in organisational settings and, most relevant to the present context, school performance feedback. The chapter will then attempt to summarise the evidence about the effects of performance feedback in relation to specific characteristics of the feedback and of the task. Finally, it will consider some of the reasons for the difficulty of making predictions about the impact of feedback, particularly in school contexts.

1.1.1 A Complex Picture

Published research on feedback effects is extensive in terms of both its quantity and the length of its history. Much of this research seems to be characterised by an apparent clarity about the benefits of feedback, despite often seeming very unclear about precisely what 'feedback' is or what kinds of 'performance' may be helped by it. For example, Ammons' (1956) influential review was already able to draw on over 50 years of research and concluded:

  Almost universally, where knowledge of their performance is given to one group and knowledge is effectively withheld or reduced in the case of another group, the former group learns more rapidly, and reaches a higher level of proficiency. (p. 283)

However, many of the studies reviewed by Ammons, and subsequently by others, did in fact contain results inconsistent with this belief, but these were regarded as anomalies or ignored (Kluger & DeNisi, 1996). Indeed, the plausible view that feedback generally enhances performance is still prevalent in the literature (e.g. Neubert, 1998). Nevertheless, a closer examination of the evidence reveals a far more complicated picture: feedback is by no means always beneficial in its effects, and identifying the conditions under which it may be expected to improve performance is far from straightforward.

Probably the most significant step forward in untangling this complexity is Kluger and DeNisi's (1996) meta-analysis. Analysing 131 studies¹ (607 effects) of the effects of 'Feedback Interventions', Kluger and DeNisi found that although the average effect was moderately positive (weighted mean effect size² of 0.41), over 38% of the effects were negative and the mode of the distribution of effect sizes was zero. They concluded:

  F[eedback] I[ntervention]s do not always increase performance and under certain conditions are detrimental to performance. (p. 275)

Similar results have been found in other recent reviews and meta-analyses (e.g. Bangert-Drowns et al., 1991; Locke & Latham, 1990; Balcazar et al., 1985).

1.1.2 Towards a Conceptualisation of 'Feedback'

Perhaps inevitably, ideas about precisely what 'feedback' is do not always seem to coincide. It is important, therefore, to clarify exactly what is meant by 'feedback' in this context. Much of the early literature refers not to feedback but to 'Knowledge of Results' (KR) or 'Knowledge of Performance' (KP). These correspond to information about the outcome of the task undertaken, such as performance on a test, the development of a motor skill, compliance with a behavioural injunction, job productivity, etc. Excluded from this definition of feedback, however, would be any information about the process of how one undertook the task, as, for example, the message "you do not use your thumb for typing" (Kluger & DeNisi, 1996, p. 255). Kluger and DeNisi adopt a definition of feedback that is similar to the KR/KP concept, but their focus is more specifically on feedback as an intervention that can be manipulated. Thus, they define Feedback Interventions (FIs) as "actions taken by (an) external agent(s) to provide information regarding some aspect(s) of one's task performance". They therefore exclude 'natural' feedback arising without external intervention,

² These 131 studies represent just 5% of the 3000 or so reports reviewed. The remainder were excluded for a variety of methodological inadequacies such as lack of control group, confounding of treatments, lack of outcome measures or very small sample size (i.e.

[Table fragment from Chapter 2's typology of indicators: a cross-classification of TIME and Groups against DOMAINS TO BE MONITORED, with domain categories including A* Goals; B* Policies; Cognitive (e.g. achievements, beliefs); D Demographic descriptors (e.g. sex, ethnicity, SES); Expenditures (e.g. resources, time and money); F Flow (e.g. who is taught what for how long: curriculum balance, retention, attendance, allocations).]