Teacher Adaptive Practices: Extending Teacher Adaptability into Classroom Practice [1st ed.] 978-981-13-6857-8;978-981-13-6858-5

This book introduces the construct of teacher adaptive practices, extending existing research on teacher adaptability into classroom practice.


English | Pages XI, 89 [98] | Year 2019



SpringerBriefs in Education

Tony Loughland

Teacher Adaptive Practices: Extending Teacher Adaptability into Classroom Practice

SpringerBriefs in Education

We are delighted to announce SpringerBriefs in Education, an innovative product type that combines elements of both journals and books. Briefs present concise summaries of cutting-edge research and practical applications in education. Featuring compact volumes of 50 to 125 pages, the SpringerBriefs in Education allow authors to present their ideas and readers to absorb them with a minimal time investment. Briefs are published as part of Springer’s eBook Collection. In addition, Briefs are available for individual print and electronic purchase.

SpringerBriefs in Education cover a broad range of educational fields such as: Science Education, Higher Education, Educational Psychology, Assessment & Evaluation, Language Education, Mathematics Education, Educational Technology, Medical Education and Educational Policy.

SpringerBriefs typically offer an outlet for:

• An introduction to a (sub)field in education summarizing and giving an overview of theories, issues, core concepts and/or key literature in a particular field
• A timely report of state-of-the-art analytical techniques and instruments in the field of educational research
• A presentation of core educational concepts
• An overview of a testing and evaluation method
• A snapshot of a hot or emerging topic or policy change
• An in-depth case study
• A literature review
• A report/review study of a survey
• An elaborated thesis

Both solicited and unsolicited manuscripts are considered for publication in the SpringerBriefs in Education series. Potential authors are warmly invited to complete and submit the Briefs Author Proposal form. All projects will be submitted to editorial review by editorial advisors. SpringerBriefs are characterized by expedited production schedules with the aim for publication 8 to 12 weeks after acceptance and fast, global electronic dissemination through our online platform SpringerLink.
The standard concise author contracts guarantee that:

• an individual ISBN is assigned to each manuscript
• each manuscript is copyrighted in the name of the author
• the author retains the right to post the pre-publication version on his/her website or that of his/her institution

More information about this series at http://www.springer.com/series/8914

Tony Loughland

Teacher Adaptive Practices

Extending Teacher Adaptability into Classroom Practice


Tony Loughland
School of Education, UNSW Sydney, Sydney, NSW, Australia
Research Centre for Teacher Education, Beijing Normal University, Beijing, China

ISSN 2211-1921  ISSN 2211-193X (electronic)
SpringerBriefs in Education
ISBN 978-981-13-6857-8  ISBN 978-981-13-6858-5 (eBook)
https://doi.org/10.1007/978-981-13-6858-5

Library of Congress Control Number: 2019932825

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2019

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd. The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721, Singapore

Preface

Student critical and creative thinking is an important outcome of schooling systems throughout the world. However, there are as many different pedagogies for promoting student critical and creative thinking as there are operational definitions for these contested constructs. Many of the pedagogical models put forward to promote student critical and creative thinking focus on the design of lessons and units of work that become almost like a DNA for criticality and creativity ready for duplication in the classroom. This model overlooks the important role that teacher interactions play in the expression of this DNA in each lesson. This study observed what teachers do in classrooms to either promote or sometimes hinder the development of student critical and creative thinking when the opportunity arises.

Chapter 1 posits a model of adaptive teaching that conceptualises the interaction of the personal and environmental determinants of adaptive teaching that influence the teaching behaviours that promote student creative and critical thinking. Chapter 2 critically examines the personal, environmental and behavioural determinants of this model of adaptive teaching.

Chapter 3 begins with the admission that there are significant reliability and validity threats when classroom observation is used in both educational research and teacher evaluation (Harris 2012). This chapter acknowledges this critique and proposes a third way for classroom observation in teacher improvement. The improvement agenda disciplines the classroom observation and moves it away from pure research or evaluation (judgment of the performance) to help teachers improve their practice. This position is supported by the argument approach to test validation endorsed by the AERA, APA and NCME.
Chapter 4 contains the analysis of the data from 278 classroom observations of 71 teachers for its relationship to the teacher self-report constructs of teacher adaptability, teacher self-efficacy and perceived autonomy support. The study found that only teacher adaptability could predict a sub-scale of adaptive practices that potentially promote student critical and creative thinking. This finding signals an important relationship between teacher adaptability and adaptive teaching given that student critical and creative thinking is a valued outcome of schooling.



Chapter 5 proposes a combined classroom observation and learning improvement programme based on the model of adaptive teaching presented in this study. This proposal positions this research programme within the science of learning improvement that values the development of rigorous yet usable measures that have direct application to the improvement of learning conditions for students in classrooms. It does this through a brief review of the principles of effective professional learning before examining what the emerging field of learning implementation and improvement science might add to these principles. It then applies these principles to three proposals for teacher professional learning that employ the teacher adaptive practice scale as a diagnostic and improvement measure. The final chapter flags future directions for research into a model of adaptive teaching that will investigate the relationship between existing and new personal and behavioural determinants.

The text was written with the intention of promoting informed scholarly debate in an area of classroom research with much promise but not without its fair share of theoretical and methodological challenges. The text addresses these challenges with openness and humility to invite scholarly critique and debate. It is also the author’s hope that the text will provide a theoretical and practical scaffold for those teachers wishing to refine their practices so that more students get the opportunity to feel the joy and liberation of being critical and creative thinkers in an education system seemingly obsessed with examinations and instrumental outcomes.

Sydney, Australia/Beijing, China

Tony Loughland

Reference

Harris, D. N. (2012). How do value-added indicators compare to other measures of teacher effectiveness? Carnegie Knowledge Network Brief (5).

Acknowledgements

I am indebted to the generosity of the teachers who participated in the study and the schools and schooling systems that kindly granted permission for the research to take place. The Department of Education in New South Wales, Sydney Catholic Schools and Association of Independent Schools in NSW all supported this research. I also acknowledge my research partner, Penny Vlies, who was with me from the beginning of this odyssey and proved to be a wise, insightful and generous colleague throughout. Penny has a deeper philosophical and practical understanding of adaptive teaching than I will ever hope to achieve.


Contents

1 Adaptive Teaching for Students’ Critical and Creative Thinking  1
   Introduction  1
   Implementation and Learning Improvement Science  3
   Bandura’s Theory of Social Cognition  4
      Teacher Adaptability  5
      Perceived Autonomy Support  5
      Teacher Self-efficacy  5
      Teacher Adaptive Practices  6
   Overview of This Research Brief  6
   References  7

2 Literature Review  9
   A Conceptual Model of Adaptive Teaching for Student Critical and Creative Thinking  9
      Teachers’ Sense of Adaptability  11
      Teachers’ Sense of Self-efficacy  13
      Perceived Autonomy Support  13
      Teacher Adaptive Practices  15
   Research Questions and Hypotheses  19
   Conclusion  20
   References  20

3 Classroom Observation as Method for Research and Improvement  23
   The Epistemological Challenge to Classroom Observation Data  23
   An Argument Approach to Instrument Validation  24
   The Methodological Challenges of Classroom Observation  25
      The High-Stakes Context of Classroom Observation  25
      Test Content  26
      Relationship to Outcomes of Interest  29
      Internal Structure  29
      Standardised Protocols  30
      Consequences  33
   The Learning Improvement Paradigm: A Third Way for Classroom Observation?  34
   Appendix 1: Initial Teacher Adaptive Practice Scale  35
   Appendix 2: Teacher Adaptive Practice Scale  36
   Appendix 3: Adaptive Practice Indicators Mapped to Hattie (2012) and AITSL (2014) Classroom Practice Continuum  37
   Appendix 4: Teacher Adaptive Practices Coding Guide  38
   References  40

4 The Relationship of Teacher Adaptability, Self-efficacy and Autonomy to Their Adaptive Practices  43
   Introduction  43
   Methodology  45
   Methods  45
      Sample and Procedures  45
      Measures  47
      Data Analysis  49
   Results  49
      Multi-level Modelling with Teacher Constructs  50
   Discussion  52
      Links Between Teacher Adaptive Practices and Perceived Autonomy Support, Teacher Self-efficacy and Teacher Adaptability  53
      Covariate Effects  53
      Limitations and Future Directions  54
   Conclusion  55
   Appendix 1: Teacher Adaptive Practices Coding Guide  55
   References  57

5 Teacher Professional Learning Using the Teacher Adaptive Practice Scale  61
   Introduction  61
   Effective Professional Learning  62
   Implementation and Learning Improvement Science  65
   Three Proposed Teacher Professional Learning Models for Teacher Adaptive Practice  67
      Peer and Self-evaluation of Teacher Adaptive Practice  67
      Whole School Focus on Teacher Adaptive Practice  69
      Enhancing Awareness of Personal and Behavioural Adaptability  71
   Conclusion  72
   Appendix 1: Teacher Adaptive Practice Classroom Observation Scoring Guide  73
   Appendix 2: Choose Your Own Adventure  75
      Goal  75
      Reality  75
   References  77

6 Looking Forward: Next Steps for Teacher Adaptive Practice Research  81
   Introduction  81
   Examining Teacher Effectiveness in Entrepreneurial Education and Training  82
   Personal Determinants of Adaptive Teaching  83
   Behavioural Determinants of Adaptive Teaching  84
   Thoughts on Future Study Methodology  86
      Feasibility  86
      Validity  87
      Reliability  88
   Some Final Thoughts on the Translation of Adaptability into Teacher Practice  88
   References  89

Chapter 1

Adaptive Teaching for Students’ Critical and Creative Thinking

Abstract Students’ creative and critical thinking is a key outcome of interest as schooling systems in OECD countries position the so-called twenty-first-century learning skills as fundamental objectives of their educational endeavours. This policy shift has piqued interest in pedagogical models that promote student creative and critical thinking, of which there are many. Most of these existing models focus on curriculum design rather than delivery. Adaptive teaching is a notable exception. This study posits a model of adaptive teaching that conceptualises the interaction of the personal and environmental determinants of adaptive teaching that influence the teaching behaviours that promote student creative and critical thinking. These teaching behaviours are labelled teacher adaptive practices in this study.

Keywords Adaptive Teaching · Student Critical and Creative Thinking · Teacher Adaptive Practices

Introduction

There has been a turn in leading global education systems towards learning-centred, interactive teaching with a focus on student critical and creative thinking. This focus is important to the East Asian countries that are not content with producing graduates who only rate highly on international tests in literacy and numeracy. China, Korea, Singapore and Hong Kong want to graduate students who are creative, entrepreneurial and proficient in the twenty-first-century skills of creativity, communication, collaboration and higher-order thinking (Zhao, 2015).

The value ascribed to student critical and creative thinking can be discerned from the effort that education jurisdictions have invested in its measurement. The OECD funded a large study that produced a creativity wheel as an outcome. This wheel explicated the cognitive, intrapersonal and interpersonal dispositions and skills associated with student creativity (Lucas, Claxton, & Spencer, 2013). The Australian government has identified student critical and creative thinking as one of six capabilities in the national curriculum and has a six-level continuum with which to track
student developmental progression in this area (Australian Curriculum Assessment and Reporting Authority, 2018). Finally, the Victorian state government has set the ambitious target of 25% of all students reaching the top level of this progression by 2025 (Victoria State Government, 2018).

The creativity wheel and the construction of critical and creative thinking as a student capability imply that these types of thinking can be learned as a generalizable trait or disposition. In fact, there is little research evidence to support this position (Baer, 2019) and this chapter assumes that student critical and creative thinking are domain specific. Accordingly, each teacher in each subject area needs to seize upon opportunities in lessons to promote student critical and creative thinking. This is the raison d’être for the development of the teacher adaptive practice scale for classroom observation.

There is already a plethora of curriculum and pedagogical frameworks that position student critical and creative thinking as a key goal. The limitation of these frameworks is that they guide the design of learning but not necessarily teacher behaviours in the classroom. The commensurate surge in the measurement of critical and creative thinking in students demands that some research effort be made to investigate whether there is a repertoire of teaching behaviours that promote student critical and creative thinking in the classroom.

The nascent construct of adaptive teaching has potential as a possible source of this repertoire. The 2016 American Educational Research Association Handbook on Research on Teaching acknowledged “that teaching as an interpretive, situated act requires adaptability and judgment” (Gitomer & Bell, 2016, p. 9). This conception values the teacher who “checks in and changes their practices”.
The “checking in” aspect relates to the broader concept of “professional noticing” in teaching: “only teachers armed with deep understanding of their discipline and how to engineer the teaching of it in comprehensible ways are capable of the in the moment formative assessment and expert noticing that is prerequisite for adaptive teaching” (Gibson & Ross, 2016, p. 182). The change in a teacher’s practice that occurs because of their “checking in” is adaptive teaching. This teacher responsiveness to students has been known by many different terms over the last 40 years. A recent review named nine synonyms: “responsive teaching, the teachable moment, improvisation, innovative behaviour, decision making, reflective teaching, adaptive metacognition, adaptive expertise and dialogic teaching” (Parsons et al., 2017, p. 3). The review itself uses the term adaptive teaching.

There is a well-established literature on adaptive teaching (Parsons & Vaughn, 2016). Extensive classroom-based research in this area has resulted in codes that signify teacher adaptive responses. There are seven codes for teacher adaptations: (1) introduces new content; (2) inserts a new activity; (3) omits a planned activity; (4) provides a resource or example; (5) models a skill or inserts a mini lesson; (6) suggests a different perspective to students; and (7) pulls a small group, conducts an individual conference, or changes grouping structure (Vaughn, Parsons, Burrowbridge, Weesner, & Taylor, 2016, p. 261). All these adaptive teacher behaviours occur
in response to the stimuli of student learning, motivation and behaviours (Parsons et al., 2017).

The focus of this study is on adaptive teaching that occurs in response to the stimuli of student learning, with an emphasis on the promotion of critical and creative thinking in students. The definition of the adaptive teacher in this study is one who notices students, responds appropriately to what they notice and fosters critical and creative thinking in students. This definition of the adaptive teacher is represented by the items on the teacher adaptive practice scale, which is positioned as an improvement measure designed to assist in teacher professional learning. This improvement measure aligns neatly with leading education systems that focus as much on the science of the implementation of educational reforms as on the innovation of the reform itself.
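For readers who work with coded observation data, the seven adaptation codes listed above can be represented as a simple tally structure. The sketch below is purely illustrative: the code labels are taken from Vaughn et al. (2016) as quoted in the text, but the tallying function and example data are hypothetical and are not part of the study’s actual coding guide.

```python
# Illustrative sketch only: tallying the seven adaptation codes of
# Vaughn et al. (2016) across coded segments of an observed lesson.
# The tallying logic and example data are assumptions, not the study's instrument.
from collections import Counter

ADAPTATION_CODES = {
    1: "introduces new content",
    2: "inserts a new activity",
    3: "omits a planned activity",
    4: "provides a resource or example",
    5: "models a skill or inserts a mini lesson",
    6: "suggests a different perspective to students",
    7: "pulls a small group, conducts an individual conference, "
       "or changes grouping structure",
}

def tally_adaptations(coded_segments):
    """Count how often each adaptation code appears across lesson segments."""
    counts = Counter(coded_segments)
    unknown = set(counts) - set(ADAPTATION_CODES)
    if unknown:
        raise ValueError(f"Unrecognised codes: {sorted(unknown)}")
    # Report every code, including those never observed, for comparability.
    return {code: counts.get(code, 0) for code in ADAPTATION_CODES}

# Example: codes assigned to six segments of one (hypothetical) lesson.
lesson = [4, 2, 4, 7, 1, 4]
print(tally_adaptations(lesson))  # code 4 ("provides a resource") dominates
```

A structure like this makes lesson-to-lesson comparisons straightforward, which is the kind of quick, cheap feedback the improvement measures discussed below are designed to support.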

Implementation and Learning Improvement Science

The two nascent research fields of implementation and learning improvement science provide the theory of action on which this study is based. Both fields are very similar in their objective to close the feedback loop between research evidence and practitioners in the field. The achievement of this objective is enhanced when measures are designed to generate research data as well as be deployed in the field for the improvement of practice. These improvement or pragmatic measures are urgently required in the rapidly evolving area of teaching to promote student critical and creative thinking.

Implementation science in education “involves careful policy choices, the rigorous and relentless embedding of those policies and the ability to continually evaluate, refine, and change” (Harris, Jones, Adams, Perera, & Sharma, 2014, p. 886). This capacity to rigorously evaluate educational innovation has been identified with Hong Kong and Singapore, which have two of the world’s top performing education systems (Harris et al., 2014). Implementation science is also known as learning improvement science in the USA and is defined as:

…research carried out through networked communities that seeks to accelerate learning about the complex phenomena that generate unsatisfactory outcomes. This research activity forms around an integrated set of principles, methods, organizational norms, and structures. It constitutes a coherent set of ideas as to how practical inquiries should be thought about and carried out. (Bryk, 2015, p. 474)

The distinction made in learning improvement science between research and improvement measures is a critical one to understand when reading this book. Researchers need to expend many resources to design an instrument that satisfies the rigorous psychometric criteria for validation evidence (AERA, APA, & NCME, 2014). In contrast, learning improvement science endeavours like this study aim for smaller, agile measures that have predictive validity rather than construct validity (Bryk, Gomez, Grunow, & LeMahieu, 2015). In implementation science, improvement measures are known as pragmatic measures that are developed to “meet the assessment needs of service providers rather
than of researchers. Their core characteristic is a high level of feasibility in real world settings” (Albers & Pattuwage, 2017, p. 21). This feasibility is also reflected in the first two principles of learning improvement science, “wherever possible, learn quickly and cheaply; be minimally intrusive—some changes will fail, and we want to limit negative consequences on individuals’ time and personal lives” (Bryk et al., 2015, p. 120). The consideration of feasibility introduces a cost-benefit analysis to teacher professional learning that is rarely acknowledged in the literature but is a crucial factor in education systems where budgets and time schedules are always tight.

Learning improvement science also makes a useful distinction between lead and lag measures. A lead measure “predicts the ultimate outcome of interest but is available on a more immediate basis”, whereas a lag measure “is available only well after an intervention has been initiated” (Bryk et al., 2015, p. 200). The question of validity is pertinent when prominent international lag measures in education (PISA, TIMSS, PIRLS) and national lag measures (NAPLAN) are sometimes invalidly claimed to be predictive when they are just snapshots of student achievement at a point in time (Sahlberg, 2014). In contrast, schools have an existing strength in the use of predictive lead measures with their ongoing assessment of students. The objective of implementation science is to integrate the ongoing assessment of students with the continuous evaluation of teacher professional learning interventions that are happening in the school at the time.

This study set out to develop a learning improvement measure focusing on teacher behaviours that promote critical and creative thinking in their students. This measure of teacher adaptive practices has its theoretical foundation in Bandura’s theory of social cognition and the constructs of teacher adaptability, perceived autonomy support and teacher self-efficacy.

Bandura’s Theory of Social Cognition

This study is informed by the concept of triadic reciprocal causation from social cognition theory, which describes the reciprocal interaction between a person and their environment through their cognition, affect and behaviours (Bandura, 1997). The value of this theory for this study lies in its recognition of the interaction of multiple factors on a teacher’s motivation, disposition and behaviours. In this study, a teacher’s adaptability and their sense of self-efficacy are the personal factors whilst perceived autonomy support is the environmental factor. Teacher adaptive practices are the potential behavioural expressions of adaptive teaching that result when there is positive interaction between the personal and environmental factors. This interaction results in teaching behaviours that can respond to the uncertainty, change and novelty of classroom environments where student critical and creative thinking is occurring. The key factors in this conceptual model of adaptive teaching are defined in the following section of this chapter.

Teacher Adaptability

Adaptability is an important disposition for teachers because responding to change, novelty and uncertainty is central to their daily work. Teacher adaptability is an emerging construct in research on teacher classroom behaviours, with evidence of correlation to improved outcomes for both teachers and students (Collie & Martin, 2016, 2017). Existing research on teacher adaptability defines three methods by which it might be assessed: surveys, interview/focus group questions and classroom observation (Collie & Martin, 2016). Despite such impressive early credentials, teacher adaptability to date has only been measured by self-report scales (Collie & Martin, 2016, 2017). Therefore, a classroom observation instrument would potentially make a significant contribution to the evidence linking the disposition of teacher adaptability to teacher adaptive practices in the classroom.
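Self-report scales of this kind are typically scored by averaging a respondent’s Likert-type item ratings into a single construct score. The sketch below illustrates only that generic scoring step; the item count, the 1–7 response range and the example data are assumptions for illustration, not the published adaptability scale.

```python
# Illustrative sketch of scoring a self-report scale: the construct score
# is the mean of a respondent's Likert-type item ratings.
# The 1-7 range and nine-item example are assumptions, not the published scale.

def score_scale(responses, low=1, high=7):
    """Return the mean item response, validating the Likert range first."""
    if not responses:
        raise ValueError("No item responses supplied.")
    if any(r < low or r > high for r in responses):
        raise ValueError(f"Responses must fall in [{low}, {high}].")
    return sum(responses) / len(responses)

# One (hypothetical) teacher's responses to a nine-item scale.
items = [6, 5, 7, 6, 6, 5, 7, 6, 6]
print(score_scale(items))  # prints 6.0
```

The contrast the chapter draws is that such a score reflects what a teacher reports about themselves, whereas a classroom observation instrument would record what they actually do.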

Perceived Autonomy Support

Perceived autonomy support (PAS) positively predicts teacher adaptability. Collie and Martin (2017) found that when a teacher perceived that their principal was supportive, they reported that they were more adaptable in the classroom. The same study demonstrated positive relationships between a teacher’s PAS and their sense of well-being and organisational commitment (Collie & Martin, 2017). This relationship between PAS and teacher adaptability warrants further investigation in the conceptual model of adaptive teaching in this study, with the addition of another personal teacher variable in teacher self-efficacy.

Teacher Self-efficacy

Teacher self-efficacy (TSE) has been linked to outcomes of interest for teachers such as reduced stress and intrinsic needs satisfaction (Klassen & Tze, 2014), as well as to adaptive teaching (Schiefele & Schaffner, 2015). As a characteristic of motivation rather than personality, it is also amenable to professional learning interventions (Klassen & Tze, 2014). TSE has demonstrated strong links with many other self-reported teacher outcomes of interest, such as continued engagement with professional learning (Durksen, Klassen, & Daniels, 2017), job satisfaction and lower levels of stress (Klassen & Tze, 2014), but there is a paucity of evidence linking TSE to external measures of adaptive teaching such as student achievement and observations of teacher performance (Klassen & Tze, 2014; Schiefele & Schaffner, 2015). The conceptual model outlined in this chapter links teacher self-efficacy to the observation of teacher performance using the teacher adaptive practice scale (Loughland & Vlies, 2016).


Teacher Adaptive Practices

Teacher adaptive practices are positioned as the potential behavioural expressions of the model of adaptive teaching conceptualised for this study. This model relies on the positive interaction between the triad of personal, environmental and behavioural aspects. The assumption of the model is that teachers who can cope with change, novelty and uncertainty in the classroom may provide the conditions in which student critical and creative thinking will flourish.

Overview of This Research Brief

There are five more chapters in this research brief. The literature review, methodology, results, implications for professional learning and future directions for research are the respective foci of these chapters.

A conceptual model for adaptive teaching based on student critical and creative thinking is the focus of chapter two. A review of the broader project of adaptive teaching is undertaken before the conceptual model is constructed. This leads into a review of the personal and environmental determinants of adaptive teaching outlined in the conceptual model.

A critical review of the methodology of classroom observation is the focus of chapter three. The test standards (AERA et al., 2014) and two established and credible classroom observation instruments, CLASS (Pianta, Hamre, & Mintz, 2012) and the Framework for Teaching (Danielson, 2013), are used to examine the validation evidence of the improvement measure of teacher adaptive practices developed in this study (Loughland & Vlies, 2016).

The results of the study are presented in chapter four, along with a detailed discussion of the methods used to obtain them. A discussion of the findings in relation to the research hypotheses is presented before the limitations of the study are acknowledged.

The application of the teacher adaptive practice scale to teacher professional learning is the focus of chapter five. An examination of the principles of effective teacher professional learning is undertaken in the first half of the chapter. Three proposals for the use of the scale as an improvement measure are then presented.

Finally, future directions for research in this area are explored in chapter six using the conceptual model of adaptive teaching. Entrepreneurship education and training (EET) is positioned as one option where the model of adaptive teaching can be further tested. The inclusion of the personal determinants of epistemic cognition and mindset is then posited as two more future directions for this research programme. The methodological tools of situated judgement tests and student evaluation of teachers are also suggested as a further direction that would improve the rigour of the nascent model of adaptive teaching.

References


AERA, APA, & NCME. (2014). Standards for educational and psychological testing. Washington, D.C.: AERA.

Albers, B., & Pattuwage, L. (2017). Implementation in education: Findings from a scoping review. Melbourne. Retrieved from http://www.ceiglobal.org/application/files/2514/9793/4848/Albers-and-Pattuwage-2017-Implementation-in-Education.pdf.

Australian Curriculum Assessment and Reporting Authority. (2018). Learning continuum of critical and creative thinking. Sydney: ACARA.

Baer, J. (2019). Theory in creativity research: The pernicious impact of domain generality. In C. A. Mullen (Ed.), Creativity under duress in education? Switzerland: Springer.

Bandura, A. (1997). Self-efficacy: The exercise of control. New York: Freeman.

Bryk, A. S. (2015). 2014 AERA distinguished lecture: Accelerating how we learn to improve. Educational Researcher, 44(9), 467–477.

Bryk, A. S., Gomez, L. M., Grunow, A., & LeMahieu, P. G. (2015). Learning to improve: How America’s schools can get better at getting better. Cambridge, MA: Harvard Education Press.

Collie, R. J., & Martin, A. J. (2016). Adaptability: An important capacity for effective teachers. Educational Practice and Theory, 38(1), 27–39.

Collie, R. J., & Martin, A. J. (2017). Teachers’ sense of adaptability: Examining links with perceived autonomy support, teachers’ psychological functioning, and students’ numeracy achievement. Learning and Individual Differences, 55, 29–39. https://doi.org/10.1016/j.lindif.2017.03.003.

Danielson, C. (2013). The framework for teaching evaluation instrument (2013 ed.). Princeton, NJ: The Danielson Group.

Durksen, T. L., Klassen, R. M., & Daniels, L. M. (2017). Motivation and collaboration: The keys to a developmental framework for teachers’ professional learning. Teaching and Teacher Education, 67, 53–66. https://doi.org/10.1016/j.tate.2017.05.011.

Gibson, S. A., & Ross, P. (2016). Teachers’ professional noticing. Theory Into Practice, 180–188. https://doi.org/10.1080/00405841.2016.1173996.

Gitomer, D. H., & Bell, C. A. (2016). Introduction. In D. H. Gitomer & C. A. Bell (Eds.), Handbook of research on teaching (5th ed.). Washington, D.C.: AERA.

Harris, A., Jones, M. S., Adams, D., Perera, C. J., & Sharma, S. (2014). High-performing education systems in Asia: Leadership art meets implementation science. The Asia-Pacific Education Researcher, 23(4), 861–869. https://doi.org/10.1007/s40299-014-0209-y.

Klassen, R. M., & Tze, V. M. C. (2014). Teachers’ self-efficacy, personality, and teaching effectiveness: A meta-analysis. Educational Research Review, 12, 59–76. https://doi.org/10.1016/j.edurev.2014.06.001.

Loughland, T., & Vlies, P. (2016). The validation of a classroom observation instrument based on the construct of teacher adaptive practice. The Educational and Developmental Psychologist, 33(2), 163–177. https://doi.org/10.1017/edp.2016.18.

Lucas, B., Claxton, G., & Spencer, E. (2013). Progression in student creativity in schools: First steps towards new forms of formative assessments. Paris: OECD Publishing. https://doi.org/10.1787/5k4dp59msdwk-en.

Parsons, S. A., & Vaughn, M. (2016). Toward adaptability: Where to from here? Theory Into Practice, 55(3), 267–274. https://doi.org/10.1080/00405841.2016.1173998.

Parsons, S. A., Vaughn, M., Scales, R. Q., Gallagher, M. A., Parsons, A. W., Davis, S. G., … Allen, M. (2017). Teachers’ instructional adaptations: A research synthesis. Review of Educational Research. Advance online publication. https://doi.org/10.3102/0034654317743198.

Pianta, R. C., Hamre, B. K., & Mintz, S. (2012). Classroom assessment scoring system: Secondary manual. Charlottesville, VA: Teachstone, Curry School of Education, University of Virginia.

Sahlberg, P. (2014). Finnish lessons 2.0: What can the world learn from educational change in Finland? New York: Teachers College Press.


Schiefele, U., & Schaffner, E. (2015). Teacher interests, mastery goals, and self-efficacy as predictors of instructional practices and student motivation. Contemporary Educational Psychology, 42, 159–171. https://doi.org/10.1016/j.cedpsych.2015.06.005.

Vaughn, M., Parsons, S. A., Burrowbridge, S. C., Weesner, J., & Taylor, L. (2016). In their own words: Teachers’ reflections on adaptability. Theory Into Practice, 259–266. https://doi.org/10.1080/00405841.2016.1173993.

Victoria State Government. (2018). Education state ambition: Learning for life. Melbourne: Victoria State Government. Retrieved from http://www.education.vic.gov.au/Documents/about/educationstate/EducationState_LearningForLife.pdf.

Zhao, Y. (2015). Lessons that matter: What should we learn from Asia’s school systems? Melbourne. Retrieved from http://www.mitchellinstitute.org.au/.

Chapter 2

Literature Review

Abstract  The first chapter argued for the educational significance of the emerging construct of adaptive teaching for student critical and creative thinking. A conceptual model of teacher adaptive practices for student critical and creative thinking is presented at the beginning of this chapter. Next, the research constructs that constitute the personal, environmental and behavioural determinants of this model are critically examined. The outcome of this review is the hypothesis and research questions for this study.

Keywords  Adaptive teaching · Triadic reciprocal causation · Determinants of teacher adaptive practices

A Conceptual Model of Adaptive Teaching for Student Critical and Creative Thinking

A conceptual model of teacher adaptive practices for student critical and creative thinking is proposed for this study. This model assumes that adaptive teaching will provide the necessary classroom conditions for student critical and creative thinking. It draws upon social cognition theory to map the relationship between the teacher as self and the school/classroom as an environment that would positively predict teacher adaptive practices that promote student critical and creative thinking.

This study is informed by the concept of triadic reciprocal causation from social cognition theory (Bandura, 1997). Triadic reciprocal causation depicts the reciprocal relationship between people and their environment: “people respond cognitively, emotionally, and behaviorally to environmental events. Also, through cognition people can exercise control over their own behaviour, which then influences not only the environment but also their cognitive, emotional, and biological states” (Maddux & Gosselin, 2012, p. 199). Triadic reciprocal causation would seem to have potential as an explanatory framework to examine a teacher’s behavioural response to the stimuli of student learning as explicated in the research model for this study (Fig. 1).

The aim of the current study was to measure the relationship of the personal, environmental and behavioural determinants of adaptive teaching so that a teacher adaptive practice scale could be developed for use as a teacher improvement tool. The model under examination proposes that teachers who have a positive sense of their self-efficacy, autonomy and adaptability will be able to be adaptive in the novel, uncertain and changing classroom environments characterised by students engaged in critical and creative thinking. The dependent variable under investigation as a behavioural expression of adaptive teaching in this study is teacher adaptive practice.

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2019
T. Loughland, Teacher Adaptive Practices, SpringerBriefs in Education, https://doi.org/10.1007/978-981-13-6858-5_2

Fig. 1 Proposed model under examination: teacher adaptability, teacher self-efficacy and perceived autonomy support are posited as predictors of teacher adaptive practices, which in turn support student critical and creative thinking

The personal and environmental determinants of teacher adaptability, self-efficacy and perceived autonomy support are validated research constructs. They have previously been linked to outcomes of interest related to adaptive teaching. However, a self-rating measure has often been used as both predictor and criterion in these studies (Klassen & Tze, 2014). The methodological challenge of gathering valid and reliable classroom observation data on both students and teachers has been a significant obstacle to resolving the lacuna between the validated personal and environmental determinants of teacher adaptability and the expression of these dispositions as adaptive teaching in the classroom. The resource-intensive and high-risk nature of such investigations, however, is not reason enough for these studies to be avoided. The design of this study sought to ameliorate the resourcing and risk factors by focusing on just one dependent variable, teacher adaptive practices. This is a proxy measure that will require further validation through the measurement and/or observation of student critical and creative thinking in future studies.

The validated and nascent research constructs of the model under investigation in this study are critically reviewed in the next section of the chapter.
This review begins with the personal constructs of a teacher’s sense of their adaptability and self-efficacy, followed by the environmental construct of perceived autonomy support. Next, the constructs that informed the development of the teacher adaptive practice scale are reviewed in the following section of the chapter. Finally, the research hypothesis and questions are presented in the last section.

Teachers’ Sense of Adaptability

A teacher’s sense of adaptability has been demonstrated to be linked with positive outcomes both for teachers and the students they teach. Teacher adaptability refers to a domain-specific trait that has been developed from a larger body of research on the general personality trait of adaptability. Adaptability is already a buzzword in business:

Ability of an entity or organism to alter itself or its responses to the changed circumstances or environment. Adaptability shows the ability to learn from experience and improves the fitness of the learner as a competitor. (Webfinance Inc., 2018)

Education has not been immune from the adaptability trend. It has been recognised as an emerging area of educational research in the introduction to the 2016 AERA handbook of research on teaching: Conceptualizations of adaptability and judgment vary even more widely than is suggested by the social studies and philosophy chapters in this volume. But across that variation it is clear that teaching as an interpretive, situated act requires adaptability and judgment. (Gitomer & Bell, 2016, p. 5)

Adaptability as an operational concept and construct in education, as the quotation alludes to, is still being shaped by its formative experiences in the field. It has been used in the context of adaptive education systems at the school, district, regional and state level (Goss, 2017). Adaptability has also been the focus of research as a psychological trait evident in students and teachers (Martin, 2017). Adaptability as a trait has been conceptualised and developed as a construct within the theoretical framework of the lifespan theory of control (Martin, 2012). Adaptability is different from other concepts in this theoretical family, such as resilience, because it implies proactive regulation rather than a “focus on surviving, ‘getting through’ and ‘getting by’” (Martin, 2012, p. 90). This distinction has been further clarified in a recent debate in the literature on the precise definition of adaptability. A proposed diversified portfolio model of adaptability (Chandra & Leong, 2016) was criticised for its conflation of adaptation and resilience (Martin, 2017). This debate signalled that adaptability is maturing as a research construct, with its proponents able to defend and define it with empirical and rational warrants.

The formal definition of adaptability is “the capacity to adaptively regulate cognition, emotion, and behaviour in response to new, changing, and/or uncertain conditions and circumstances” (Martin, 2012, p. 90). This regulation is applied to the three core domains of cognitive, behavioural and affective functioning (Martin, 2017). Cognitive adaptability refers to an individual’s capacity to adjust their thinking to constructively deal with change, novelty and uncertainty. Affective adaptability refers to the modification of emotions in response to environmental change, uncertainty and novelty. Behavioural adaptability refers to an individual’s ability to problem-solve and act in response to these changes to enhance personal and/or group outcomes. There is a slightly modified definition for the academic domain: academic adaptability “reflects regulatory responses to academic novelty, change, and uncertainty that lead to enhanced learning outcomes” (Martin, 2012, p. 90).

There is a growing body of evidence linking outcomes of interest to the tripartite adaptability scale across the three areas of cognitive, behavioural and affective adaptability. This research has been undertaken with adolescents and young adults, who are living through a time of significant novelty, change and uncertainty. A study of 969 adolescents across nine high schools found that personality and intrinsic theories of motivation predicted adaptability and were associated with positive academic and non-academic outcomes (Martin, Nejad, Colmar, & Liem, 2013). Another analysis of the same sample of adolescents found that adaptability predicted a greater sense of control, which led to a reduction in failure dynamics (Martin, Nejad, Colmar, Liem, & Collie, 2015). A study of 2050 Australian adolescents found positive direct and indirect effects of their sense of adaptability on their pro-environmental attitudes (Liem & Martin, 2015). Finally, an investigation of the adaptability and behavioural engagement of 186 first-year undergraduate students reported that adaptability was associated with higher positive behavioural engagement and lower negative behavioural engagement. In turn, negative behavioural engagement was associated with lower academic achievement in the first and second semester of their first year.
This body of research has established the validity of the tripartite adaptability scale for the general domain. A branch of this research programme has also investigated the validity of teacher adaptability with respect to its relationship with important teacher and student outcomes of interest. The research construct of teacher adaptability has been investigated through a study of the self-reported adaptability of 115 teachers and its relationship to their self-reported perceptions of autonomy support from their principal, their sense of well-being and their commitment to their organisation. There was also an external measure of student numeracy achievement sourced from their 1685 students. The study reported that adaptive teachers have a higher sense of well-being and greater organisational commitment (Collie & Martin, 2017). Teacher adaptability was also indirectly linked with students’ numeracy achievement via teachers’ well-being (Collie & Martin, 2017). The evidence of teacher adaptability in this study warrants investigation using another external measure such as observed teacher performance.

Other aspects of Collie and Martin’s (2017) teacher adaptability investigation are important to this study. The rationale for the study of the domain-specific area of teacher adaptability is grounded in the context of a teacher’s daily work. This workplace is characterised by the continually changing demands of students and timetables, the novelty of new curricula and new students, and the constant uncertainty of what might happen next. Adaptability would seem to be a necessary personality trait for someone who aspires to teach.


Teachers’ Sense of Self-efficacy

Teacher self-efficacy research can trace its origins to two sources. Interest in the construct was first piqued by data gathered through the inclusion of two questions on teacher self-efficacy in a 1976 RAND study of reading programmes and interventions (Tschannen-Moran, Hoy, & Hoy, 1998). The second source resides in Bandura’s theoretical development of the construct within social cognition theory. Both of these sources were employed by Tschannen-Moran and Hoy (2001) to develop the validated research instrument that is in use today.

The two questions in the 1976 RAND survey were included as an afterthought but yielded significant findings (Tschannen-Moran et al., 1998). The RAND researchers attributed the inclusion of the teacher efficacy questions to their reading of Rotter’s (1966) paper on internal and external sources of reinforcement: “When it comes right down to it, a teacher really can’t do much because most of a student’s motivation and performance depends on his or her home environment” and “If I really try hard, I can get through to even the most difficult or unmotivated students” (Tschannen-Moran et al., 1998, p. 204). The RAND study found a strong relationship between a teacher’s internal locus of control and the reading achievement of minority students (Tschannen-Moran et al., 1998). This finding was the catalyst for the inclusion of teacher self-efficacy in subsequent studies of effective teaching.

Teacher self-efficacy can trace the other half of its intellectual history as a personal motivational construct to social cognition theory. Here, self-efficacy is defined as the ability to “organize and execute the courses of action required to produce given attainments” (Bandura, 1997, p. 3). Bandura is credited with the theoretical work that distinguished perceived self-efficacy from Rotter’s locus of control (Tschannen-Moran et al., 1998).
An individual might attribute the outcome of an action to an internal source but may not have the sense of efficacy to achieve that outcome (Tschannen-Moran et al., 1998). The theoretical and empirical foundation of Bandura’s work was used by Tschannen-Moran and colleagues to develop the teacher self-efficacy scale (Tschannen-Moran & Hoy, 2001; Tschannen-Moran et al., 1998). This scale measures a teacher’s perceived self-efficacy in the three areas of instruction, management and engagement (Tschannen-Moran & Hoy, 2001) and has become the standard instrument to measure this construct.

Perceived Autonomy Support

Perceived autonomy support (PAS) has an interesting intellectual history, emerging from self-determination theory to become a valid and reliable measure of workplace culture. Self-determination theory has a longer intellectual history that can be traced back to Maslow’s hierarchy of needs (Maslow, 1943). The PAS has been used as a social context variable influencing teachers’ motivation and sense of well-being in the workplace. It is of interest to this study as a possible predictor of teachers’ willingness to adapt their practices in the classroom.

Self-determination theory has been instrumental in the definition of intrinsic needs satisfaction. An earlier definition of needs included wants, desires and motives, but self-determination theory excluded desires as it was argued that some desires lead to negative psychological outcomes (Baard, Deci, & Ryan, 2004). Instead, self-determination is driven by an individual’s need to feel autonomy, competence and relatedness (Baard et al., 2004). This theory is readily translatable to the workplace, where it has been investigated in relation to workers’ engagement, motivation and performance.

Autonomy is represented by both an individual’s perception of their autonomy and their perception of the autonomy support they receive from their manager or supervisor. A positive sense of personal autonomy is characterised by employees who feel more self-determining, more competent and more able to relate to their supervisors and colleagues (Baard et al., 2004). It is measured by the General Causality Orientation Scale (Williams & Deci, 1996). Autonomy support “involves the supervisor understanding and acknowledging the subordinate’s perspective, providing meaningful information in a non-manipulative manner, offering opportunities for choice, and encouraging self-initiation” (Baard et al., 2004, p. 2046). PAS has been measured in the past by the Problems at Work (PAW) questionnaire (Baard et al., 2004), but the Workplace Climate Questionnaire (WCQ) used in a study of medical students (Williams & Deci, 1996) is now the preferred option, given the validation evidence regarding both scales helpfully provided by Baard et al. (2004).
The General Causality Orientation Scale and the Workplace Climate Questionnaire, used together, were shown to positively predict an improved sense of competence and enhanced psychosocial skills in a 30-month longitudinal study of medical students (Williams & Deci, 1996). They were combined again in a study of bank employees that found a positive relationship between both measures and work performance (Baard et al., 2004). This study enhanced the available validation evidence for both instruments, with PAS being established as a valid social context variable for studies of the relationship between employees’ intrinsic needs satisfaction and their engagement and performance at work (Baard et al., 2004). This brought the PAS into the frame for investigations of teachers’ psychosocial health and their workplace engagement and performance.

In the workplace of a school, autonomy support for a teacher may involve their principal, supervisor or mentor listening to the teacher’s ideas, encouraging initiative, offering choice over work tasks and involving teachers in decision-making (Deci & Ryan, 2002). PAS has been shown to positively predict teachers’ intrinsic needs satisfaction at work, leading to enhanced psychological functioning in the workplace (Collie, Shapka, Perry, & Martin, 2016). An investigation of the three domains of self-determination theory (autonomy, relatedness and competence) reported that PAS predicts relatedness to both peers and students (Klassen, Perry, & Frenzel, 2012). However, the study also found that only relatedness to students predicts work engagement, and that this was invariant between primary and secondary teachers (Klassen et al., 2012).


Teacher Adaptive Practices

In Chap. 1, the adaptive teacher was defined in this study as one who notices students, responds appropriately to what they notice and fosters creative and critical thinking in students. The noticing aspect is linked in this review to the extensive literature on formative assessment. The appropriate response of the teacher is reviewed through the extant adaptive teaching research, whilst research on the fostering of student creative and critical thinking in the classroom is critically reviewed in the last section.

Formative Assessment

Formative assessment is the term used in this study to represent the teacher behaviour of noticing. It is reviewed here alongside its synonyms, assessment for learning and feedback. Assessment for learning is a less theoretical, more practice-based approach to assessment than its synonyms. It was defined by the UK Assessment Reform Group (2002, p. 2) as “…the process of seeking and interpreting evidence for use by learners and their teachers to decide where the learners are in their learning, where they need to go and how best to get there”. This definition is very similar to Hattie and Timperley’s (2007) conceptualisation of feedback as the three questions of where am I going, how am I going and where to next.

Black and Wiliam’s much-cited paper (1856 citations as of 4 March 2018) acknowledged that “the term formative assessment does not have a tightly defined and widely accepted meaning” (1998, p. 7). In their review, they defined it “as encompassing all those activities undertaken by teachers, and/or by their students, which provide information to be used as feedback to modify the teaching and learning activities in which they are engaged” (Black & Wiliam, 1998, pp. 7–8). This vagueness of operational definition has not bothered practitioners, who have been enthusiastic adopters of Wiliam’s user-friendly application of this research into teaching strategies (Wiliam, 2002). It might not have bothered the research community either if it did not come with a claim of effect sizes between 0.4 and 0.7. This review will address the definitional issues before examining the effect sizes claimed in the literature.

The vague operational definition of formative assessment arises from the fact that it attempts to depict an educational practice that involves both an assessment and its uses. Dunn and Mulvenon (2009) likened this to describing a hammer and its multiple uses with one definition.
Instead, they argued for separating assessment and evaluation as distinct research constructs and educational practices. In this framing, a summative assessment could be used formatively, and a formative assessment could be used in a summative manner. The same critique might be levelled at assessment for learning, which encompasses both the assessment and the use of its results. Feedback does not suffer from the same problem as it is defined by the interaction between teacher and student without including an assessment. Feedback has also been helpfully categorised by Hattie and Timperley (2007) into four different levels: learner, task, task processing and self-regulation. This allows researchers and practitioners greater definitional clarity for their respective purposes, whether calculating effect sizes or improving students’ learning in the classroom.

The vague operating definitions of formative assessment and assessment for learning are one reason why there is a significant critique of the claim that formative assessment has an effect size on student achievement of between 0.4 and 0.7. More commonly, it is reported in the literature as 0.7, with a citation to Black and Wiliam’s “seminal” or “groundbreaking” 1998 paper. This received-wisdom approach to citation precludes a closer examination of the studies included in the original meta-analysis. These studies included a broad range of strategies that were grouped under the construct of formative assessment because of its vague operational definition, but there are methodological problems as well (Dunn & Mulvenon, 2009; Kingston & Nash, 2011). Kingston and Nash claim, “Despite many hundreds of articles written on formative assessment, we were able to find only 42 usable effect sizes from 1988 to the present” (Kingston & Nash, 2011, p. 33). This rigorous paring down of the sample reduced the 0.7 effect size “to a weighted mean effect size of .20” (Kingston & Nash, 2011, p. 33). This critical re-examination of the findings of the original Black and Wiliam meta-analysis has itself been criticised for operating out of the incommensurable paradigm of traditional testing (Filsecker & Kerres, 2012), a criticism that might be entertained if an effect size for a poorly defined construct were not the main finding. Indeed, both Dunn and Mulvenon as well as Kingston and Nash found merit in Black and Wiliam’s critical review of the literature on an emerging construct.
They would just have liked to read a conclusion conceding that more research was needed to examine an effect size of this magnitude. The status of the Black and Wiliam paper in the field of formative assessment has compounded the original problem, as many of its 1856 citations have used the paper as a foundation for an argument rather than as a point of critique.

The construct of feedback provides a more useful foundation from which to examine adaptive teaching. Feedback is defined as “information produced by an agent (e.g. teacher, peer, book, parent, self, experience) regarding aspects of one’s performance and understanding” (Hattie & Timperley, 2007, p. 81). It has an effect size of 0.73 according to Hattie’s synthesis of meta-analyses (Hattie, 2011, p. 173). It must be noted here that Hattie’s method of calculating effect sizes via synthesis does not meet the approval of all psychometricians (Bergeron & Rivard, 2017), but this review is more interested in the operational definition than in the impact. Hattie and Timperley (2007) do receive credit in the literature for differentiating between the impact of the four levels of feedback (Kingston & Nash, 2011). This review is interested in the conceptualisation of teacher feedback as an immediate response on the part of the teacher to students’ thinking. This responsiveness has also been regarded as critical to the next construct examined in this review, adaptive teaching.
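Kingston and Nash’s reduction of the headline figure to “a weighted mean effect size of .20” reflects standard fixed-effect meta-analytic practice, in which each study’s effect is weighted by the inverse of its sampling variance so that larger, more precise studies count for more. A minimal sketch of that calculation follows; the effect sizes and variances below are hypothetical illustrations, not values taken from Kingston and Nash’s analysis:

```python
# Minimal sketch of a fixed-effect meta-analytic mean effect size.
# The three studies below are hypothetical, chosen only to show how
# inverse-variance weighting works.

def weighted_mean_effect(effects, variances):
    """Inverse-variance weighted mean: each study's weight is 1/variance,
    so larger, more precise studies contribute more to the pooled estimate."""
    weights = [1.0 / v for v in variances]
    return sum(w * d for w, d in zip(weights, effects)) / sum(weights)

# Hypothetical studies: Cohen's d and its sampling variance.
effects = [0.10, 0.25, 0.60]
variances = [0.01, 0.02, 0.08]  # small variance = large, precise study

print(round(weighted_mean_effect(effects, variances), 3))  # prints 0.185
```

Under these weights the two precise, low-effect studies dominate the single imprecise, high-effect study, so the pooled estimate (0.185) falls well below the simple average of the three effects (0.317). This is one mechanism by which a rigorously screened sample can produce a much smaller pooled effect than the headline figures reported in narrative reviews.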

A Conceptual Model of Adaptive Teaching for Student Critical …


Adaptive Teaching

Adaptive teaching, in common with assessment for learning, has enjoyed more credibility as a practice-based endeavour than as a research construct. It also suffers from a lack of consensus on a common term, although there is agreement that adaptive teaching, by any other name, involves a response to a range of stimuli provided by students. This review will first examine the operational definitions of adaptive teaching using the framework of stimuli and response. Next, the review will examine the impact of adaptive teaching on students before looking at the contexts in which it may or may not occur.

Adaptive teachers recognise the reality that lessons rarely proceed as planned, and they can exploit the unplanned moment for its pedagogic potential (Vaughn, Parsons, Burrowbridge, Weesner, & Taylor, 2016). This moving back and forth between the script and the students' learning needs requires knowledge of suitable pedagogy, knowledge of students and the confidence to deviate from the plan mid-lesson. The interpretative, qualitative research that dominates the emergent field of adaptive teaching has provided many practical examples of what adaptive teaching is. An adaptive teacher responds to discrepant stimuli provided by students by "questioning, assessing, encouraging, modeling, managing, explaining, giving feedback, challenging, or making connections" (Parsons et al., 2017, p. 27). These teacher practices might well occur without the stimuli, so it is the combination of student stimuli and effective teacher response that defines adaptive teaching. The effectiveness of the response is an aspect of the current research that remains underdeveloped. At present, the claim is that "Teachers' adaptive instructional actions appear to lead to enhanced student learning, motivation, and behavior" (Parsons et al., 2017, p. 27).
An exemplar of the stimuli and response approach to research into adaptive teaching is the programme led by Margaret Vaughn and Seth Parsons, who have developed a coding schema for the stimuli and responses of adaptive teaching (Vaughn et al., 2016). The coding schema has seven codes for teacher responses and nine for stimuli. The seven adaptations are:

introduces new content; inserts a new activity; omits a planned activity, provides a resource or example; models a skill or inserts a mini lesson; suggests a different perspective to students; pulls a small group; conducts an individual conference, or changes grouping structure. (Vaughn et al., 2016, p. 261)

The nine reasons teachers offer for making adaptations are:

to address student misunderstanding, to challenge, elaborate, or enhance student understanding, to teach a specific strategy or skill, to help students make connections, uses knowledge of student(s) to alter instruction, in anticipation of upcoming difficulty, to manage time or behavior, to promote student engagement or involvement and to follow student interest, curiosity, or inquiry. (Vaughn et al., 2016, p. 261)
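For readers who work with observation data, a schema like this lends itself to a simple coded-event representation. The sketch below is illustrative only: the labels are abbreviated paraphrases of Vaughn et al.'s (2016) codes, and the tallying logic is our own, not the authors' instrument:

```python
# Abbreviated paraphrases of the Vaughn et al. (2016) codes;
# the coding and tallying here are an illustrative sketch only.
from collections import Counter

ADAPTATIONS = [
    "introduces new content", "inserts or omits an activity",
    "provides a resource or example", "models a skill or mini lesson",
    "suggests a different perspective",
    "pulls a small group or confers individually",
    "changes grouping structure",
]
STIMULI = [
    "address misunderstanding", "challenge or enhance understanding",
    "teach a strategy or skill", "help students make connections",
    "use knowledge of students", "anticipate difficulty",
    "manage time or behavior", "promote engagement",
    "follow student interest",
]

# Each coded event pairs one adaptation with the reason offered for it.
observations = [
    ("inserts or omits an activity", "address misunderstanding"),
    ("changes grouping structure", "promote engagement"),
    ("inserts or omits an activity", "follow student interest"),
]

# Guard against codes outside the schema before tallying.
for adaptation, stimulus in observations:
    assert adaptation in ADAPTATIONS and stimulus in STIMULI

tally = Counter(adaptation for adaptation, _ in observations)
print(tally.most_common(1))
```

A tally of this kind makes it easy to ask which adaptations a teacher relies on most, and for which stated reasons.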

There are many claims that adaptive teaching enhances student motivation and learning, but the warrants for these claims are not strong. Of the 64 studies reviewed in the recent synthesis, 15 had quantitative data. Seven of these were experimental or quasi-experimental, and six were described as mixed methods (Parsons et al., 2017). The review found only two studies with discernible effect sizes, but these were not transparent enough to report as an impact measure (Parsons et al., 2017). The 49 qualitative studies reviewed focused more on the socio-historical expressions of adaptive teaching in classroom teaching. These studies have not yielded impact measures but have provided a plethora of generative findings that have informed teaching practices.

A teacher's confidence to deviate from the plan is influenced by environmental factors. A test-driven, back-to-basics curriculum encourages a school climate where teachers focus on what needs to be assessed and reported instead of responding to the diverse needs of learners (Au, 2008). This has been described as teachers adopting defensive pedagogies (Lingard, 2010) that minimise their risk of being exposed to accountability pressures for poor test results. The implications for diverse students are not good in countries like Australia with a widening equity gap (Loughland & Sriprakash, 2014).

School-level factors can also play a part in whether teachers feel confident to be adaptive. A recent review found seven studies reporting that "Context or Instructional Approach was an affordance for adaptability" (Parsons et al., 2017, p. 19). Interestingly for this study, the review cited Griffith et al. (2013, cited in Parsons et al., 2017, pp. 19–20), who "found that the school context determined the amount of instructional autonomy allowed to teachers, and teachers with more autonomy made more instructional decisions based on student needs". This finding supports the inclusion of a teacher's perceived autonomy support as an environmental determinant of adaptive teaching in the model under investigation in this study.
The extant research programme in adaptive teaching has resulted in the publication of well-understood practical examples of both the stimuli and responses of adaptive teaching and the contexts in which they might occur. The history of the adaptive teaching research programme can be likened to a mixed methods study in which the qualitative exploration of an emerging, ill-defined construct has preceded the quantitative study that sharpens its operational definition. The proponents of the research programme concur: "Given the importance of adaptive teaching, researchers need to work on creating measures and presenting evidence that are valid and reliable so the construct can be studied on a large scale" (Parsons et al., 2017, p. 28). The creation of a valid and reliable improvement measure is the objective of this study.

Teacher Behaviours that Promote Student Critical and Creative Thinking

The theory of social cognition that informs the model under investigation in this study focuses attention on the interaction between the personal, environmental and behavioural determinants of adaptive teaching that potentially create the space for student critical and creative thinking in the classroom. More is understood at present about the personal and environmental determinants of this model than about the teaching behaviours. The knowledge base on teaching behaviours is biased towards teacher behaviours that occur outside the classroom, such as planning and reflection. Less is understood about the teacher behaviours within the classroom that promote student critical and creative thinking.

The pedagogical literature is replete with implications for the effective design of a curriculum that promotes student critical and creative thinking. These include the need for less planning to provide space for uncertainty in the classroom (Beghetto, 2017) as well as many exhortations for teachers to encourage the so-called twenty-first-century learning skills among their students (National Education Association, 2010). These skills of creativity, critical thinking, collaboration and communication have recently been augmented by the inclusion of confidence, curiosity, commitment and craftsmanship (Claxton & Lucas, 2016). Planning for these skills has also been guided by assessment heuristics such as the creativity wheel (Lucas, Claxton, & Spencer, 2013).

The model under investigation in this study focuses on teacher adaptive practices in the classroom that promote student critical and creative thinking. Therefore, research that has examined the classroom implementation of pedagogical innovations such as the creativity wheel is of interest:

At five schools, teachers talked about impacts of the trial on their practice, such as more listening to (and questioning of) pupils in order to notice imaginative behaviour; more praise and encouragement of pupils; more time for reflection; and more planning for imagination. Planning opportunities for imagination into lessons and into wider schemes-of-work was the most common change teachers mentioned. (Lucas et al., 2013, p. 23)

The bias towards behaviours that occur outside the classroom is evident in the quote. This study recognises the integral role of planning and reflection in adaptive teaching, but its focus is on teacher adaptive practices that occur within the lesson. The research hypothesis and questions for the study are presented in the next and concluding sections of this chapter.

Research Questions and Hypotheses

This literature review has critically examined the research constructs that comprise the model under investigation in this study. The model is informed by the theory of triadic reciprocal causation. Hence, the research questions for this study explore the relationship between the personal, environmental and behavioural determinants of adaptive teaching:

1. Does teacher self-efficacy predict teacher adaptive practices?
2. Does perceived autonomy support predict teacher adaptive practices?
3. Does teacher adaptability predict teacher adaptive practices?
4. What teacher covariates predict these four constructs?


The research hypotheses are:

• Teacher self-efficacy will predict teacher adaptability; and
• Teacher self-efficacy, perceived autonomy support and teacher adaptability will predict teacher adaptive practices as either a global scale, as sub-scales of items or as individual items. TAP items will cluster in sub-scales based upon formative assessment and creativity.

Conclusion

This chapter has presented a model of adaptive teaching to create the conditions for student critical and creative thinking in the classroom. It was argued that the uncertainty, change and novelty of a classroom environment where students are free to be critical and creative require personal and behavioural qualities in a teacher who can both create and respond to this environment. Three validated research constructs of teacher adaptability, teacher self-efficacy and perceived autonomy support were presented as potential personal and environmental determinants of adaptive teaching. These determinants were then conceptualised as part of a triadic model of causation for the behavioural determinant of teacher adaptive practices. An argument was made in this chapter for the importance of external measures of adaptive teaching to complement the already solid foundation of self-rated measures. This argument was made with the acknowledgement that gathering reliable and valid data from external measures such as classroom observations of teachers is a significant challenge. This challenge is explicated in the next chapter of this book.

References

Assessment Reform Group. (2002). Assessment for learning: 10 principles. London: Assessment Reform Group, Nuffield Foundation.
Au, W. W. (2008). Devising inequality: A Bernsteinian analysis of high-stakes testing and social reproduction in education. British Journal of Sociology of Education, 29(6), 639–651. https://doi.org/10.1080/01425690802423312.
Baard, P. P., Deci, E. L., & Ryan, R. M. (2004). Intrinsic need satisfaction: A motivational basis of performance and well-being in two work settings. Journal of Applied Social Psychology, 34(10), 2045–2068. https://doi.org/10.1111/j.1559-1816.2004.tb02690.x.
Bandura, A. (1997). Self-efficacy: The exercise of control. New York: Freeman.
Beghetto, R. A. (2017). Inviting uncertainty into the classroom. Educational Leadership, 75(2), 20–25.
Bergeron, P.-J., & Rivard, L. (2017). How to engage in pseudoscience with real data: A criticism of John Hattie's arguments in visible learning from the perspective of a statistician. McGill Journal of Education/Revue des sciences de l'éducation de McGill, 52(1), 237–246.
Black, P., & Wiliam, D. (1998). Assessment and classroom learning. Assessment in Education: Principles, Policy & Practice, 5(1), 7–74. https://doi.org/10.1080/0969595980050102.


Chandra, S., & Leong, F. T. L. (2016). A Diversified Portfolio model of adaptability. American Psychologist, 71(9), 847–862. https://doi.org/10.1037/a0040367.
Claxton, G., & Lucas, B. (2016). Educating Ruby: What our children really need to learn. London: Crown House.
Collie, R. J., & Martin, A. J. (2017). Teachers' sense of adaptability: Examining links with perceived autonomy support, teachers' psychological functioning, and students' numeracy achievement. Learning and Individual Differences, 55, 29–39. https://doi.org/10.1016/j.lindif.2017.03.003.
Collie, R. J., Shapka, J. D., Perry, N. E., & Martin, A. J. (2016). Teachers' psychological functioning in the workplace: Exploring the roles of contextual beliefs, need satisfaction, and personal characteristics. Journal of Educational Psychology, 108(6), 788–799. https://doi.org/10.1037/edu0000088.
Deci, E. L., & Ryan, R. M. (2002). Handbook of self-determination research. Rochester, NY: University of Rochester Press.
Dunn, K. E., & Mulvenon, S. W. (2009). A critical review of research on formative assessments: The limited scientific evidence of the impact of formative assessments in education. Practical Assessment, Research & Evaluation, 14(7), 1–11.
Filsecker, M., & Kerres, M. (2012). Repositioning formative assessment from an educational assessment perspective: A response to Dunn & Mulvenon (2009). Practical Assessment, Research & Evaluation, 17(16).
Gitomer, D. H., & Bell, C. A. (2016). Introduction. In D. H. Gitomer & C. A. Bell (Eds.), Handbook of research on teaching (5th ed.). Washington, D.C.: AERA.
Goss, P. (2017). Towards an adaptive education system in Australia. Melbourne: Grattan Institute.
Hattie, J. (2011). Visible learning for teachers: Maximising impact on learning. London: Routledge.
Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81–112. https://doi.org/10.3102/003465430298487.
Kingston, N., & Nash, B. (2011). Formative assessment: A meta-analysis and a call for research. Educational Measurement: Issues and Practice, 30(4), 28–37. https://doi.org/10.1111/j.1745-3992.2011.00220.x.
Klassen, R. M., Perry, N. E., & Frenzel, A. C. (2012). Teachers' relatedness with students: An underemphasized component of teachers' basic psychological needs. Journal of Educational Psychology, 104(1), 150–165. https://doi.org/10.1037/a0026253.
Klassen, R. M., & Tze, V. M. C. (2014). Teachers' self-efficacy, personality, and teaching effectiveness: A meta-analysis. Educational Research Review, 12, 59–76. https://doi.org/10.1016/j.edurev.2014.06.001.
Liem, G. A. D., & Martin, A. J. (2015). Young people's responses to environmental issues: Exploring the roles of adaptability and personality. Personality and Individual Differences, 79, 91–97. https://doi.org/10.1016/j.paid.2015.02.003.
Lingard, B. (2010). Policy borrowing, policy learning: Testing times in Australian schooling. Critical Studies in Education, 51(2), 129–147.
Loughland, T., & Sriprakash, A. (2014). Bernstein revisited: The recontextualisation of equity in contemporary Australian school education. British Journal of Sociology of Education, 37(2), 230–247. https://doi.org/10.1080/01425692.2014.916604.
Lucas, B., Claxton, G., & Spencer, E. (2013). Progression in student creativity in schools: First steps towards new forms of formative assessments. Paris: OECD Publishing. https://doi.org/10.1787/5k4dp59msdwk-en.
Maddux, J. E., & Gosselin, J. T. (2012). Self-efficacy. In M. R. Leary & J. P. Tangney (Eds.), Handbook of self and identity (2nd ed., pp. 198–224). New York: The Guilford Press.
Martin, A. J. (2012). Adaptability and learning. In N. M. Seel (Ed.), Encyclopedia of the sciences of learning (pp. 90–92). Heidelberg, Germany: Springer.
Martin, A. J. (2017). Adaptability—What it is and what it is not: Comment on Chandra and Leong (2016). American Psychologist, 72(7), 696–698. https://doi.org/10.1037/amp0000163.


Martin, A. J., Nejad, H., Colmar, S., Liem, G. A. D., & Collie, R. J. (2015). The role of adaptability in promoting control and reducing failure dynamics: A mediation model. Learning and Individual Differences, 38, 36–43. https://doi.org/10.1016/j.lindif.2015.02.004.
Martin, A. J., Nejad, H. G., Colmar, S., & Liem, G. A. D. (2013). Adaptability: How students' responses to uncertainty and novelty predict their academic and non-academic outcomes. Journal of Educational Psychology, 105(3), 728–746. https://doi.org/10.1037/a0032794.
Maslow, A. H. (1943). A theory of human motivation. Psychological Review, 50, 370–396.
National Education Association. (2010). Preparing twenty-first century students for a global society: An educator's guide to the "four Cs". Retrieved from http://www.nea.org/assets/docs/A-Guide-to-Four-Cs.pdf.
Parsons, S. A., Vaughn, M., Scales, R. Q., Gallagher, M. A., Parsons, A. W., Davis, S. G., … Allen, M. (2017). Teachers' instructional adaptations: A research synthesis. Review of Educational Research. Advance online publication. https://doi.org/10.3102/0034654317743198.
Rotter, J. B. (1966). Generalized expectancies for internal versus external control of reinforcement. Psychological Monographs, 80(1), 1–28. https://doi.org/10.1037/h0092976.
Tschannen-Moran, M., & Hoy, A. W. (2001). Teacher efficacy: Capturing an elusive construct. Teaching and Teacher Education, 17(7), 783–805.
Tschannen-Moran, M., Hoy, A. W., & Hoy, W. K. (1998). Teacher efficacy: Its meaning and measure. Review of Educational Research, 68(2), 202–248. https://doi.org/10.3102/00346543068002202.
Vaughn, M., Parsons, S. A., Burrowbridge, S. C., Weesner, J., & Taylor, L. (2016). In their own words: Teachers' reflections on adaptability. Theory Into Practice, 259–266. https://doi.org/10.1080/00405841.2016.1173993.
Webfinance Inc. (2018). Adaptability. In Business Dictionary. Retrieved from http://www.businessdictionary.com/definition/adaptability.html.
Wiliam, D. (2002). Embedded formative assessment. Bloomington: Solution Tree Press.
Williams, G. C., & Deci, E. L. (1996). Internalization of biopsychosocial values by medical students: A test of self-determination theory. Journal of Personality and Social Psychology, 70(4), 767–779. https://doi.org/10.1037/0022-3514.70.4.767.

Chapter 3

Classroom Observation as Method for Research and Improvement

Abstract Classroom observation as a methodology is not without its critics. The critique ranges from epistemological arguments to validity issues arising from its controversial application as an evaluation measure of teacher effectiveness. On the methodological front, there are significant reliability and validity threats when classroom observation is used in both educational research and teacher evaluation (Harris in Carnegie Knowledge Network Brief 5, 2012). This chapter acknowledges this critique and proposes a third way for classroom observation in teacher improvement. The improvement agenda disciplines the classroom observation, moving it away from pure research or evaluation (judgement of performance) towards helping teachers improve their practice. This position is supported by the argument approach to test validation endorsed by the AERA, APA and NCME.

Keywords Classroom observation · Validation · Improvement measures

The Epistemological Challenge to Classroom Observation Data

The epistemological challenge to classroom observation is the critique that the researcher can never be sure whether their data represent the targeted research construct or are a function of the instrument (Baird, Andrich, Hopfenbeck, & Stobart, 2017). This critique is not dissimilar to the broader critique of logical positivism in science made famous by Popper's claim that it takes only one black swan to refute the proposition that all swans are white. This leads to Popper's concept of falsifiability, whereby no amount of experiments can prove a theory but one experiment can contradict it (Popper, 1959). Falsifiability provides a way forward for theories built upon empirical data, as the evidence is laid out for others to refute. An argument approach to instrument validation embraces the empirical rigour of falsifiability. The next section of the chapter explains how this argument approach to validation can provide transparent evidence that is falsifiable.

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2019 T. Loughland, Teacher Adaptive Practices, SpringerBriefs in Education, https://doi.org/10.1007/978-981-13-6858-5_3


The psychological constructs in this study are regarded as pragmatic metaphors rather than true representations of the phenomena in question (Fried, 2017). This study recognises that classroom observation as a methodology will only capture a construct that then can be used as a metaphor in a learning improvement model for teacher professional learning.

An Argument Approach to Instrument Validation

The developers of classroom observation instruments may no longer claim that their instrument has inbuilt, context-free validation based on the publication of one dataset. An argument approach to validation "makes the warrants and claims for an instrument's validity explicit, separate, and clear" (Bell et al., 2012, p. 24). This approach is defined by the authoritative standards for educational and psychological testing (AERA, APA, & NCME, 2014) as a unitary conceptualisation of validation: "the 1999 edition of the Standards highlighted that validity and reliability were functions of the interpretations of test scores for their intended uses and not of the test itself" (Plake & Wise, 2014, p. 4). An argument approach to unitary validation requires evidence collected on the test content, the response processes, the internal structure, the relationship to other variables and the intended and unintended consequences of the test (AERA et al., 2014). The AERA test standards emphasise that the validation evidence needs to refer to the inferences intended by the designers of the test. The credibility of these inferences depends on the fidelity of the implementation of the test to the published protocols.

Validity operationalised as the ongoing collection of evidence, rather than as a fixed quality, has important implications for classroom observation instruments (or tests). These implications pertain to both the epistemological and methodological challenges of classroom observation. The argument approach to validation, with its continuous gathering of validation evidence, provides an opportunity for falsifiability by a researcher's peers when this evidence is published in peer-reviewed journals. It can generate models that are wrong but useful (Box, 1976) and establish theories that can be refuted with counterevidence. Classroom observation is not a perfect method, but neither is any other method.
An argument approach to validation puts the onus on the researcher to provide an audit trail that can be replicated and interrogated. It is also significant when classroom observation instruments are used to gather evidence for teacher evaluation: "serious consequences arise when measures are not valid and reliable: they increase classification errors—the placement of teachers into incorrect performance categories" (Harris, 2012, p. 4). Researchers in the academy may argue that they are removed from such harsh realities, but there are also profound consequences when evidence for arguments made in published research rests on the unstable foundations of unreliable and invalid inferences drawn from classroom observation data.
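The classification errors Harris warns of are commonly screened for with inter-rater agreement statistics such as Cohen's kappa, which corrects raw agreement for chance. The sketch below is illustrative only: the two-category coding and the ratings are hypothetical and do not come from any instrument discussed here:

```python
# Illustrative sketch of Cohen's kappa for two observers coding the
# same lessons; categories and ratings are hypothetical.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters' category lists."""
    n = len(rater_a)
    # Proportion of lessons the raters coded identically.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance from each rater's marginal frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

a = ["adaptive", "not", "adaptive", "adaptive", "not", "adaptive"]
b = ["adaptive", "not", "not", "adaptive", "not", "adaptive"]
print(round(cohens_kappa(a, b), 2))
```

Kappa of 1.0 indicates perfect agreement and 0 indicates agreement no better than chance; published observation protocols typically report such statistics as part of their reliability evidence.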


The Methodological Challenges of Classroom Observation

Classroom observation research involves methodological challenges whether it is used as a research or an evaluation measure. The recent political focus on the high-stakes inferences made from classroom observations has brought these challenges into public debate, which is a positive outcome. These challenges require a rigorous response on the part of the researcher to the well-known validity and reliability threats associated with classroom observation as a measure. The next part of the chapter previews the high-stakes context of classroom observation before examining the validity and reliability processes undertaken for two of the more credible classroom observation instruments developed in the USA. The validation evidence for these two research instruments is compared to the evidence for the teacher adaptive practice scale, which was generated for the different purpose of creating a learning improvement measure.

The High-Stakes Context of Classroom Observation

Classroom observation of teachers has become more prevalent in the annual performance reviews of teachers in Australia in recent years. This is due to the confluence of compulsory teacher accreditation in Australia and the currency of the teacher effectiveness paradigm, most commonly associated in Australia with the work of Professor John Hattie. This move follows the very high-profile US Measures of Effective Teaching project (MET Project, 2013), which employed classroom observation as one method among others to gather evidence to evaluate teacher effectiveness. This high-stakes deployment of classroom observation for teacher evaluation has implications for the developers of teacher observation instruments. They need to be explicit about the inferences that can be made from the use of their instrument and the conditions under which these inferences can be made. This requires the provision of standardised protocols to ensure that the instrument is used as the developers intended.

The teacher accreditation regime in Australia has created a demand for teacher observation instruments, with requirements for teachers to produce indirect and direct measures of teacher performance (AITSL, 2013). The indirect measures are referee reports and annotated artefacts, whilst teacher observation is the direct measure. This becomes a high-stakes game at the two highest levels of accreditation in Australia, Highly Accomplished and Lead Teacher, because the observations need to be conducted by an external as well as an internal examiner.

The increasing influence of the teacher effectiveness paradigm in education is another factor behind the increasing demand for teacher observation instruments. The teacher effectiveness paradigm rests on the assumption that effective teaching practices have the greatest impact on student achievement within the classroom (Hattie, 2012).
A logical extension of this assumption is to employ lesson observation as a measure of teacher effectiveness. This, however, is not without its challenges, as research reports from both the UK and the USA have identified a tension between the evaluative and professional learning functions of lesson observation. It has been claimed that the use of graded lesson observation for performance management in the further education sector in the UK has negated its potential as a professional learning strategy (O'Leary & Wood, 2016). A US study made a useful distinction between teacher supervision as formative assessment focused on teacher growth and evaluation as summative assessment related to compliance (Derrington & Kirk, 2016). Another US study concluded that it would be useful for administrators to distinguish between the evaluative and professional learning functions of classroom observation (Conley, Smith, Collinson, & Palazuelos, 2016).

The use of classroom observation instruments for teacher evaluation is still in a developmental phase in Australia. The national accreditation process requires that classroom observation instruments used for evaluation link to the relatively new Australian Professional Standards for Teachers (AITSL, 2011). This means that any observation guides or instruments used in Australia prior to 2011 are potentially redundant unless their constructs can be re-calibrated to the national standards. As a result, there are no examples of validation evidence relating specifically to classroom observation instruments published in Australian educational research. Instead, we turn to the US research for validation evidence of classroom observation instruments. In the USA, the Framework for Teaching (Danielson, 2013) and the Classroom Assessment Scoring System (Pianta, 2011) represent a significant body of research in this area. These instruments were both created in the USA and underwent extensive analysis through the Gates Foundation's Measures of Effective Teaching project (Kane & Staiger, 2012).
This evidence provides some confidence that these instruments offer a strong point of comparison for the TAP scale that is the focus of this chapter. The next part of the chapter presents the validation evidence provided by both frameworks for their research instruments and compares it to the evidence generated for the teacher adaptive practice scale, which is primarily a learning improvement measure.

Test Content

The Classroom Assessment Scoring System (CLASS) and the Framework for Teaching (FFT) are both instruments that have been in operation for many years, so it is to be expected that validation evidence for the initial content of each instrument was generated and has been re-established over time. The designers of the larger CLASS system validate their content through links to the salient research literature connecting the teacher behaviours in CLASS to outcomes of interest (Bell et al., 2012). With successive iterations of the CLASS, for example the customised secondary CLASS-S instrument, they have also reported the use of expert groups to validate the test content (Bell et al., 2012).


The FFT protocol was developed from Danielson's initial framework for teaching in 1996 (Danielson, 1996). Danielson describes the process of the test content being updated with successive iterations of the FFT, using a review of education research in the 2007 edition that "was fully described in its Appendix, The Research Foundation" (Danielson, 2013, p. 3), as well as the 2013 edition, which responded to the "instructional implications of the Common Core State Standards" (Danielson, 2013, p. 5). Both the FFT and CLASS underwent an extensive evaluation of their content in the MET study, with one study able to claim that the FFT "tends to provide the most coverage of elements within a given dimension, suggesting that it may offer a more comprehensive assessment of instructional practice than other instruments" (Gill, Shoji, Coen, & Place, 2016, p. 14). The simple fact that both instruments were involved in the MET study provides evidence of content validity that the broader professional community might accept at face value. In summary, it appears that both instruments have published validation evidence on their content over time.

The writing of indicators for the adaptive practice observation instrument commenced with the three behavioural items from the teacher adaptability scale (Collie & Martin, 2016). The process began with these items because they describe adaptability behaviours that can be observed, in contrast to the cognitive and affective items. The three items are:

• I am able to seek out new information, helpful people, or useful resources to effectively deal with new situations.
• In uncertain situations, I am able to develop new ways of going about things (e.g. a different way of asking questions or finding information) to help me through.
• To assist me in a new situation, I am able to change the way I do things if necessary.
The last two items were the more relevant when applied to the context of a teacher conducting a lesson alone in a classroom. The author then conducted a series of classroom observations followed by focus groups to generate the draft list of indicators in Appendix 1. The initial twenty classroom observations provided useful validation evidence on the draft adaptive practice scale. The first validation argument made was that some of the initial indicators (learning intentions and success criteria, multiple modalities) could be designed before the lesson rather than being adaptive practices within the lesson. In the interest of clarity, these items were pared from the list in the first round. The second validation argument constructed from these first rounds of observations related to the overall purpose of the scale. We learnt to distinguish adaptive practices that occurred for instructional or managerial efficiency from those that promoted student thinking. In effect, this was a declaration that the focus of our adaptive practice scale was on student creative and critical thinking. Focus groups of expert teachers were conducted at the same time as the classroom observations. These focus groups comprised the teachers who had been observed that day. In these focus groups, the teachers were asked their considered professional opinion on whether our draft indicators constituted adaptive practices that lead to creative and critical thinking on the part of the students. In one of these


focus groups, a teacher summarised the indicators as "it's when the teacher follows the student and not the script". Other comments related to school contexts where adaptive practices might be enabled, such as in team teaching of integrated cross-curricular units. There was also a commentary on adaptive practices being a developmental milestone not available to early career teachers. These last two arguments forced us to think about the focus of our indicators, and we decided that we would pursue an instrument for classroom observation of individual teachers at all stages of their career from graduate to lead teacher. This was in recognition of Hattie's (2003) finding that years of teaching experience did not correlate with expertise. The outcome of the initial observations and focus groups was the set of indicators in Appendix 2. These indicators were verified as effective practices for student learning using the AITSL classroom practice continuum (2014) and Hattie's (2012) meta-analyses. This verification process provided enough extrapolation evidence that our indicators represented what the literature regarded as quality teaching practices that lead to improved student outcomes. All the adaptive practice indicators describe teacher behaviours that occur in response to an on-the-spot assessment of student learning. In Hattie's parlance, this is feedback, which has an overall effect size of 0.73 (Hattie, 2012, p. 173). This effect size is large, and there are questions over Hattie's method of synthesising meta-analyses, but at least the term feedback has a more precise definition in the literature than the more generic formative assessment.
The operational definition is interesting as it is not feedback in the commonly understood direction of teacher to student but the reverse:

When teachers seek, or at least are open to, feedback from students as to what students know, what they understand, where they make errors, when they have misconceptions, when they are not engaged – then teaching and learning can be synchronized and powerful. (Hattie, 2012, p. 173)

Specifically, indicators 1–6 and 12–15 seem to embody the required shift from teacher delivery of content to the monitoring of student learning suggested by Hattie's practice of feedback (see Appendix 3). The TAP definition adopted in this research study depicts this flow of feedback from the students to the teacher as teacher noticing. The subsequent response of the teacher to this student feedback is their adaptive practice. The classroom practice continuum (CPC) developed by Professor Patrick Griffin for AITSL (2014) provided further extrapolation evidence that our adaptive practice indicators constitute what the literature regards as teacher practices that lead to student outcomes of interest. The adaptive practice indicators in Appendix 2 correspond with descriptions that are clustered at the higher end of the CPC. Indicator three of our scale is taken directly from level six of the CPC, whilst indicator two is a close match to another sentence from level six (see Appendix 3). Indicators 1, 4, 5, 6, 7, 8, 12 and 13 in our scale can be linked to descriptors in levels 4–6 of the CPC (see Appendix 3). Indicators 9–11 in our nascent adaptive practice scale could not be directly linked to Hattie's work or the CPC, but their presence in our classroom observations and focus groups warrants their inclusion in the scale at this stage.


Relationship to Outcomes of Interest

The CLASS and FFT instruments benefited from being included in the Measures of Effective Teaching (MET) evaluation programme conducted in the USA from 2009 to 2011 (MET Project, 2013). The FFT and CLASS were both shown to predict improvement in student value-added achievement measures in a sample of 23,000 videotaped lessons (Danielson, 2013; Gill et al., 2016). The creators of the CLASS suite of instruments have been diligent in the collection and publication of rigorous validation evidence to measure the impact of their CLASS-S professional learning intervention on student achievement. Secondary teachers demonstrated generalised, sustained improvement in practice, with an average increase in student achievement from the 50th to the 59th percentile in the post-intervention year. The researchers were understandably pleased with this result:

That these effects on teachers carried into the next year and new students, when there was no coaching and 30% of the teachers were teaching at least slightly different content material than in the first year, suggests that effects were driven by enduring change to the teacher and to the classroom as a behavior setting, not by student effects limited to the intervention year and class. (Allen, Pianta, Gregory, Mikami, & Lun, 2011, p. 1036)
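Percentile shifts of this kind can be read through a simple normal-distribution model. The sketch below is illustrative only (the cited studies used value-added models, not this calculation); it assumes normally distributed achievement scores and converts a standardised effect size into an expected percentile.

```python
from statistics import NormalDist

def percentile_after_shift(baseline_pct: float, effect_size_d: float) -> float:
    """Expected percentile after a treatment with standardised effect
    size d, assuming normally distributed achievement scores."""
    z = NormalDist().inv_cdf(baseline_pct / 100)  # baseline as a z-score
    return NormalDist().cdf(z + effect_size_d) * 100

# A median student shifted by d = 0.23 lands near the 59th percentile,
# the size of the CLASS-S post-intervention gain reported above.
print(round(percentile_after_shift(50, 0.23), 1))

# Hattie's feedback effect size of 0.73 would move a median student
# to roughly the 77th percentile under the same idealised model.
print(round(percentile_after_shift(50, 0.73), 1))
```

Under this idealised model, the 37th–63rd percentile span reported by Allen et al. (2013) likewise corresponds to a standardised difference of roughly two-thirds of a standard deviation.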

Another study of the use of CLASS-S as a professional learning intervention for middle and high school teachers found in post hoc analyses that a constellation of classroom interactions involving emotional and intellectual support “could account for a difference in student achievement test performance spanning the 37th–63rd percentiles” (Allen et al., 2013, p. 93). As the authors note, this is a difference of such magnitude that it demands attention from all stakeholders involved in secondary education (Allen et al., 2013). The relationship of the TAP scale to outcomes of interest is the focus of the evidence reported in chapter four of this book. However, a brief review of the underlying theoretical framework of these outcomes is warranted here. The research model in this study links the personal determinants of teacher self-efficacy and teacher adaptability to the environmental determinant of perceived autonomy support to investigate a possible relationship with teacher adaptive practices. The long-term goal of this research project is to link this model of adaptive teaching to affordances for student creative and critical thinking in the classroom.

Internal Structure

Both CLASS and FFT have three-level hierarchical structures that link their broader research constructs at the top to indicators at the bottom level that can be observed in the classroom. CLASS has three domains, each with their own dimensions and indicators, whilst FFT has four domains, each with their own components and elements.


There is more published evidence supporting the validity of the CLASS domain structure than there is for the FFT. The latent structure of the CLASS dimensions and domains has been validated across grades and content areas (Hamre, Pianta, Burchinal, & Downer, 2010). The continuous approach to instrument validation taken by the CLASS project was exemplified when they developed the CLASS-Secondary (CLASS-S) instrument. The process was rigorous:

Support for the domain structure was empirically derived. CLASS-S developers first conducted exploratory factor analyses with data from another large observation study using CLASS-S. They then confirmed that structure across multiple samples. (Bell et al., 2012, p. 73)

The publication of this evidence provides confidence that the domain structure of the CLASS-S instrument is suitable for the secondary school classroom. Regarding the FFT, it was claimed in 2009 that "more research is needed to confirm its psychometric integrity, but several studies have begun this work" (Pianta & Hamre, 2009, p. 111). It is difficult to locate validation evidence that relates to the latent structure of the FFT, which does not mean that it does not exist. There are 14 research reports on the Danielson Group website, but nearly all of these provide validation evidence of links to outcomes of interest (The Danielson Group, 2013). This may be because the intended audience of the website is the end-user of the FFT rather than the research community. Exploratory and confirmatory factor analyses were conducted on the 14-item TAP scale (see Appendix 4), and three dimensions were extracted:

• TAP A—items 5, 6, 8
• TAP B—items 2 and 3
• TAP C—items 1 and 4.

The results of the regression analysis are presented in the next chapter.
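The extracted dimensions can be turned into subscale scores by averaging their member items. The sketch below is a minimal illustration, not the study's analysis code; the dictionary layout and the sample ratings are hypothetical, with items rated on the TAP scale's 5-point coding guide.

```python
# Dimension membership reported for the TAP scale (items are 1-indexed).
TAP_DIMENSIONS = {
    "TAP A": [5, 6, 8],
    "TAP B": [2, 3],
    "TAP C": [1, 4],
}

def dimension_scores(item_ratings):
    """Average the ratings of the items belonging to each dimension."""
    return {
        dim: sum(item_ratings[i] for i in items) / len(items)
        for dim, items in TAP_DIMENSIONS.items()
    }

# Hypothetical 5-point ratings for one observed lesson.
ratings = {1: 4, 2: 3, 3: 5, 4: 2, 5: 4, 6: 5, 7: 3, 8: 3}
print(dimension_scores(ratings))  # {'TAP A': 4.0, 'TAP B': 4.0, 'TAP C': 3.0}
```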

Standardised Protocols

An argument approach to validation requires instrument designers to make their implementation procedures transparent. Useful validation evidence on the implementation procedures includes "detailed descriptions ranging from how observers are calibrated to the ways in which lessons are sampled over a school year" (Bell et al., 2012, p. 24). Analysis of these types of evidence "could facilitate more careful use of observation protocols for consequential purposes" (Bell et al., 2012, p. 24). Standardised protocols for classroom observation instruments increase their reliability. The greatest risk with classroom observation is low inter-rater reliability (Harris, 2012), so most of the protocols are designed to counter this risk. The minimum standardisation processes involve "training protocols, observation protocols, scoring directions" (Stuhlman, Hamre, Downer, & Pianta, 2014, p. 6).


The training manuals for both CLASS and FFT instruments are locked behind paywalls, but there is enough published evidence in journal articles and reports to ascertain what each framework regards as critical in these three areas. The importance of training protocols to the FFT is evident in the following excerpt from their website:

We highly recommend training for all teachers, school leaders, and staff using the Framework. Developing a common understanding is critical to accuracy, teaching advancement, and the Framework's impact on students' core learning. (The Danielson Group, 2013, par. 4)

The CLASS project has generated a commercial offshoot in its My Teaching Partner suite of professional development programmes (Curry School of Education University of Virginia, 2018). The CLASS instrument is only accessible to people who have completed the mandatory training, which is offered only in North America. The CLASS is therefore exclusive, providing some guarantee of the fidelity of its implementation, whilst the FFT remains open and more at risk of end-user infidelity.

Training Protocols

Both projects emphasise the importance of training to the reliability of the instrument. CLASS-S employs its vast database of classroom video so that novice observers can calibrate their scoring against that of a master observer (Bell et al., 2012; Mashburn, Meyer, Allen, & Pianta, 2014). The FFT also uses a master observer in its training, but the process involves observation and discussion of ratings using the framework. The author of the FFT candidly admits that sometimes this is not enough: "Our findings have been somewhat humbling; even after training, most observers require multiple opportunities to practice using the framework effectively and to calibrate their judgments with others" (Danielson, 2011, p. 38). In summary, both projects invest considerable resources in ensuring observers understand and can use their respective frameworks in a reliable manner. The CLASS-S project's video-aided training does have the advantage of addressing rater drift over time through compulsory annual re-certification for its accredited observers. Training protocols were not relevant to this study, as the author conducted 81.3% of the observations, with only two other researchers conducting the remainder under direct supervision. They will become important if the TAP scale is adopted more widely.
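Calibration against a master observer is typically checked with a chance-corrected agreement statistic. The sketch below computes Cohen's kappa for two raters; the trainee and master codes are invented for illustration and are not data from either project.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for the
    agreement expected by chance from each rater's category frequencies."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical codes from a trainee and a master observer over ten cycles.
trainee = ["high", "high", "low", "mid", "high", "low", "mid", "mid", "high", "low"]
master  = ["high", "mid", "low", "mid", "high", "low", "high", "mid", "high", "low"]
print(round(cohens_kappa(trainee, master), 2))  # prints 0.7
```

A kappa near 0.7, as in this invented example, is the sort of substantial-but-imperfect agreement that motivates the repeated practice and annual re-certification both projects require.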

Observation Protocols

There are similarities and differences in the observation protocols for the CLASS and FFT instruments. Both regard the observer's description as the primary data. Both emphasise that observers should not make inferences or judgments at this descriptive phase. CLASS requires their observers to make descriptive notes in 15-min cycles followed by 10 min of coding these against the indicators. This is suited


to their observation process, which is primarily focused on videotaped recordings of classrooms. The protocols for the FFT benefited from the MET study, where it was necessary to train many observers to score the 23,000 videotaped lessons (Danielson, 2013). This reality provoked the designers to tighten the rubric language, write performance descriptions at the higher component rather than element level, create critical attributes to assist in threshold adjudications and provide examples of performance for each component (Danielson, 2013). This study appropriated aspects of the describe/code sequence used in both the CLASS-S and instructional rounds protocols (City, Elmore, Fiarman, & Teitel, 2011; Pianta, Hamre, & Mintz, 2012). Specifically, the 15-min observation and 10-min coding sequence from CLASS-S was used as a model for the 15/5 protocol adopted for the teacher adaptive practice scale in this study. The adoption of this protocol has positive implications for the validity of the inferences made from the observation measurements. The implications relate to the accuracy and quality of the observations and their subsequent coding against the indicators. The short 15-min period of observation increases the accuracy of the observations. The authors of the CLASS-S manual argue that this prevents the potential bias of the last-seen event in a longer observation appearing more significant than an event observed at the beginning (Pianta et al., 2012). The frequent coding of observations for 5 min at 15-min intervals leads to a focus on the quality of practices observed rather than their frequency (Pianta et al., 2012). A longer observation period may bias ratings towards events that occur more frequently, whereas the shorter cycle forces the observer to analyse the quality of the events observed in each 15-min window.
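The 15/5 protocol can be expressed as a simple schedule generator. The function below is an illustrative sketch, not part of the TAP instrument; it slices a lesson into observe/code cycles, with the two durations left as parameters so the CLASS-S 15/10 variant can be produced the same way.

```python
def observation_cycles(lesson_minutes, observe=15, code=5):
    """Slice a lesson into observe/code cycles, e.g. the 15/5 protocol.
    Returns (observe_start, observe_end, code_end) tuples in minutes."""
    cycles, start = [], 0
    while start + observe <= lesson_minutes:
        obs_end = start + observe
        code_end = min(obs_end + code, lesson_minutes)
        cycles.append((start, obs_end, code_end))
        start = obs_end + code
    return cycles

# A 60-min lesson yields three full cycles under the 15/5 protocol.
for start, obs_end, code_end in observation_cycles(60):
    print(f"observe {start}-{obs_end} min, code {obs_end}-{code_end} min")
```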

Scoring Directions

Both CLASS and the FFT provide extensive scoring directions in the form of qualitative descriptors for each level of attainment. The FFT uses a four-point scale of unsatisfactory, basic, proficient and distinguished. Each of the four points on the scale has a descriptor, critical attributes in dot points and examples of teacher behaviours. The CLASS framework employs a seven-point scale with extensive guidance given for the low (1–2), medium (3–5) and high (6–7) ranges for each indicator. The protocols insist that observers look at the molar level of interactions rather than using time sampling or a count of discrete behaviours (Allen et al., 2013). Their justification resides with "principles of developmental psychology that suggest the importance of a focus on the broader organization of molar patterns of behavior as a means to get at subtle processes not easily captured via counts of discrete behaviors" (Allen et al., 2013, p. 78). This extensive scaffolding of the scoring process for both instruments indicates an acknowledgement of the risk of low inter-rater reliability inherent in classroom observations (Harris, 2012). It also positions scoring as a deliberative, reflective process instead of a rudimentary tick-the-box procedure.
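The published CLASS banding can be captured in a small helper. This is a sketch of the 7-point range labels only; the function name and error handling are our own, not part of the CLASS materials.

```python
def class_band(score: int) -> str:
    """Map a 7-point CLASS rating to its published range label:
    low (1-2), medium (3-5) or high (6-7)."""
    if not 1 <= score <= 7:
        raise ValueError("CLASS scores run from 1 to 7")
    if score <= 2:
        return "low"
    if score <= 5:
        return "medium"
    return "high"

print([class_band(s) for s in range(1, 8)])
# ['low', 'low', 'medium', 'medium', 'medium', 'high', 'high']
```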


A scoring guide was prepared for the TAP scale (Appendix 4) based on the CLASS descriptors (low and high levels only for a 5-point scale, instead of low, medium and high for a 7-point scale) and the FFT's examples. The author also took the lead of CLASS in examining what it regards as molar behaviours rather than time sampling or doing a frequency count of discrete behaviours. The CLASS team justify their focus on molar behaviours because teacher–student interactions sometimes comprise subtle processes (Allen et al., 2013). This focus is applicable to this study, where the aim was to rate teacher adaptive practices that were a response to student thinking. These behaviours were at times quite complex interactions that needed to be viewed holistically over the 15-min observation period.

Consequences

The test standards (AERA et al., 2014) refer to the validation requirement that researchers be vigilant for any unintended consequences of their test. In this respect, there is more published evidence of the unintended consequences of the FFT than of the CLASS instrument. The wider political exposure of the FFT in the USA may have led to some critique of its implementation as a teacher evaluation measure. Initially, most of the critique might have been categorised as a legitimate response by teachers' unions to the proposed performance review measures, but Danielson herself has recently emerged as a critic of the implementation of the framework (Danielson, 2016). Not surprisingly, the question of inter-rater reliability is at the centre of her critique. She argues that many school administrators do not have enough training or adequate supervision in the use of the protocols. Furthermore, she argues that the lack of differentiation in the application has contributed to the inference that the framework was for the detection of low-performing teachers (Danielson, 2016). This contrasts with the intended inference that "having a common language to describe practice increases the value of the conversations that ensue from classroom observations" (Danielson, 2011, p. 37). The representation of the framework as a punitive performance review measure was an unintended consequence of its implementation in the USA. The incorrect use of the observation protocols is also a risk identified by the CLASS framework. In their validation studies, observers were assigned in a balanced way across the cohort of teachers, ensuring that teachers were not always observed by the same person.
They acknowledge that this would not be the case in school districts: "It is unlikely that districts will assign observers in the balanced way TUCC did, so this source of bias is likely to be an important validity threat when observation protocols are used outside of research studies" (Bell et al., 2012, p. 17). This bias is an unintended consequence of the broader application of the CLASS-S instrument, not dissimilar to the challenges to fidelity acknowledged by Danielson in the application of the FFT.


The inference of the teacher adaptive practice scale is that high scores indicate that the observed teacher demonstrated adaptive behaviours in response to student learning needs within the lesson. This is the intended consequence of the application of this instrument to teacher observation. There are, however, other unintended consequences of the study that were evident in the conduct of the classroom observations. The most obvious was the perception that the observations were part of a teacher's professional learning and/or performance review. The informed consent form included a clause, as is standard in such documents, that participants could request to be informed of the outcomes of the research. Some of the participants requested this feedback immediately after the lesson observation. This feedback was provided with the caveat that the observations needed to be qualified by the fact that the teacher adaptive practice scale was still in a draft version. This unintended consequence was not as potentially damaging as the perception that the research was part of an individual teacher's performance review. That perception resulted from some school executive staff misinterpreting the scope of the research. Gaining access to schools for the obtrusive act of observing classroom teachers is challenging, and it was obvious in this second phase of the validation that some schools regarded the classroom observations as another strategy to lift the performance of some of their teachers. This was indicated by requests from senior leaders for evaluations of individual teachers post-observation. The senior leaders' unfamiliarity with research ethics requirements meant they were disappointed not to have access to these evaluations.
The author then had to be extra vigilant in monitoring this unintended consequence by ensuring the initial communication with schools emphasised that the research was not intended to be a de facto performance review measure. This conflation of classroom observation with performance review is unfortunate but not unexpected. It is an unintended consequence that echoes the concerns expressed by Danielson (2016) about administrators using her framework solely for evaluation rather than for professional learning as was her intention. This study found that both administrators and teachers associated classroom observation with evaluation, so it may well be a generalised conflation within the teaching profession that needs to be challenged. The nascent learning improvement paradigm offers one viable way forward.

The Learning Improvement Paradigm: A Third Way for Classroom Observation?

The argument approach to validation requires a continuous collection of validation evidence. This continuous generation of evidence creates an audit trail that offers an opportunity for falsifiability by the researcher's peers, but it also opens possibilities


for classroom observation to be used as an improvement measure. This offers a third way for classroom observation beyond its current uses for research and evaluation. The learning improvement paradigm offers a potential benefit to the practice of classroom observation with its advocacy of continuous, agile measures. This diligent monitoring of the scale by the end-user to evaluate an instrument's fit for purpose sits well with the unitary definition of validation and offers an antidote to the infidelity of implementation by administrators identified by Danielson (2016). Learning improvement science advocates continuous measures of improvement because they allow the system to close the feedback loop between the measure and the action taken in response to it. This is where the confluence of classroom observation and learning improvement science hits a potential sweet spot for enhancing adaptive teaching. Both CLASS and FFT have their own commercial professional learning programmes that use their individual items as prompts for teacher improvement. The use of the instruments in this way has some support in the literature: "A strong rationale for this approach is that the individual items are directly anchored to specific instructional practices, whereas total scores or subscale scores may be more difficult to use for feedback" (Halpin & Kieffer, 2015, p. 263). These same researchers caution that low reliability poses a risk to inferences made from individual items (Halpin & Kieffer, 2015). CLASS and its accompanying professional learning programme, which uses moderated video analysis, has demonstrated how a systematic classroom observation system can be used directly as a professional learning tool. Halpin and Kieffer (2015) also make some other well-considered arguments about the use of classroom observation as a professional learning or improvement strategy.
They describe their latent content analysis method for observation data as being individual-centred rather than variable-centred and argue that "the latent variable can be interpreted to distinguish what is unique about teachers' practices (i.e., signal) from the measurement error of the instrument (i.e., noise)" (Halpin & Kieffer, 2015, p. 266). Finally, they argue that an individual-centred approach frames teacher professional learning as a continuous, career-span activity rather than spasmodic bouts of development or evaluation (Halpin & Kieffer, 2015). This description aligns well with the approach of learning improvement science adopted for the development of the teacher adaptive practice scale that is the focus of this study.

Appendix 1: Initial Teacher Adaptive Practice Scale

1. Learning intentions and success criteria evident
2. Dynamic grouping
3. Many conceptual representations used as required
4. Act upon data gathered during concept review tasks set for students
5. Flexible pacing
6. Seeking student feedback
7. Filling unexpected gaps
8. Literacy/Numeracy scaffolds used as required
9. Negotiate post-lesson activities
10. Provide more content depth as required
11. Negotiate assessment tasks
12. Adjust learning instructions throughout
13. Choice of learning activity based upon agreed learning goals
14. Content added to student suggestion.

Appendix 2: Teacher Adaptive Practice Scale

1. The teacher modifies learning goals in response to formative assessment.
2. The teacher modifies their instructions during the lesson to increase learning opportunities.
3. The teacher negotiates assessments with students, ensuring these are aligned with learning goals.
4. The teacher uses formative assessment to differentiate their responses to individual students.
5. The teacher prompts students to discover key concepts through responsive open-ended questions.
6. The teacher prompts students to express their thinking and uses this as a springboard for learning activities.
7. The teacher uses a thinking routine to prompt deeper exploration of concepts or skills.
8. The teacher prompts students to demonstrate open-mindedness and tolerance of imaginative solutions to problems.
9. The teacher provides a synthesis of class-generated ideas.
10. The teacher links, when appropriate, lesson concepts to larger disciplinary ideas.
11. The teacher provides imaginative suggestions to increase learning opportunities.
12. The teacher demonstrates flexible pacing of the lesson in response to student learning needs.
13. The teacher demonstrates responsive use of literacy/numeracy interventions.
14. The teacher creates groups of students based upon formative assessment.
15. The teacher modifies homework in response to lesson progress.

Appendix 3: Adaptive Practice Indicators Mapped to Hattie (2012) and AITSL (2014) Classroom Practice Continuum

1. The teacher modifies learning goals in response to formative assessment.
   Reference: CPC6: "The teacher supports students to use evidence, including prior learning experiences, in personalising and revising their learning goals and aligning them with the curriculum standards" (AITSL, 2014, p. 96). Goals 0.56 effect size (Hattie, 2012, p. 298). Feedback 0.73 effect size (Hattie, 2012, p. 173).

2. The teacher modifies their instructions during the lesson to increase learning opportunities.
   Reference: CPC6: "They spontaneously adjust their instructions during the lesson to increase learning opportunities and improve students' understanding" (AITSL, 2014, p. 96). Feedback 0.73 (Hattie, 2012, p. 173).

3. The teacher negotiates assessment strategies with students, ensuring these are aligned with learning goals.
   Reference: CPC6: "They negotiate assessment strategies with students, ensuring these are aligned with learning goals" (AITSL, 2014, p. 96). Feedback 0.73 (Hattie, 2012, p. 173).

4. The teacher uses formative assessment to differentiate their responses to individual students.
   Reference: CPC6: "The teacher uses cues to differentiate between their responses to individual students throughout the learning time" (AITSL, 2014, p. 96). Feedback 0.73 (Hattie, 2012, p. 173).

5. The teacher prompts students to discover key concepts through responsive open-ended questions.
   Reference: CPC4: "They encourage students to justify and provide reasons for their responses to questions" (AITSL, 2014, p. 96). Significant scepticism from Hattie on the purpose of teacher questioning when the teacher already knows the answer (Hattie, 2012, p. 182).

6. The teacher prompts students to express their thinking and uses this as a springboard for learning activities.
   Reference: CPC6: "The teacher supports the students to generate their own questions that lead to further inquiry" (AITSL, 2014, p. 96). Feedback 0.73 (Hattie, 2012, p. 173).

7. The teacher uses a thinking routine to prompt deeper exploration of concepts or skills.
   Reference: CPC4: "They use conversation topics that generate thinking and they encourage students to justify and provide reasons for their responses to questions" (AITSL, 2014, p. 94).

8. The teacher prompts students to demonstrate open-mindedness and tolerance of imaginative solutions to problems.
   Reference: CPC5: "They give students time to grapple independently with the demanding aspects of open-ended tasks" (AITSL, 2014, p. 95).

9. The teacher provides a synthesis of class-generated ideas.
   Reference: none (indicators 9–11 could not be directly linked to Hattie, 2012 or the CPC).

10. The teacher links, when appropriate, lesson concepts to larger disciplinary ideas.
    Reference: none (see indicator 9).

11. The teacher provides imaginative suggestions to increase learning opportunities.
    Reference: none (see indicator 9).

12. The teacher demonstrates flexible pacing of the lesson in response to student learning needs.
    Reference: CPC4: "the teacher prompts, listens actively, monitors and adjusts instruction and assessment tasks based on feedback from students" (AITSL, 2014, p. 94). Feedback 0.73 (Hattie, 2012, p. 173).

13. The teacher demonstrates responsive use of literacy/numeracy interventions.
    Reference: CPC4: "the teacher focuses practice on specific skills and processes, including literacy and numeracy, in response to student needs" (AITSL, 2014, p. 94). Feedback 0.73 (Hattie, 2012, p. 173).

14. The teacher creates groups of students based upon formative assessment.
    Reference: Feedback 0.73 (Hattie, 2012, p. 173).

15. The teacher modifies homework in response to lesson progress.
    Reference: Feedback 0.73 (Hattie, 2012, p. 173).

Appendix 4: Teacher Adaptive Practices Coding Guide

1. The teacher modifies learning goals in response to formative assessment
   Low: Teacher did not undertake any formative assessment
   High: Teacher checks for student understanding and makes changes to the lesson in response

2. The teacher modifies their instructions during the lesson to increase learning opportunities
   Low: Instructions are given once and in one modality to the whole class
   High: The teacher did an impromptu demonstration to a small group using the classroom globe in response to student questions about international time zones


3. The teacher uses formative assessment to differentiate their responses to individual students
   Low: The teacher asks students to move to the true or false side of the room but does not follow up with why questions
   High: Teacher sets a Do Now task at the beginning of the lesson, helps students with the task and asks questions about the task when all students have attempted it

4. The teacher negotiates learning activities with students, ensuring these are aligned with learning goals
   Low: All students completed the same activity at the same time
   High: The teacher used students’ misconceptions as a guide to the learning activity that was chosen

5. The teacher prompted students to discover key concepts through responsive open-ended questions
   Low: Teacher used shallow questions that did not require deep conceptual responses from the students
   High: “Why is it expensive to make things in Australia?” “How has technology changed religion?” “In which direction does the water flow into the drain in the Northern and Southern Hemispheres?”

6. The teacher prompted students to express their thinking and used this as a springboard for learning activities
   Low: The teacher used ‘guess what is in my head’ questions: “It starts with…?”
   High: The teacher asked the students to annotate their notes with an ‘E’ if they required more evidence

7. The teacher uses a thinking routine to prompt deeper exploration of concepts or skills
   Low: “The steps I would like you to take are: decode, position, read the poem, write your response”
   High: Teacher used a “See, Think, Wonder” routine to prompt students to think metaphorically about a concept

8. The teacher prompted students to demonstrate open-mindedness and tolerance of uncertainty
   Low: Teacher answered big science questions directly instead of asking students why
   High: The teacher explored the different definitions of a concept across different sources to demonstrate its contested and uncertain nature

9. The teacher provided a synthesis of class-generated ideas
   Low: Teacher uses Initiate, Response, Evaluate with individual student answers
   High: “I feel if we joined these last three responses we should have a good answer on identity”

10. The teacher links, when appropriate, lesson concepts to larger disciplinary ideas
   Low: Teacher talk focused on the execution of the learning activity rather than the underlying big idea
   High: The teacher linked the preservation of vegetables by bottling to the underlying chemical processes


11. The teacher provided analogies and metaphors to increase learning opportunities
   Low: Teacher does not use analogy and metaphor when the opportunity arises
   High: The teacher used an image of a waterfall to assist student understanding of the life cycle of a business; the teacher role-played a character in the text to expand understanding

12. The teacher demonstrated flexible pacing of the lesson in response to student learning needs
   Low: Teacher adheres to their script without checking in with students to see if they understood the concept
   High: The duration of each learning activity is contingent on student understanding

13. The teacher demonstrated responsive use of literacy/numeracy interventions
   Low: No dynamic literacy/numeracy interventions evident
   High: Teacher identified the word “essential” as expressing high modality; teacher used a think-aloud process to identify story retelling in literary analysis as a practice to be avoided

14. The teacher creates groups of students based upon formative assessment
   Low: Students not grouped or are in previously assigned table groups
   High: Students moved into groups based on a self-rating of their knowledge

References

AERA, APA, & NCME. (2014). Standards for educational and psychological testing. Washington, DC: AERA.
AITSL. (2011). Australian professional standards for teachers. Melbourne: AITSL.
AITSL. (2013). Guide to the certification of highly accomplished and lead teachers in Australia. Melbourne: AITSL. Retrieved from http://www.aitsl.edu.au/docs/default-source/aitsl-research/insights/re00051_guide-to_the_certification_of_highly_accomplished_and_lead_teachers_in_australia_feb_2013.pdf?sfvrsn=4.
Allen, J., Gregory, A., Mikami, A., Lun, J., Hamre, B., & Pianta, R. (2013). Observations of effective teacher–student interactions in secondary school classrooms: Predicting student achievement with the classroom assessment scoring system-secondary. School Psychology Review, 42(1), 76–98.
Allen, J. P., Pianta, R. C., Gregory, A., Mikami, A. Y., & Lun, J. (2011). An interaction-based approach to enhancing secondary school instruction and student achievement. Science, 333(6045), 1034–1037. https://doi.org/10.1126/science.1207998.
Australian Institute for Teaching and School Leadership (AITSL). (2014). Looking at classroom practice. Retrieved from http://www.aitsl.edu.au/docs/default-source/classroom-practice/looking_at_clasroom_practice_interactive.pdf?sfvrsn=6.
Baird, J.-A., Andrich, D., Hopfenbeck, T. N., & Stobart, G. (2017). Assessment and learning: Fields apart? Assessment in Education: Principles, Policy & Practice, 24(3), 317–350. https://doi.org/10.1080/0969594x.2017.1319337.


Bell, C. A., Gitomer, D. H., McCaffrey, D. F., Hamre, B. K., Pianta, R. C., & Qi, Y. (2012). An argument approach to observation protocol validity. Educational Assessment, 17(2–3), 62–87. https://doi.org/10.1080/10627197.2012.715014.
Box, G. E. P. (1976). Science and statistics. Journal of the American Statistical Association, 71(356), 791–799. https://doi.org/10.1080/01621459.1976.10480949.
City, E. A., Elmore, R. F., Fiarman, S. E., & Teitel, L. (2011). Instructional rounds in education: A network approach to improving teaching and learning. Cambridge, MA: Harvard Education Press.
Collie, R. J., & Martin, A. J. (2016). Adaptability: An important capacity for effective teachers. Educational Practice and Theory, 38(1), 27–39.
Conley, S., Smith, J. L., Collinson, V., & Palazuelos, A. (2016). A small step into the complexity of teacher evaluation as professional development. Professional Development in Education, 42(1), 168–170. https://doi.org/10.1080/19415257.2014.923926.
Curry School of Education, University of Virginia. (2018). My teaching partner. Retrieved from https://curry.virginia.edu/myteachingpartner.
Danielson, C. (1996). Enhancing professional practice: A framework for teaching. Alexandria, VA: Association for Supervision and Curriculum Development.
Danielson, C. (2011). Evaluations that help teachers learn. Educational Leadership, 68(4), 35–39.
Danielson, C. (2013). The framework for teacher evaluation instrument (2013 ed.). Princeton, NJ: The Danielson Group.
Danielson, C. (2016). Charlotte Danielson on rethinking teacher evaluation. Bethesda, MD.
Derrington, M. L., & Kirk, J. (2016). Linking job-embedded professional development and mandated teacher evaluation: Teacher as learner. Professional Development in Education, 1–15. https://doi.org/10.1080/19415257.2016.1231707.
Fried, E. I. (2017). What are psychological constructs? On the nature and statistical modelling of emotions, intelligence, personality traits and mental disorders. Health Psychology Review, 11(2), 130–134. https://doi.org/10.1080/17437199.2017.1306718.
Gill, B., Shoji, M., Coen, T., & Place, K. (2016). The content, predictive power, and potential bias in five widely used teacher observation instruments (REL 2017-191). Washington, DC: U.S. Department of Education. Retrieved from http://ies.ed.gov/ncee/edlabs.
Halpin, P. F., & Kieffer, M. J. (2015). Describing profiles of instructional practice. Educational Researcher, 44(5), 263–277. https://doi.org/10.3102/0013189x15590804.
Hamre, B. K., Pianta, R. C., Burchinal, M., & Downer, J. T. (2010). A course on supporting early language and literacy development through effective teacher-child interactions: Effects on teachers’ beliefs, knowledge and practice. Paper presented at the Society for Research on Educational Effectiveness, Washington, DC.
Harris, D. N. (2012). How do value-added indicators compare to other measures of teacher effectiveness? Carnegie Knowledge Network Brief (5).
Hattie, J. (2003). Teachers make a difference: What is the research evidence? Distinguishing expert teachers from novice and experienced teachers. Melbourne.
Hattie, J. (2012). Visible learning for teachers: Maximising impact on learning. London: Routledge.
Kane, M. T., & Staiger, D. O. (2012). Gathering feedback for teaching: Combining high-quality observations with student surveys and achievement gains. Seattle, WA. Retrieved from http://eric.ed.gov/?id=ED540960.
Mashburn, A., Meyer, J., Allen, J., & Pianta, R. (2014). The effect of observation length and presentation order on the reliability and validity of an observational measure of teaching quality. Educational and Psychological Measurement, 74(3), 400–422. https://doi.org/10.1177/0013164413515882.
MET Project. (2013). Ensuring fair and reliable measures of effective teaching: Culminating findings from the MET project’s three-year study. Policy and practitioner brief. Seattle, WA: Bill & Melinda Gates Foundation.


O’Leary, M., & Wood, P. (2016). Performance over professional learning and the complexity puzzle: Lesson observation in England’s further education sector. Professional Development in Education, 1–19. https://doi.org/10.1080/19415257.2016.1210665.
Pianta, R. (2011). Teaching children well: New evidence-based approaches to teacher professional development and training. Retrieved from www.americanprogress.org.
Pianta, R. C., & Hamre, B. K. (2009). Conceptualization, measurement, and improvement of classroom processes: Standardized observation can leverage capacity. Educational Researcher, 38(2), 109–119. https://doi.org/10.3102/0013189x09332374.
Pianta, R. C., Hamre, B. K., & Mintz, S. (2012). Classroom assessment scoring system: Secondary manual. Curry School of Education, University of Virginia: Teachstone.
Plake, B. S., & Wise, L. L. (2014). What is the role and importance of the revised AERA, APA, NCME standards for educational and psychological testing? Educational Measurement: Issues and Practice, 33(4), 4–12. https://doi.org/10.1111/emip.12045.
Popper, K. (1959). The logic of scientific discovery. New York: Basic Books.
Stuhlman, M., Hamre, B., Downer, J., & Pianta, R. C. (2014). How to select the right classroom observation tool. Retrieved from http://curry.virginia.edu/uploads/resourceLibrary/CASTL_practioner_Part3_single.pdf.
The Danielson Group. (2013). The framework. Retrieved from http://www.danielsongroup.org/framework/.

Chapter 4

The Relationship of Teacher Adaptability, Self-efficacy and Autonomy to Their Adaptive Practices

Abstract Adaptability is an important disposition for teachers, as responding to change, novelty and uncertainty is central to their daily work. Teacher adaptability is an emerging construct in research on teacher classroom behaviours, with evidence of correlations with improved outcomes for both teachers and students. Teacher adaptive practices were conceptualised in this study as the classroom behavioural expression of teacher adaptability. Data from 278 classroom observations of 71 teachers were analysed for their relationship to the teacher self-report constructs of teacher adaptability, teacher self-efficacy and perceived autonomy support. The study found that only teacher adaptability predicted a sub-scale of adaptive practices that potentially promote student critical and creative thinking. This finding signals an important relationship between teacher adaptability and adaptive teaching, given that student critical and creative thinking is a valued outcome of schooling.

Keywords Determinants of Teacher Adaptability · Teacher Self-Efficacy · Adaptive Teaching

Introduction

The first three chapters of this book presented the arguments for why teacher adaptive practices are important, critically reviewed their intellectual origins and explicated the methodological foundations of this study. This chapter presents the findings of the main study, in which data generated from a teacher adaptive practice classroom observation instrument were tested for their relationship with the established teacher research constructs of teacher adaptability, teacher self-efficacy and perceived autonomy support.

Adaptability is an important disposition for teachers, as responding to change, novelty and uncertainty is central to their daily work. Teacher adaptability is an emerging construct in research on teacher classroom behaviours, with evidence of correlations with improved outcomes for both teachers and students (Collie & Martin, 2016, 2017). This emergence was recognised in the introduction to the 2016 AERA Handbook of Research on Teaching, which acknowledged “that teaching as an interpretive, situated

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2019. T. Loughland, Teacher Adaptive Practices, SpringerBriefs in Education, https://doi.org/10.1007/978-981-13-6858-5_4



act requires adaptability and judgment” (Gitomer & Bell, 2016, p. 9). Despite such impressive early credentials, teacher adaptability to date has only been measured by self-report scales (Collie & Martin, 2016, 2017).

Existing research on teacher adaptability identifies three methods by which it might be assessed: surveys, interview and focus group questions, and classroom observation. The first two are self-report measures whilst classroom observation is an external measure. Classroom observation of teachers, therefore, could potentially contribute to the evidence linking teacher adaptability with adaptive teaching. There is already a well-established literature on the similar construct of adaptive teaching (Parsons & Vaughn, 2016), but a validated classroom observation instrument has not been published to date. Instead, researchers in this area have produced codes that signify teacher adaptive responses in the classroom. Their coding schema has seven codes for teacher adaptations:

1. introduces new content;
2. inserts a new activity;
3. omits a planned activity;
4. provides a resource or example;
5. models a skill or inserts a mini-lesson;
6. suggests a different perspective to students; and
7. pulls a small group, conducts an individual conference, or changes grouping structure (Vaughn, Parsons, Burrowbridge, Weesner, & Taylor, 2016, p. 261).

All these adaptive teacher behaviours occur in response to the stimuli of student learning, motivation and behaviours (Parsons et al., 2017).

This study is informed by the concept of triadic reciprocal causation from social cognitive theory, which describes the reciprocal interaction between a person and their environment through their cognition, affect and behaviours (Bandura, 1997). The value of this theory for this study lies in its recognition of the interaction of multiple determinants of adaptive teaching that are expressed in a teacher’s motivation, disposition and behaviours. In this study, a teacher’s sense of their adaptability and self-efficacy are the personal determinants, whilst their perception of autonomy support is an environmental determinant. Teacher adaptive practices are the potential behavioural determinants of adaptive teaching in classrooms characterised by change, novelty and uncertainty. This model proposes that a teacher who is adaptable and feels a sense of autonomy and self-efficacy at school will be adaptive in the classroom, creating the possibility for critical and creative thinking in their students.

The research questions for this study are:

1. Does teacher self-efficacy predict teacher adaptive practices?
2. Does perceived autonomy support predict teacher adaptive practices?
3. Does teacher adaptability predict teacher adaptive practices?
4. What teacher covariates predict these four constructs?

The research hypotheses are:

• Teacher self-efficacy will predict teacher adaptability;
• Teacher self-efficacy, perceived autonomy support and teacher adaptability will predict teacher adaptive practices as either a global scale, as sub-scales of items or as individual items; and
• TAP items will cluster in sub-scales based on formative assessment (items 1–4, 14), creativity (items 5–8, 11), and teacher synthesis of student ideas (items 9–10) (see Appendix 4 for items).

Methodology

This study adopted an integrated argument approach to the validation of a teacher adaptive practice scale. The methods focused on gathering validation evidence on the nascent 14-item teacher adaptive practice scale. This study collected validation evidence on the test content, the observation protocols, their intended and unintended consequences, the internal structure of the scale and the relationship of the data with other outcomes of interest (AERA, APA, & NCME, 2014). The integrated argument approach to validity focuses on the inferences that can be made from an instrument rather than on warrants for claims that the instrument possesses construct, content or other types of validity. The test standards (2014) refer to validity evidence rather than types of validity. This conception of validity draws attention to the quality of the evidence produced: “a few lines of solid evidence regarding a particular proposition are better than numerous lines of evidence of questionable quality” (AERA, 1999, p. 11). The proposition for the teacher adaptive practice scale is that teachers who score highly on the scale can respond and adapt to the immediate learning needs of the students in their class, creating the necessary conditions for critical and creative thinking.

Methods

This section details the sampling, procedures, measures and data analysis used in the administration of this research study. These methods are a critical part of the validation evidence reported in this study.

Sample and Procedures

The sample for this study was 278 classroom observations of 71 individual teachers (see Table 4.1). The sample is approximately representative of the Australian teaching workforce (workforce percentages, shown in brackets in Table 4.1, are sourced from Willett, Segal, & Walford, 2014). The exception is teacher experience, where this study was skewed towards less experienced teachers (experience percentages in brackets in Table 4.1 are sourced from McKenzie, Weldon, Rowley, Murphy, & McMillan, 2014). This was not considered a threat to the validity of the study, as the covariate of teacher experience has played only a minor role in previous research on teacher adaptability (Collie & Martin, 2017).

Table 4.1 Sample characteristics (n = 71)


Sector: Government 60.8 (76%); Catholic 28.1 (14.8%); Independent 11.2 (9.5%)
Gender: Male 26.6 (25.9%); Female 72.4 (74.1%)
Qualification: Bachelor 63.4 (84%); Master 31.9 (10%); Ph.D. 4.7
Teaching experience: TES 1.5; 1–5 years 34.2 (17.5%); 6–10 years 28.7 (18.3%); 15 years plus 12.1 (50.1%)
School type: Boys 20.8; Girls 17.1; Coed 62.1

Ethics approval was received from the University of New South Wales Human Ethics Advisory Panel. Approval was also received from the principals of each participating school, and informed consent was obtained from each teacher in the sample. The informed consent process involved a request to school principals to approve and recommend the observation of expert teachers in their school. We did not specify the criteria for the rating of expert because (1) we did not wish to place onerous conditions on busy people; (2) we had sufficient faith in the ability of school principals to identify expertise in their schools; and (3) we acknowledge that the identification of an expert teacher is contested (Berliner, 1986; Sorensen, 2016). The principals, being the busy executive managers that they are, delegated the responsibility of identifying expert teachers to heads of department or their equivalent. The identification of the expert teacher was therefore ultimately undertaken by a delegate of the principal much closer to the classroom than the head office. This delegate commonly asked two questions: “Who do you want to see? What do you want to see?” To which we replied that we would like to observe the teacher whom they regarded as an expert teacher in their subject.

The protocols for the administration of the instruments firstly involved a meeting with the teacher who was to be observed. In this meeting, information about the study was presented in both written and oral forms and informed consent was sought from the participant. The teacher questionnaire was administered once this consent was granted. The teacher was then observed for a whole lesson or period that varied from 45 to 80 min in duration. Each observation cycle in the lesson comprised 15 min of description followed by a five-minute coding/analysis interval. This 15–5 description/analysis cycle is a variation on the 15–10 cycle used in the CLASS-S scale protocols (Pianta, Hamre, & Mintz, 2012).


Measures

Teachers’ Perceived Autonomy Support

This study made a slight modification to Klassen, Perry, and Frenzel’s (2012) adapted version of the six-item short form of the Work Climate Questionnaire (Baard, Deci, & Ryan, 2004) to measure teachers’ perceptions of their mentor’s autonomy support (e.g., “my mentor encourages me to ask questions”). The word mentor was used instead of principal because, in large secondary schools with staff numbers of more than one hundred, a teacher may rarely be in contact with their principal. A researcher was present when the teacher completed the questionnaire and was able to answer queries such as “I don’t have a mentor”; in that case, the researcher would suggest that the teacher respond with reference to their faculty head teacher instead. The items were scored from 1 (strongly disagree) to 7 (strongly agree). There is published evidence of the reliability and validity of this scale with teachers (Collie, Shapka, Perry, & Martin, 2016; Klassen et al., 2012).

Teachers’ Sense of Efficacy

A modified six-item scale was used to measure teachers’ sense of self-efficacy (e.g., “how confident are you that you can calm a student who is disruptive or noisy?”). The shorter six-item scale was used to lessen the response burden on participants, particularly since the questionnaire preceded a one-hour observation of their classroom teaching. The items were scored from 1 (not at all) to 9 (extremely). This instrument has been shown to have adequate reliability and validity in previous studies (Durksen, Klassen, & Daniels, 2017; Klassen et al., 2009).

Teachers’ Sense of Adaptability

The nine-item teacher adaptability scale was used to measure a teacher’s sense of cognitive adaptability (e.g., “I am able to adjust my thinking or expectations in the classroom to assist me in a new situation if necessary”), behavioural adaptability (“to assist me in a new situation that arises in the classroom, I am able to change the way I do things if necessary”) and affective adaptability (“when uncertainty arises in the classroom, I am able to minimise frustration or irritation so I can deal with it best”). The items were scored from 1 (strongly disagree) to 7 (strongly agree). The scale was treated as a single factor in the analyses, consistent with published psychometric evidence (Martin, Nejad, Colmar, & Liem, 2012, 2013). Other studies with different populations have reported evidence of the reliability and validity of the scale (Martin et al., 2012, 2013; Martin, Nejad, Colmar, Liem, & Collie, 2015).


Teacher Adaptive Practices

The classroom observation protocol for this study was modelled on the valid and reliable CLASS-S protocols (Pianta et al., 2012). A 14-item teacher adaptive practice scale was used to code 15-min segments of teachers’ classroom practice (Loughland & Vlies, 2016). Descriptive notes were taken throughout the 15-min observations, and coding occurred in the five minutes immediately following each observation. Each item was scored from 1 (low) to 5 (high) using the observation guide (see Appendix 4). The author coded 81.3% of the data whilst trained research assistants coded the remaining 18.7%. Inter-rater reliability was supported through careful training of the other raters, whereby the main researcher worked alongside the novices on their first day and modelled the protocols for them.

Empirical evidence for the TAP scale was established using exploratory factor analysis. This was to ensure that the indicators were contributing to the measurement of TAP. Data cleaning for possible outliers was conducted prior to determining the factorability of the items.
Three criteria supported the factorability of TAP: (1) positive correlations (0.30–0.83; p < 0.001) were observed among the 15 items; (2) the Kaiser–Meyer–Olkin (KMO) measure of sampling adequacy (0.86) was above the threshold value of 0.6, and Bartlett’s test of sphericity was significant (χ2 = 3897.84; df = 120; p < 0.001); and (3) the communalities ranged from 0.54 to 0.93, all above 0.30. Exploratory factor analysis using maximum likelihood estimation and direct oblimin rotation initially extracted three factors. Cumulative eigenvalues indicated that the three-factor model accounted for 68.04% of the variance observed in the data set. However, closer examination of the results revealed 12 items with cross-loadings on other factors greater than 0.30. Each of these items was deleted in turn, and the EFA was re-run after each deletion. In the final EFA, only one factor was extracted, with three items with factor loadings ranging from 0.75 to 0.85. This one-factor model was used in the subsequent analysis.
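The two factorability checks named above, Bartlett’s test of sphericity and the KMO measure, can be computed directly from a data matrix using their standard formulas. The sketch below runs them on simulated ratings with NumPy/SciPy; the function names and the simulated data are illustrative, not taken from the study’s analysis.

```python
import numpy as np
from scipy import stats

def bartlett_sphericity(data):
    """Chi-square test that the correlation matrix is an identity matrix."""
    n, p = data.shape
    r = np.corrcoef(data, rowvar=False)
    chi2 = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(r))
    df = p * (p - 1) / 2
    return chi2, df, stats.chi2.sf(chi2, df)

def kmo(data):
    """Kaiser-Meyer-Olkin sampling adequacy; values above 0.6 are acceptable."""
    r = np.corrcoef(data, rowvar=False)
    inv = np.linalg.inv(r)
    d = np.sqrt(np.outer(np.diag(inv), np.diag(inv)))
    partial = -inv / d              # partial correlations from the inverse matrix
    np.fill_diagonal(partial, 0.0)  # sums below run over off-diagonal cells only
    np.fill_diagonal(r, 0.0)
    return (r ** 2).sum() / ((r ** 2).sum() + (partial ** 2).sum())

rng = np.random.default_rng(0)
latent = rng.normal(size=(278, 1))                       # one common factor
items = latent + rng.normal(scale=0.8, size=(278, 15))   # 15 noisy indicators
chi2, df, p_value = bartlett_sphericity(items)
print(f"Bartlett chi2 = {chi2:.1f}, df = {df:.0f}, p = {p_value:.4f}")
print(f"KMO = {kmo(items):.2f}")
```

With a strong common factor in the data, Bartlett’s test rejects sphericity and the KMO value comfortably exceeds the 0.6 threshold, mirroring the pattern reported for the TAP items.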

Covariates

Five teacher covariates were examined: school sector (independent, Catholic or government), gender, educational qualification, experience and school type (boys, girls


or coeducational). School sector codes were 1 for government, 2 for Catholic and 3 for independent. Gender was coded as 1 for female and 2 for male. Educational qualification was coded as 1 for bachelor’s degree, 2 for post-graduate degree and 3 for PhD. Experience was coded as 1 for pre-service teacher, 2 for 1–5 years, 3 for 6–10 years, 4 for 11–15 years and 5 for 15 plus years. The codes for school type were 1 for boys, 2 for girls and 3 for coeducational.
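The coding scheme above amounts to a simple lookup from categorical attributes to integer codes. A minimal sketch in Python follows; the dictionary and function names are illustrative, not taken from the study’s own scripts.

```python
# Covariate codes as described in the text (names are illustrative).
CODEBOOK = {
    "sector": {"government": 1, "catholic": 2, "independent": 3},
    "gender": {"female": 1, "male": 2},
    "qualification": {"bachelor": 1, "postgraduate": 2, "phd": 3},
    "experience": {"pre-service": 1, "1-5 years": 2, "6-10 years": 3,
                   "11-15 years": 4, "15+ years": 5},
    "school_type": {"boys": 1, "girls": 2, "coeducational": 3},
}

def encode(teacher):
    """Map a teacher's categorical attributes to their numeric codes."""
    return {var: CODEBOOK[var][value] for var, value in teacher.items()}

print(encode({"sector": "catholic", "gender": "female", "experience": "6-10 years"}))
# -> {'sector': 2, 'gender': 1, 'experience': 3}
```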

Data Analysis

A confirmatory factor analysis (CFA) was performed on the four constructs. Full information maximum likelihood was used for missing data imputation. Fit indices were used to determine the best-fitting model, including a non-significant chi-square value or a χ2/df ratio of less than 3:1 (Kline, 2010), comparative fit index (CFI) and Tucker–Lewis index (TLI) greater than 0.90, and root mean square error of approximation (RMSEA) and standardised root mean square residual (SRMR) below 0.05. Items 9–14 did not load significantly on the model and were excluded from the final analysis. The exclusion of these items addresses the limitations between parameter estimates and sample size. For TA, only six of the nine original items were included, with Cronbach’s alpha = 1, which is consistent with reported values in other studies (Collie & Martin, 2017); for TSE, only three items loaded significantly, with Cronbach’s alpha = 0.75, an adequate level given the small number of indicators; for PAS, four items were included, with Cronbach’s alpha = 0.96, slightly higher than previously reported levels (Klassen et al., 2012); and for TAP, three items loaded significantly, with Cronbach’s alpha = 0.86. Structural equation modelling (SEM) was conducted to test the model that TAP can be predicted by TA, and to see whether TA is associated with TSE and PAS. The indirect effects of TSE and PAS on TAP via TA were tested using a nonparametric bootstrapping technique to explore whether TA links PAS and TSE to TAP. Further, the five teacher covariates were included in the analysis as predictors of the four constructs.
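The Cronbach’s alpha values quoted for each construct have a simple closed form: alpha = k/(k-1) × (1 − Σ item variances / total-score variance). A minimal sketch on simulated data (not the study’s) makes the computation concrete.

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, k_items) array of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_variance = items.sum(axis=1).var(ddof=1)    # variance of the total score
    return (k / (k - 1)) * (1 - item_variances / total_variance)

rng = np.random.default_rng(1)
latent = rng.normal(size=(71, 1))                     # one trait per teacher
scale = latent + rng.normal(scale=0.6, size=(71, 4))  # four noisy items
print(f"alpha = {cronbach_alpha(scale):.2f}")
```

Items sharing a strong common trait, as simulated here, yield an alpha in the high range comparable to the values reported for TA, PAS and TAP.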

Results

Preliminary analyses revealed that the four constructs had adequate levels of Cronbach’s alpha, ranging from 0.75 to 0.96, with factor loadings between 0.67 and 0.98 (see Fig. 4.2). The latent correlations in Table 4.2 show that teacher adaptive practices are associated with TA and TSE. Also, gender and school type are associated with TAP. TA is associated with PAS, TSE and teaching experience. TSE is negatively associated with educational qualification. Lastly, PAS is negatively associated with both gender and sector.


Fig. 4.1 Proposed model under examination: perceived autonomy support, teacher self-efficacy and teacher adaptability as predictors of teacher adaptive practices and, through them, student creative and critical thinking

Table 4.2 Latent correlations among teacher covariates (gender, educational qualification, teaching experience, sector, school type) and constructs (PAS, TSE, TA, TAP). *p < 0.05, **p < 0.01, ***p < 0.001

Multilevel Modelling with Teacher Constructs

The results of the SEM are summarised in Table 4.3 and shown in Fig. 4.2. The model fit is supported by a chi-square ratio of 1.85, RMSEA = 0.11 (CI 0.10–0.12), SRMR = 0.04, CFI = 0.95 and TLI = 0.94. With the exception of the RMSEA, which exceeds the 0.05 criterion stated earlier, these indices are within the recommended threshold values for a good-fitting model (Tabachnick & Fidell, 2007).
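The two chi-square-based statistics reported here follow standard formulas: the ratio is simply χ2/df, and one common formulation of RMSEA is sqrt(max(χ2 − df, 0) / (df × (n − 1))). The sketch below computes both with made-up inputs; the numbers are purely illustrative and are not values recomputed from this study.

```python
import math

def chi_square_ratio(chi2, df):
    """Ratios below 3:1 are conventionally taken as acceptable fit."""
    return chi2 / df

def rmsea(chi2, df, n):
    """Root mean square error of approximation (a common formulation)."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

chi2, df, n = 150.0, 60, 200   # illustrative values only
print(f"chi2/df = {chi_square_ratio(chi2, df):.2f}")
print(f"RMSEA   = {rmsea(chi2, df, n):.3f}")
```

Note that when χ2 is no larger than df, the RMSEA is defined as zero, which is why the formula clamps the numerator.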


Table 4.3 Standardised beta coefficients from structural equation modelling

Teacher covariates (columns: PAS, TSE, TA, TAP)
Gender (F/M): −0.18*, −0.07, −0.22**, −0.15*
Educational qualification: 0.11, −0.28**, 0.04, −0.02
Teaching experience: 0.01, 0.23**, 0.18*, −0.09
Sector: −0.16*, 0.03, 0.18*, −0.05
School type: 0.09, 0.00, −0.02, 0.21**

Teacher variables
PAS → TA: 0.41**; TSE → TA: 0.40**; TA → TAP: 0.30*; the remaining structural paths (−0.03, −0.05, 0.08) were small and nonsignificant

PAS = perceived autonomy support; TSE = teacher self-efficacy; TA = teacher adaptability; TAP = teacher adaptive practices. *p < 0.05, **p < 0.001

Fig. 4.2 Structural equation model (standardised factor loadings ranged from 0.67 to 0.98)

The standardised beta paths show that causal relationships exist between PAS and TA (β = 0.41, p < 0.001) and between TSE and TA (β = 0.40, p < 0.001). In addition, there is a causal relationship between TA and TAP (β = 0.30, p < 0.05). Three items in the TAP scale are predicted by TA:

• TAP Item 5: The teacher prompted students to discover key concepts through responsive open-ended questions
• TAP Item 6: The teacher prompted students to express their thinking and used this as a springboard for learning activities
• TAP Item 8: The teacher prompted students to demonstrate open-mindedness and tolerance of uncertainty.

Table 4.4 Results of correlation analysis of TAP clusters

        TAP A    TAP B
TAP B   0.39*
TAP C   0.50*    0.60*

*Significant at 0.001

There were no positive associations observed in the data for the alternative model that hypothesised indirect effects of PAS and TSE on TAP. In terms of the ability of covariates to predict the constructs, males have lower PAS (β = −0.18, p = 0.004), TA (β = −0.22, p < 0.001) and TAP (β = −0.15, p = 0.03). Those with a Ph.D. have lower TSE (β = −0.28, p < 0.001). Teachers with longer teaching experience have higher TSE (β = 0.23, p = 0.001). Teachers working in independent schools have lower PAS (β = −0.16, p = 0.009) but higher TA (β = 0.18, p = 0.005). Lastly, teachers in coeducational schools have higher TAP (β = 0.21, p = 0.003).

Exploratory and confirmatory factor analyses were conducted on the 14-item scale, and three clusters were identified:

• TAP A—items 5, 6 and 8
• TAP B—items 2 and 3
• TAP C—items 1 and 4

A correlation analysis was conducted to further examine hypothesis three of the study, that sub-scales of the 14-item TAP scale would be identified. The results of this analysis are in Table 4.4. There is a strong correlation between the TAP B and TAP C factors, which both refer to formative assessment practices. TAP A has a moderate correlation with both TAP B and TAP C.
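The sub-scale correlation analysis reported in Table 4.4 amounts to computing Pearson product-moment correlations between teachers' scores on each item cluster. A minimal sketch follows; the per-teacher scores are invented for illustration, and only the grouping of items into TAP A, B and C comes from the factor analysis above:

```python
import math

# Sketch: Pearson correlations among TAP sub-scale scores.
# The scores below are invented; only the item groupings
# (TAP A = items 5, 6, 8; TAP B = items 2, 3; TAP C = items 1, 4)
# come from the study.

def pearson(x, y):
    """Pearson product-moment correlation between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical per-teacher sub-scale scores (mean of the items in each cluster)
tap_a = [2.0, 3.5, 1.5, 4.0, 2.5]
tap_b = [1.0, 3.0, 2.0, 3.5, 2.0]
tap_c = [1.5, 3.0, 1.0, 4.0, 3.0]

print(round(pearson(tap_b, tap_c), 2))
print(round(pearson(tap_a, tap_b), 2))
```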

Discussion

The aim of the current study was to measure the relationships between the personal, environmental and behavioural factors that were hypothesised to lead to adaptive teaching. The results revealed that teachers' perceived autonomy support and self-efficacy did not predict teacher adaptive practice behaviours as they were defined in this study. However, teacher adaptability did predict a sub-scale of teaching behaviours that promote student critical and creative thinking.


Links Between Teacher Adaptive Practices and Perceived Autonomy Support, Teacher Self-efficacy and Teacher Adaptability

This study confirmed the null hypothesis that perceived autonomy support and teacher self-efficacy do not predict teacher adaptive practices, and confirmed the hypothesis that teacher adaptability predicts teacher adaptive practices, albeit only a smaller sub-scale of them. This study did find evidence of a causal link between perceived autonomy support and teacher adaptability. This confirms the finding from previous research that a teacher's perception that they have the support of their supervisor contributes to their sense of adaptability (Collie & Martin, 2017). A new finding from this study is that teacher self-efficacy predicts teacher adaptability. This finding adds to the positive association of teacher adaptability with other personal characteristics associated with positive outcomes for teachers, such as wellbeing and organisational commitment (Collie & Martin, 2017). Teacher self-efficacy, however, is another self-report measure, so it does not add the type of external evidence, such as the student achievement in numeracy reported by Collie and Martin (2017), that is required to make claims about a relationship to outcomes of interest such as teacher effectiveness (Klassen & Tze, 2014). The most significant finding from this study concerns hypothesis two, where teacher adaptability predicts a smaller sub-scale of adaptive practices that have the potential to promote student critical and creative thinking. The original hypothesis stated that items 5–8 and 11 would form a factor, but only items 5, 6 and 8 worked as a sub-scale.
Student critical and creative thinking is a highly prized, albeit elusive, goal of education systems throughout the world, as evidenced by OECD-sponsored work in this area (Lucas, Claxton, & Spencer, 2013), the targets of leading education systems in East and South Asia (Zhao, 2012) and in the state of Victoria in Australia (Victoria State Government, 2018). Another sub-scale was hypothesised that focused on formative assessment through items 1–4 and 14. The results showed two factors on formative assessment: TAP B includes items 2 and 3 and demonstrates a strong correlation with TAP C, which includes items 1 and 4. Item 14, "the teacher creates groups of students based on formative assessment", was so rarely observed in practice that it is not surprising that it is not included here. Finally, the null hypothesis was confirmed for a sub-scale on teacher synthesis comprising items 9 and 10.

Covariate Effects

The only significant effects of the covariates in this study were gender and school type. It is fortunate that females were more likely to demonstrate adaptive practice, as they comprise 75% of the teaching workforce in Australia. The finding that teachers from independent schools are more adaptive must be treated with caution due to the small number of schools in this study. This finding warrants further investigation in future studies.

Limitations and Future Directions

The limitations of this study relate to the challenges of using classroom observation as a method of generating evidence that links teacher adaptability to adaptive teaching. The challenges of establishing adequate reliability with classroom observation measures are well known (Harris, 2012; MET Project, 2013), but there are also issues of validity and cost-effectiveness. This study employed the nascent teacher adaptive practice instrument that had peer-reviewed evidence of content validity (Loughland & Vlies, 2016), but the evidence in this study does not support the scale as a valid measure of adaptive teaching, apart from a small sub-scale that has the potential to promote student critical and creative thinking. It may be more valid and cost-effective to measure student achievement than teaching behaviours if the goal of a research programme is to evaluate teacher effectiveness. The validity gap between the external measures of evaluative teaching frameworks and student achievement has been recognised by Klassen and Tze (2014). Indeed, the extensive Measures of Effective Teaching research programme in the USA concluded that teacher effectiveness should be measured through a combination of student achievement, teacher observation and student evaluation of their teachers (Kane & Staiger, 2012). However, the goal of this research study was to generate a teacher improvement measure (Bryk, Gomez, Grunow, & LeMahieu, 2015) for adaptive teaching. The potential contribution of the measure to teacher professional learning offsets the time cost of conducting the required classroom observation. Both the FFT (Danielson, 2012) and the CLASS instruments (Pianta & Hamre, 2009) have taken this route, as they are both linked to commercial professional learning programmes (Curry School of Education, University of Virginia, 2018; Danielson, 2013).
The challenges and resources involved in classroom observation research mean that this route should only be taken if the purpose is to produce a valid and reliable classroom observation instrument that can also be used for teacher professional learning. For example, a future direction for this research may be to develop an observation scale for use in teacher professional learning on student critical and creative thinking that builds upon the promising findings of this study. If future research is to examine the link between teacher adaptability and outcomes of interest, then the most valid and expedient route might be to measure student critical and creative thinking as an external measure of adaptive teaching. There is a growing consensus that student critical and creative thinking is a valid and desirable outcome of schooling systems (Victoria State Government, 2018; Zhao, 2012), and there is a growing literature on how it might be measured (Lucas et al., 2013). Therefore, one future direction would be to examine the link between
the self-report teacher measures of PAS, TSE and TA with an external measure of student critical and creative thinking. The personal and environmental determinants in the model of adaptive teaching proposed in this study had more significant relationships with each other than with the behavioural determinant of teacher adaptive practice. This finding requires a rethink for future research in adaptive teaching. This may involve using different sources of external measures such as student achievement or student rating data. It may also involve refining the teacher adaptive practice scale and undertaking another round of validation evidence generation.

Conclusion

The aim of the current study was to measure the links between the personal determinants of adaptive teaching (teacher adaptability and teacher self-efficacy) and the environmental determinant of perceived autonomy support, together with the proposed behavioural determinant of teacher adaptive practice. The findings provide evidence that teacher adaptability is the only one of the three self-report constructs that predicts a smaller sub-scale of practices with the potential to promote student critical and creative thinking. This is an important finding, as it gives impetus to building alternative models of adaptive teaching that have student critical and creative thinking as their key objective. Finally, the relationships among the two personal constructs and the environmental construct are causal, which provides a solid foundation from which to test further models of adaptive teaching that include measures of teacher behaviour.

Appendix 1: Teacher Adaptive Practices Coding Guide

1. The teacher modifies learning goals in response to formative assessment
   Low: Teacher did not undertake any formative assessment
   High: Teacher checks for student understanding and makes changes to the lesson in response

2. The teacher modifies their instructions during the lesson to increase learning opportunities
   Low: Instructions given once and in one modality to the whole class
   High: The teacher did an impromptu demonstration to a small group using the classroom globe in response to student questions about international time zones

3. The teacher uses formative assessment to differentiate their responses to individual students
   Low: The teacher asks students to move to the true or false side of the room but does not follow up with why questions
   High: Teacher sets a Do Now task at the beginning of the lesson, helps students with the task and asks questions about the task when all students have attempted it

4. The teacher negotiates learning activities with students, ensuring these are aligned with learning goals
   Low: All students completed the same activity at the same time
   High: The teacher used students' misconceptions as a guide to the learning activity that was chosen

5. The teacher prompted students to discover key concepts through responsive open-ended questions
   Low: Teacher used shallow questions that did not require deep conceptual responses from the students
   High: "Why is it expensive to make things in Australia?" "How has technology changed religion?" "In which direction does the water flow into the drain in the Northern and Southern Hemisphere?"

6. The teacher prompted students to express their thinking and used this as a springboard for learning activities
   Low: The teacher used "guess what is in my head" questions; "It starts with…?"
   High: The teacher asked the students to annotate their notes with an "E" if they required more evidence

7. The teacher uses a thinking routine to prompt deeper exploration of concepts or skills
   Low: "The steps I would like you to take are decode, position, read the poem, write your response"
   High: Teacher used a "See, Think, Wonder" routine to prompt students to think metaphorically about a concept

8. The teacher prompted students to demonstrate open-mindedness and tolerance of uncertainty
   Low: Teacher answered big science questions directly instead of asking students why
   High: The teacher explored the different definitions of a concept across different sources to demonstrate its contested and uncertain nature

9. The teacher provided a synthesis of class-generated ideas
   Low: Teacher uses initiate, response, evaluate with individual student answers
   High: "I feel if we joined these last three responses, we should have a good answer on identity"

10. The teacher links, when appropriate, lesson concepts to larger disciplinary ideas
    Low: Teacher talk focused on the execution of the learning activity rather than the underlying big idea
    High: The teacher linked the preservation of vegetables by bottling to the underlying chemical processes

11. The teacher provided analogies and metaphors to increase learning opportunities
    Low: Teacher does not use analogy and metaphor when the opportunity arises
    High: The teacher used an image of a waterfall to assist student understanding of the life cycle of a business; the teacher role-played a character in the text to expand understanding

12. The teacher demonstrated flexible pacing of the lesson in response to student learning needs
    Low: Teacher adheres to their script without checking in with students to see if they understood the concept
    High: The duration of each learning activity is contingent on student understanding

13. The teacher demonstrated responsive use of literacy/numeracy interventions
    Low: No dynamic literacy/numeracy interventions evident
    High: Teacher identified the word "essential" as expressing high modality; teacher used a think-aloud process to identify story retelling in literary analysis as a practice to be avoided

14. The teacher creates groups of students based on formative assessment
    Low: Students not grouped or in previously assigned table groups
    High: Students moved into groups based on a self-rating of their knowledge
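Where the coding guide is used for research rather than professional learning, per-indicator ratings can be aggregated into the three sub-scales identified in this chapter. A minimal sketch follows; the 0–3 rating range, the example lesson and the mean-based scoring are illustrative assumptions, and only the item-to-cluster mapping (TAP A = items 5, 6, 8; TAP B = items 2, 3; TAP C = items 1, 4) comes from the study:

```python
# Sketch: aggregate per-indicator observation ratings into TAP sub-scales.
# Item-to-cluster mapping follows the factor analysis in this chapter:
#   TAP A (critical and creative thinking): items 5, 6, 8
#   TAP B (formative assessment): items 2, 3
#   TAP C (formative assessment): items 1, 4
# The 0-3 rating scale (low to high evidence) is illustrative only.

CLUSTERS = {"TAP A": (5, 6, 8), "TAP B": (2, 3), "TAP C": (1, 4)}

def subscale_means(ratings):
    """ratings: dict mapping indicator number (1-14) to a numeric rating.
    Returns the mean rating for each TAP sub-scale."""
    return {
        name: sum(ratings[i] for i in items) / len(items)
        for name, items in CLUSTERS.items()
    }

# One hypothetical observed lesson, rated on all 14 indicators
lesson = dict(zip(range(1, 15), [2, 3, 1, 2, 3, 2, 1, 3, 0, 1, 2, 2, 1, 0]))
print(subscale_means(lesson))
```

Mean aggregation is one defensible choice among several; a sum or a weighted score would serve equally well, provided the same rule is applied across observations.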

References

AERA, APA, & NCME. (1999). Standards for educational and psychological testing. Washington, DC: AERA.
AERA, APA, & NCME. (2014). Standards for educational and psychological testing. Washington, DC: AERA.
Baard, P. P., Deci, E. L., & Ryan, R. M. (2004). Intrinsic need satisfaction: A motivational basis of performance and well-being in two work settings. Journal of Applied Social Psychology, 34(10), 2045–2068. https://doi.org/10.1111/j.1559-1816.2004.tb02690.x
Bandura, A. (1997). Self-efficacy: The exercise of control. New York: Freeman.
Berliner, D. C. (1986). In pursuit of the expert pedagogue. Educational Researcher, 15(7), 5–13.
Bryk, A. S., Gomez, L. M., Grunow, A., & LeMahieu, P. G. (2015). Learning to improve: How America's schools can get better at getting better. Cambridge, MA: Harvard Education Press.
Collie, R. J., & Martin, A. J. (2016). Adaptability: An important capacity for effective teachers. Educational Practice and Theory, 38(1), 27–39.
Collie, R. J., & Martin, A. J. (2017). Teachers' sense of adaptability: Examining links with perceived autonomy support, teachers' psychological functioning, and students' numeracy achievement. Learning and Individual Differences, 55, 29–39. https://doi.org/10.1016/j.lindif.2017.03.003
Collie, R. J., Shapka, J. D., Perry, N. E., & Martin, A. J. (2016). Teachers' psychological functioning in the workplace: Exploring the roles of contextual beliefs, need satisfaction, and personal characteristics. Journal of Educational Psychology, 108(6), 788–799. https://doi.org/10.1037/edu0000088
Curry School of Education, University of Virginia. (2018). My teaching partner. Retrieved from https://curry.virginia.edu/myteachingpartner
Danielson, C. (2012). Observing classroom practice. Educational Leadership, 70(3), 32–37.
Danielson, C. (2013). The framework for teacher evaluation instrument (2013 ed.). Princeton, NJ: The Danielson Group.
Durksen, T. L., Klassen, R. M., & Daniels, L. M. (2017). Motivation and collaboration: The keys to a developmental framework for teachers' professional learning. Teaching and Teacher Education, 67, 53–66. https://doi.org/10.1016/j.tate.2017.05.011
Gitomer, D. H., & Bell, C. A. (2016). Introduction. In D. H. Gitomer & C. A. Bell (Eds.), Handbook of research on teaching (5th ed.). Washington, DC: AERA.
Harris, D. N. (2012). How do value-added indicators compare to other measures of teacher effectiveness? Carnegie Knowledge Network Brief, (5).
Kane, T. J., & Staiger, D. O. (2012). Gathering feedback for teaching: Combining high-quality observations with student surveys and achievement gains. Seattle, WA. Retrieved from http://eric.ed.gov/?id=ED540960
Klassen, R. M., Bong, M., Usher, E. L., Chong, W. H., Huan, V. S., Wong, I. Y. F., et al. (2009). Exploring the validity of a teachers' self-efficacy scale in five countries. Contemporary Educational Psychology, 34(1), 67–76. https://doi.org/10.1016/j.cedpsych.2008.08.001
Klassen, R. M., Perry, N. E., & Frenzel, A. C. (2012). Teachers' relatedness with students: An underemphasized component of teachers' basic psychological needs. Journal of Educational Psychology, 104(1), 150–165. https://doi.org/10.1037/a0026253
Klassen, R. M., & Tze, V. M. C. (2014). Teachers' self-efficacy, personality, and teaching effectiveness: A meta-analysis. Educational Research Review, 12, 59–76. https://doi.org/10.1016/j.edurev.2014.06.001
Kline, R. B. (2010). Principles and practices of structural equation modeling (3rd ed.). New York, NY: Guilford Press.
Loughland, T., & Vlies, P. (2016). The validation of a classroom observation instrument based on the construct of teacher adaptive practice. The Educational and Developmental Psychologist, 33(2), 163–177. https://doi.org/10.1017/edp.2016.18
Lucas, B., Claxton, G., & Spencer, E. (2013). Progression in student creativity in schools: First steps towards new forms of formative assessments. Paris: OECD Publishing. https://doi.org/10.1787/5k4dp59msdwk-en
Martin, A. J., Nejad, H., Colmar, S., & Liem, G. A. D. (2012). Adaptability: Conceptual and empirical perspectives on responses to change, novelty and uncertainty. Journal of Psychologists and Counsellors in Schools, 22(1), 58–81. https://doi.org/10.1017/jgc.2012.8
Martin, A. J., Nejad, H., Colmar, S., Liem, G. A. D., & Collie, R. J. (2015). The role of adaptability in promoting control and reducing failure dynamics: A mediation model. Learning and Individual Differences, 38, 36–43. https://doi.org/10.1016/j.lindif.2015.02.004
Martin, A. J., Nejad, H. G., Colmar, S., & Liem, G. A. D. (2013). Adaptability: How students' responses to uncertainty and novelty predict their academic and non-academic outcomes. Journal of Educational Psychology, 105(3), 728–746. https://doi.org/10.1037/a0032794
McKenzie, P., Weldon, P., Rowley, G., Murphy, M., & McMillan, J. (2014). Staff in Australian Schools 2013: Main report on the survey. Retrieved from https://docs.education.gov.au/system/files/doc/other/sias_2013_main_report.pdf
MET Project. (2013). Ensuring fair and reliable measures of effective teaching: Culminating findings from the MET project's three-year study—Policy and practitioner brief. Seattle, WA: Bill & Melinda Gates Foundation.
Parsons, S. A., & Vaughn, M. (2016). Toward adaptability: Where to from here? Theory Into Practice, 55(3), 267–274. https://doi.org/10.1080/00405841.2016.1173998
Parsons, S. A., Vaughn, M., Scales, R. Q., Gallagher, M. A., Parsons, A. W., Davis, S. G., …, Allen, M. (2017). Teachers' instructional adaptations: A research synthesis. Review of Educational Research. https://doi.org/10.3102/0034654317743198
Pianta, R. C., & Hamre, B. K. (2009). Conceptualization, measurement, and improvement of classroom processes: Standardized observation can leverage capacity. Educational Researcher, 38(2), 109–119. https://doi.org/10.3102/0013189x09332374
Pianta, R. C., Hamre, B. K., & Mintz, S. (2012). Classroom assessment scoring system: Secondary manual. Curry School of Education, University of Virginia: Teachstone.
Sorensen, N. (2016). Improvisation and teacher expertise: Implications for the professional development of outstanding teachers. Professional Development in Education, 1–17. https://doi.org/10.1080/19415257.2015.1127854
Tabachnick, B. G., & Fidell, L. S. (2007). Using multivariate statistics. Boston: Pearson Education.
Vaughn, M., Parsons, S. A., Burrowbridge, S. C., Weesner, J., & Taylor, L. (2016). In their own words: Teachers' reflections on adaptability. Theory Into Practice, 259–266. https://doi.org/10.1080/00405841.2016.1173993
Victoria State Government. (2018). Education state ambition: Learning for life. Melbourne: Victoria Government. Retrieved from http://www.education.vic.gov.au/Documents/about/educationstate/EducationState_LearningForLife.pdf
Willett, M., Segal, D., & Walford, W. (2014). National teaching workforce dataset data analysis report 2014. Retrieved from https://docs.education.gov.au/system/files/doc/other/ntwd_data_analysis_report.pdf
Zhao, Y. (2012). World class learners: Educating creative and entrepreneurial students. Corwin Press.

Chapter 5

Teacher Professional Learning Using the Teacher Adaptive Practice Scale

Abstract This chapter proposes a combined classroom observation and learning improvement programme based on the model of adaptive teaching presented in this study. This proposal positions this research programme within the science of learning improvement, which values the development of rigorous yet usable measures that have direct application to the improvement of learning conditions for students in classrooms. It does this through a brief review of the principles of effective professional learning before examining what the emerging field of learning implementation and improvement science might add to these principles. It then applies these principles to three proposals for teacher professional learning that employ the teacher adaptive practice scale as a diagnostic and improvement measure.

Keywords Learning improvement · Adaptive teaching · Classroom observation

Introduction

The teacher adaptive practice scale was conceptualised from the outset as a tool for teacher learning and improvement. This conceptualisation has been strengthened by the author's use of the scale in pilot professional learning sessions with school staff. In these sessions, the teacher adaptive practice scale, along with the observation guide, has been used as an artefact to explore teacher thinking and practices around their use of adaptive teaching. The chapter begins by providing a brief review of the literature on effective professional learning so that the strategies presented in this chapter can be evaluated by the reader against quality criteria. It then moves on to examine the possible contribution of the learning improvement sciences. Finally, it presents professional learning strategies that exemplify the principles set out in this chapter.

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2019 T. Loughland, Teacher Adaptive Practices, SpringerBriefs in Education, https://doi.org/10.1007/978-981-13-6858-5_5



5 Teacher Professional Learning Using the Teacher Adaptive …

Effective Professional Learning

There is not a great deal of published evidence linking teacher professional learning to improved student achievement. A best evidence synthesis of teacher professional learning and development conducted in 2008 found only 97 studies that referred to student outcomes in their findings (Timperley, 2008). However, there is an extensive literature on teacher professional development and learning (Cole, 2012), which may be because the measurement of the impact of professional learning by both researchers and teachers has only become prevalent in the last decade due to more stringent accountability measures in education systems (Harris & Jones, 2017). This brief review of the literature identifies some of the principles of effective professional learning. These principles are the importance of a programme's theory of action to its success, the power of collaboration, the integration of theory and practice and the positive relationship between self-evaluation and self-efficacy. The term professional learning is used here instead of professional development to indicate that one of the most important principles is that it involves active teacher involvement in their own learning rather than being the subject of development (Durksen, Klassen, & Daniels, 2017).

The effectiveness of teacher professional learning is strongly influenced by its theory of action. A theory of action is evident from how a PL programme expects teachers to enact the ideas it promotes (Kennedy, 2016). Kennedy (2016) argues in her review of professional learning research that a programme's theory of action provides more rigorous criteria for evaluating its effectiveness than its design. She exemplified this through reference to two different PL programmes. The first programme, Reading in Science, included all the exemplary design features suggested in the literature.
It was a sustained programme (45 h) with active teacher engagement, use of videos and student work samples. The second programme, CLASS, spent only 15 h with teachers, predominantly online, yet it was more effective when student outcomes were measured a year later (Kennedy, 2016). The CLASS intervention had a stronger theory of action, as it involved teachers closely examining and coding their own practice using a validated instrument. The teachers also had access to an online teacher coach with whom they could moderate their self-assessment. The finding that the theory of action in a PL programme has more impact than its design features is in agreement with Timperley's (2008) argument that effective professional learning is centred in teacher practices and is always focused on student learning. A common principle for effective professional learning is that teachers need to collaborate. This principle is supported by the evidence base on teacher collective efficacy. Teacher collective efficacy has an effect size of 1.57, ranked second out of 195 influences in Hattie's (2015) synthesis of 1200 meta-analyses relating to student achievement. Teacher collective efficacy is also supported by a sound theoretical and empirical base in educational psychology (Durksen et al., 2017; Goddard, Hoy, & Hoy, 2004; Klassen et al., 2008). Kennedy (2016) cites one of the older and more effective studies in her sample as an example of collaboration not only between
teachers but between teachers and the facilitators of the professional learning. This study is a salutary reminder of the importance of giving teachers respect in professional learning programmes rather than treating them as a problem that needs to be fixed:

But another important distinction is that the authors treated the teachers more as colleagues whose role was to help the researchers test this new model of instruction, rather than as teachers whose practices needed improvement. Thus, participation was in part socially motivated. (Kennedy, 2016, p. 973)

The positioning of teachers in professional learning as a problem is symptomatic of the problem of prescription identified by Kennedy and Timperley. PL programmes that prescribe a readymade solution to a generic problem of practice are rarely effective. The problem with prescription is captured in another example from Kennedy:

…coaches in the prescriptive programs used standardized templates to observe and evaluate teachers' practices and to demonstrate recommended practices, whereas coaches in the strategic programs adopted a collaborative, joint problem-solving approach designed to help teachers develop a more strategic approach to their lessons. (Kennedy, 2016, p. 963)

Kennedy’s review provides ample, rigorous evidence that a collaborative, interactive approach was far more effective than the prescriptive strategy. Timperley (2008) described the importance of a context-specific approach to professional learning involving the careful integration of evidence-based approaches to teacher’s current theories of practice. Critical to this application is the teacher’s continuous assessment of students to examine the impact of evidence-based practices. Timperley (2008) claims that evidence of the improved learning of their students is the only evidence that will convince teachers to refine their existing practices. Both the Timperley and Kennedy reviews of the effectiveness of teacher professional learning are silent on the teacher’s role in defining the problem of practice. There is an implication from both reviews that teacher involvement in their professional learning comes after another agent begins the process with an idea or a solution for an already identified problem of practice. Teacher involvement in the first step of problem identification is fundamental to the instructional rounds (City, Elmore, Fiarman, & Teitel, 2011) and disciplined collaboration (Harris & Jones, 2017) models of professional learning although their methods are different. The method in instructional rounds is to conduct focused observation on classroom practice, what they call the instructional core, and then go through a careful process of description, analysis and prediction before framing a question for the next round of professional learning (City et al., 2011). In effect, this close observation and discussion of practice by teachers constitute the professional learning in instructional rounds. In the disciplined collaboration model, there are three phases: implementation, innovation and impact (Harris & Jones, 2017). 
Teachers drive this process and, in the implementation phase, "teachers establish the group; scrutinize student data and through this, identify students' learning needs; they agree a method of enquiry and key impact measures, as the process moves forward" (Harris & Jones, 2017, p. 205). Teacher identification of the method of enquiry and key impact measures
at the implementation phase is critical to the success of the strategy of continuous evaluation in the disciplined collaboration model of teacher professional learning (Harris & Jones, 2017). The interaction of theory and practice is a key principle of professional learning espoused by Timperley (2008). Her stipulation is that the theory used must be rigorous in that it has been tested by experimental methods. The process must also begin with the teacher’s practice, their current theory of action and an assessment of their students’ gaps in achievement. Kennedy (2016) also endorses the strategic use of rigorous theory with a category of professional learning she called “insights”. Insights involve teachers acquiring insights from the literature gathered from professional reading and study groups and applying them to their practice. Kennedy’s review (2016) found that this style of professional learning was as effective as gradually introducing teachers to new teaching strategies and more effective than prescriptive programmes or PL that delivers a body of knowledge to teachers. The one point of disagreement between Kennedy and Timperley is on the use of volunteers or conscripts for professional learning. Kennedy (2016) argues that volunteers will be more motivated to engage in professional learning, so they will produce larger effects on student learning. Timperley (2008) argues that volunteers or conscripts will be motivated by the type of professional learning they experience once they begin the programme and argues that neither volunteers nor conscripts can predict a priori what these conditions will be like. The caveat here is that Kennedy (2016) was more interested in obtaining equal groups for reliable experimental design whilst Timperley (2008) was focused on the efficacy of professional learning design for school contexts. 
Timperley (2008) proposed another principle: professional learning should aim to build a teacher's sense of self-efficacy so that they take responsibility for their students' learning. Timperley (2008) argued that self-efficacy is enhanced when teachers can see the immediate impact of their professional learning on their students' achievement. This is achieved in professional learning when teachers are assisted to use a range of formal and informal assessment strategies that allow them to continuously monitor the impact of their teaching. This continuous evaluation is a feature of the disciplined collaboration model discussed earlier in this section, as well as being the foundation principle of the implementation science that is the focus of the next section of this chapter. In summary, this brief review of teacher professional learning has identified four key principles that underpin its effectiveness: the importance of a programme's theory of action to its success, the power of collaboration, the integration of theory and practice and the positive relationship between self-evaluation and self-efficacy. The next section examines the emerging area of implementation and learning improvement science, which is aligned with how teacher professional learning programmes measure their impact.

Implementation and Learning Improvement Science


Implementation science and its close relative, learning improvement science, do not represent radical innovative approaches to teacher professional learning. Instead, they are the science behind the methodology of continuous evaluation of learning interventions. This science has become increasingly important as measures of impact are now a compulsory component of programme evaluation. These approaches to evaluation are important to this chapter as they add new insights into how developers of teacher professional learning can build continuous measures into their designs. Implementation science in education "involves careful policy choices, the rigorous and relentless embedding of those policies and the ability to continually evaluate, refine, and change" (Harris, Jones, Adams, Perera, & Sharma, 2014, p. 886). This capacity to rigorously evaluate educational innovation has been identified in Hong Kong and Singapore, two of the world's top-performing education systems (Harris et al., 2014). Learning improvement science is sponsored by the Carnegie Foundation and is defined as:

…research carried out through networked communities that seeks to accelerate learning about the complex phenomena that generate unsatisfactory outcomes. This research activity forms around an integrated set of principles, methods, organizational norms, and structures. It constitutes a coherent set of ideas as to how practical inquiries should be thought about and carried out. (Bryk, 2015, p. 474)

The creators of learning improvement science describe their improvement cycle as the methodology that produces the practice-based evidence that provides schools and systems with the "know-how" to address variance in student achievement (Bryk, Gomez, Grunow, & LeMahieu, 2015). The difference between evidence-based practice and practice-based evidence is apparent in their respective uses of effect sizes of student achievement. As a research measure, effect size gives the evidence-based research community a precise estimation of variance. This approach underlies the recent growth in the number of repositories of best evidence practices available to educators (Institute for Education Sciences, 2018; Institute for Effective Education, 2018; New Zealand Government Ministry of Education, 2018). The focus on effect sizes is plainly evident in Hattie's ranking of the influences on student achievement (Visible Learning, 2018). Hattie's work does, however, extend the use of effect sizes to school-based interventions (Hattie, Masters, & Birch, 2016), which changes the effect size from a research measure to a learning improvement measure (Bryk et al., 2015). The commonality between the best evidence repositories, Hattie's visible learning, learning improvement science and implementation science is that their methodologies all involve a mix of evidence-based practice and practice-based evidence. They all advocate that teachers employ strategies that have been proven effective through rigorous randomised control trials, but they recognise that each strategy requires a careful process of implementation to adapt it to its context. This implementation requires different measures from those used for research or accountability. Learning improvement science calls these different measures improvement measures (Bryk et al., 2015) and implementation science calls them pragmatic measures (Albers & Pattuwage, 2017). A comparison of the accountability and research measures with improvement measures is useful for the development of the proposals for teacher professional learning presented in the latter part of the chapter. The different purposes of an instrument will influence its design. Accountability measures will generally be broader as they measure system or institutional objectives, such as the Australian Professional Standards for Teachers (APST) with its seven standards and 37 standard descriptors (AITSL, 2011). These measures are broader in scope and do not provide the specificity that characterises improvement measures. In contrast, the teacher adaptive practice scale can be used as an improvement measure as its 14 items focus on just two standard descriptors of the APST that relate to formative assessment (5.2 and 5.4). The teacher adaptive practice scale can be used to provide a more specific elaboration of these two descriptors so that teachers might use the items to map their current strengths and identify areas for improvement. The distinction made in learning improvement science between research and improvement measures is also very useful for this chapter. Researchers need to expend many resources to design an instrument that satisfies the rigorous psychometric criteria for validation evidence (AERA, APA & NCME, 2014). In contrast, learning improvement science aims for smaller, agile measures that have predictive validity rather than construct validity (Bryk et al., 2015). In implementation science, these are called pragmatic measures, which are developed to "meet the assessment needs of service providers rather than of researchers. Their core characteristic is a high level of feasibility in real world settings" (Albers & Pattuwage, 2017, p. 21).
This feasibility is also reflected in the first two principles of learning improvement science, "wherever possible, learn quickly and cheaply; be minimally intrusive—some changes will fail, and we want to limit negative consequences on individuals' time and personal lives" (Bryk et al., 2015, p. 120). The consideration of feasibility introduces a cost-benefit analysis to teacher professional learning that is rarely acknowledged in the literature but is a crucial factor in education systems where budgets and time schedules are always tight. Learning improvement science also makes a useful distinction between lead and lag measures. A lead measure "predicts the ultimate outcome of interest but is available on a more immediate basis", whereas a lag measure "is available only well after an intervention has been initiated" (Bryk et al., 2015, p. 200). The question of validity is brought into play when prominent international lag measures in education (PISA, TIMSS, PIRLS) and national lag measures (NAPLAN) are sometimes invalidly claimed to be predictive when they are just snapshots of student achievement at a point in time (Sahlberg, 2014). In contrast, schools have an existing strength in the use of predictive lead measures through their ongoing assessment of students. The objective of implementation science is to integrate the ongoing assessment of students with the continuous evaluation of the teacher professional learning interventions happening in the school at the time.


Three Proposed Teacher Professional Learning Models for Teacher Adaptive Practice

Four principles for effective teacher professional learning were explicated in the previous section: the importance of a programme's theory of action to its success, the power of collaboration, the integration of theory and practice and the positive relationship between self-evaluation and self-efficacy. The concept of continuous evaluation from implementation and learning improvement science was then introduced as a relatively new criterion for teacher professional learning. Proposals for three different teacher professional learning models for adaptive teaching are presented in this section. They are categorised by their theories of action and consider the need for collaboration, the integration of theory and practice and the requirement for continuous evaluation measures.

Peer and Self-evaluation of Teacher Adaptive Practice

The theory of action for this teacher professional learning strategy is self-evident from the title. Teachers will self-evaluate and peer-evaluate using the teacher adaptive practice scale. The teacher adaptive practice scale is the continuous evaluation measure, and teachers can collaborate with colleagues during peer evaluation. The integration of theory with practice occurs between the pedagogical theory embodied in the teacher adaptive practice scale and the teacher's own classroom practice, captured in either a live or videotaped classroom lesson. There is a precedent for the use of individual scale items for teacher professional learning, with both CLASS and FFT having their own commercial professional learning programmes that use individual items as prompts for teacher improvement. The use of their instruments in this way has some support in the literature: "A strong rationale for this approach is that the individual items are directly anchored to specific instructional practices, whereas total scores or subscale scores may be more difficult to use for feedback" (Halpin & Kieffer, 2015, p. 263). The same researchers caution, however, that low reliability poses a risk to inferences made from individual items (Halpin & Kieffer, 2015). CLASS and its accompanying professional learning programme, My Teaching Partner (Curry School of Education University of Virginia, 2018), use moderated video analysis as a professional learning device. The teacher adaptive practice scale might be used in a similar fashion for self- and peer-evaluation.

Teacher Adaptive Practice as a Self-evaluation Tool

Teacher self-evaluation of videos of their own teaching is potentially the most unobtrusive learning improvement tool. Smartphones allow the teacher to capture and own the video, so there is no teacher anxiety about any unintended consequences of being filmed in their classroom. The professional learning is also flexible as the teacher can view the video at a time of their choosing. Rater reliability is the biggest challenge, and one solution to this involves the biggest cost. The teacher who self-evaluates their teaching must be able to compare their rating with those of an expert. This expert judgement can come from an asynchronous teacher coach located online, as is the case with the CLASS My Teaching Partner professional learning programme (Curry School of Education University of Virginia, 2018). The expert judgement might also reside within video snapshots of each item scored at low, medium and high. There is a resource cost involved with both ventures. The Framework for Teaching (FFT) offers another, less expensive, option for self-evaluation in the form of extensive scoring directions provided at four levels of performance for each of the 22 components across the four domains of the FFT (Danielson, 2013). Each component also has a list of critical attributes as well as possible classroom examples. These excellent materials are designed to be used as prompts for collegial post-observation conversations, but they could also feasibly be used for self-evaluation by individual teachers. The teacher adaptive practice scale does not have the same level of support materials as the well-established CLASS and FFT, and these would need to be developed as part of the next phase of the study. The starting point for this would be a refinement of the existing teacher adaptive practice classroom observation guide (see Appendix 1). Attributes and examples can be progressively added as more classroom observations are made using the teacher adaptive practice instrument.

Teacher Adaptive Practice as a Peer-Evaluation Tool

Teacher evaluation of peers shares the same low-cost, unobtrusive, unthreatening characteristics as self-evaluation. The use of smartphones to video lessons also opens up the possibility of asynchronous peer feedback without having to synchronise timetables for a live classroom observation. The FFT recommends that observers describe rather than analyse. The simplest model advocated by Danielson for the FFT is that the observer describes what they see. These descriptive notes are evaluated by the teacher using the FFT, and then a discussion happens (Danielson, 2012). This process sounds simple, but it feasibly taps into the power of teacher agency and collaboration, and the theory of the FFT provides the common language by which both the observer and the teacher can discuss practice (Danielson, 2011). The common language of the FFT has much more validation evidence than the teacher adaptive practice scale, but the FFT example at least provides a model for the type of feasible, collaborative teacher professional learning that can occur with classroom observation instruments.


Whole School Focus on Teacher Adaptive Practice

The theory of action for this teacher professional learning strategy is a whole school focus on adaptive teaching. The staff would use the teacher adaptive practice scale as a diagnostic tool to identify areas of practice to work on in collaborative professional learning teams. The continuous evaluation measure is the teacher adaptive practice scale, which also constitutes the integration of theory and practice. Options need to be designed into teacher professional learning to recognise that teachers have existing strengths, and that, like every other learner, they enjoy being able to choose what they wish to learn. Three viable options are outlined here, but they are not intended to be the only options that school learning teams might wish to take with the teacher adaptive practice observation instrument. Option one is to use the teacher adaptive practice scale as a diagnostic tool to explore how comfortable existing staff are with adapting their practice within lessons. The author's own experience of using the scale in this manner has led to the realisation that many teachers may wish to discuss the feasibility of adaptive practices given the imperatives of time and assessments. The author has used a professional learning activity called Choose Your Own Adventure (Lawrence, 2016; see Appendix 2) to facilitate this discussion. Choose Your Own Adventure depersonalises the discussion by putting the teacher in the role of a mentor who is coaching their mentee to look at the options available to them in a typical lesson scenario. There are no easy answers, and it requires both the role-playing mentee and mentor to consider all options and to support their pedagogical decision making with reasoned responses. It creates the stimulus for a rich discussion that hopefully results in teachers feeling affirmed in their current adaptive practices or more confident that they can be adaptive in the future.
The aim in option one is for teachers to continue to explore adaptive teaching whilst collaborating with their teaching partner or professional learning team, sharing ideas and experiences and offering feedback on progress towards goals via the teacher adaptive practice scale. In this way, the teacher adaptive practice scale becomes the continuous evaluation measure for this teacher professional learning option. Option two presents the scenario where the whole school, or one professional learning team within a school, identifies from the original benchmark data collection that they would like to concentrate their professional learning on the following three items from the teacher adaptive practice scale. Items 5, 6 and 8 belong to a subscale of behaviours that focus on the teacher promoting student open-mindedness and critical and creative thinking:

5. The teacher prompted students to discover key concepts through responsive open-ended questions
6. The teacher prompted students to express their thinking and used this as a springboard for learning activities
8. The teacher prompted students to demonstrate open-mindedness and tolerance of uncertainty.


A professional learning strategy that could be used here would be to focus on teacher questioning skills. Teachers could work on their wait time after asking a question, have students discuss answers in a think-pair-share process and practise asking divergent as well as convergent questions. At the classroom level, the professional learning could focus on creating cultures of thinking (Ritchhart, 2015). The Project Zero team from Harvard have been producing research and free teaching resources for 50 years for the purpose of creating classroom cultures of thinking (Harvard Graduate School of Education, 2018). Connecting teachers to the Project Zero (PZ) worldwide network significantly enhances the scope of school-based professional learning, as interested teachers can pursue further learning online or via local PZ groups that exist around the world. The other advantage of the PZ world for the professional learning facilitator is that the content can be modelled using the visible thinking routines (Ritchhart, Church, & Morrison, 2011), so participants get to experience the pedagogy at the same time. The author has used a simple presentation as an introduction to cultures of thinking that attempts to model the thinking routines. The workshop uses the thinking routines of I Used to Think… Now I Think, Claim-Support-Question, Zoom In and Headlines (Ritchhart et al., 2011) to model how a teacher fosters open-mindedness and tolerance of uncertainty. The workshop uses one image from the Out of Eden Walk to alert teachers to the accompanying Out of Eden Learn programme developed by Project Zero (Dawes Duraisingh, James, & Tishman, 2016). The workshop is a simple seven-slide presentation. The first slide uses the I Used to Think… protocol to have the teachers think about the question: "What can I do to promote student creativity and curiosity in the classroom?".
The next slide uses Claim-Support-Question on the image of two skeletons buried together in an 1800-year-old grave excavation site. This is the image from Out of Eden (Salopek, 2016). Some participants get quite frustrated when they are not told the story behind the image and instead must look closely to find support for claims they might make about what happened to the couple. The third slide uses the Zoom In protocol to examine more skeletons, this time buried with bound hands in a mass grave. The fourth slide is of a skeleton found at the bottom of the ocean, and the participants are asked to choose an appropriate thinking routine that could be used to foster open-mindedness and uncertainty among their students. The fifth slide analyses some of the teacher language that can be used to promote student thinking (Ritchhart, 2010). The sixth slide asks the participants to create a headline that summarises the workshop for them, and the final slide asks the participants to complete their Now I Think… response to the question posed at the beginning, "What can I do to promote student creativity and curiosity in the classroom?". The author does not claim to be an expert on PZ thinking routines, but this simple presentation does engage teacher audiences. It provides the catalyst for collaborative teacher exploration of the whole suite of thinking routines and fruitful discussions on student thinking in their classrooms. Implementation of the thinking routines cannot be the sole aim of this teacher professional learning proposal. They are just the Trojan horse by which cultures of thinking can enter mainstream classrooms. The teacher adaptive practice sub-scale provides a continuous evaluation measure whereby teachers can check in on their progress towards this goal. Option three involves another sub-scale of the teacher adaptive practice instrument, but this time it focuses on formative assessment:

1. The teacher modifies learning goals in response to formative assessment
2. The teacher modifies their instructions during the lesson to increase learning opportunities
3. The teacher uses formative assessment to differentiate their responses to individual students
4. The teacher negotiates learning activities with students, ensuring these are aligned with learning goals
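If a professional learning team wanted to use these sub-scales as continuous evaluation measures, a simple scoring routine could turn item ratings into trackable sub-scale summaries. The sketch below is illustrative only: the chapter does not prescribe a numeric rating format, so the 1–5 rating per item, the function name and the Python representation are all assumptions; only the item groupings (items 1–4 for formative assessment, items 5, 6 and 8 for open-mindedness) come from this chapter.

```python
# Illustrative sketch: aggregate ratings from a classroom observation into
# the two sub-scales named in this chapter. The 1-5 rating scale is an
# assumption; the instrument itself does not specify one.

SUBSCALES = {
    "formative_assessment": [1, 2, 3, 4],   # items 1-4
    "open_mindedness": [5, 6, 8],           # items 5, 6 and 8
}

def subscale_means(ratings):
    """Average the observed item ratings for each sub-scale.

    `ratings` maps an item number (1-14) to a rating (assumed 1-5).
    Items not observed in a lesson are simply skipped.
    """
    means = {}
    for name, items in SUBSCALES.items():
        scores = [ratings[i] for i in items if i in ratings]
        means[name] = round(sum(scores) / len(scores), 2) if scores else None
    return means

# Two hypothetical observation cycles for the same teacher.
cycle_1 = {1: 2, 2: 3, 3: 2, 4: 2, 5: 3, 6: 2, 8: 2}
cycle_2 = {1: 3, 2: 4, 3: 3, 4: 3, 5: 4, 6: 3, 8: 4}

before = subscale_means(cycle_1)
after = subscale_means(cycle_2)
```

Comparing `before` and `after` across cycles gives the kind of small, agile lead measure that learning improvement science favours over waiting for lag measures of student achievement.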

Formative assessment in the context of adaptive teaching is the process of "checking in", or looking for student stimuli that require a teacher response. The response to student stimuli is the adaptive practice. Fortunately, there are many existing professional learning resources for formative assessment. These resources can be found in the media, with Dylan Wiliam's famous Classroom Experiment documentary (BBC Two, 2018), as well as through conventional print sources (Moss & Brookhart, 2010; Wiliam, 2002). Wiliam argues that teachers should be given control of their professional learning by allowing them to choose which formative assessment strategies they would like to implement (Wiliam, 2006). According to Wiliam, the only accountability for this professional learning should be teacher colleagues asking, "is this formative?". This form of accountability encourages the collaboration necessary for effective teacher professional learning. The question can also be answered with reference to evidence generated from the four teacher adaptive practice items that are the continuous evaluation measures for this option.

Enhancing Awareness of Personal and Behavioural Adaptability

The final proposal for a teacher professional learning strategy focuses on teachers' personal dispositions that have demonstrated indirect and direct links to teacher adaptive practice. The theory of action in this professional learning model is that teachers who have well-developed epistemic cognition will be able to promote this trait in the students they teach. The collaboration in this model occurs in a one-to-one relationship with a teacher coach, and the scales used for development are also the continuous evaluation measures. The personal disposition measures used in this proposal are the teacher self-efficacy and perceived autonomy scales, which have an indirect positive relationship to teacher adaptive practices via teacher adaptability (see Chap. 4). The teacher adaptability scale is also used as it is a positive predictor of teacher adaptive practice. The teacher coaching in this model uses the GROW model. The letters GROW represent goal, reality, options and where next? It is based on the growth mindset model for learning established by Dweck (2006) that has recently been shown to improve the educational achievement of students from impoverished backgrounds (Paunesku et al., 2015). The GROW model suits teacher coaching with a focus on continuous, incremental improvement in practice. The selection of coaches for this teacher professional learning model is critical, mindful of the warning that little is known about how teacher coaches "are selected, how they are prepared for their work, or how their efficacy is assessed" (Kennedy, 2016, p. 973). It is known that effective teacher coaches are skilful at providing feedback (Cohen & Goldhaber, 2016), but they will need to be supported by a facilitator in this model until they are familiar with the suite of instruments to be deployed. The baseline data for this model will be gathered using the three measures of personal dispositions as well as from a classroom observation using the teacher adaptive practice scale. These data will be considered together, and the first goal for the teacher's professional learning will be determined in a conversation between the coach and teacher. The reality for the teacher might be that they perceive a lack of support from their supervisors to be adaptive, or that they do not have a sufficient sense of self-efficacy with some classes to be adaptive. It may also be that their own sense of adaptability requires development in the cognitive or affective domains. The options for the teacher's professional learning will once again be the result of a discussion between teacher and coach. For some teachers, it may be a matter of the awareness-raising in the coaching conversation that makes a difference. Other teachers may need an intervention by the coach to improve their supervision arrangements.
Other teachers may need some professional dialogue around growth mindsets given the reality that "teachers' own epistemic beliefs predict their likelihood of endorsing critical thinking as a desired instructional outcome, and their likelihood of using pedagogies that promote critical thinking" (Greene & Yu, 2016, p. 49). The agreed outcome of the coaching conversation, including the actions committed to by both parties, becomes the goal for the next coaching conversation. This is the "where next?" aspect of the GROW coaching model. The next cycle of professional learning begins with another cycle of data collection using the personal measures that are part of the new goals, as well as a classroom observation using the teacher adaptive practice scale. The behavioural change as measured by the teacher adaptive practice scale is the outcome measure for this teacher professional learning proposal, but this change is driven by the increase in personal awareness of the teacher that occurs as a product of the GROW coaching conversation.

Conclusion

This chapter reviewed the principles of effective teacher professional learning and the nascent learning improvement and implementation science. The synthesis of both revealed the importance of continuous data collection to inform the collaborative application of both evidence-based and practice-based teaching innovations.


Continuous evaluation is supported by the teacher adaptive practice scale, which can serve as both an improvement and a research measure. Three proposals for teacher professional learning using the teacher adaptive practice scale were presented in the second part of the chapter. These proposals' theories of action involved collaborative teacher learning that used the teacher adaptive practice scale as a diagnostic as well as an evaluation measure. They pass the feasibility test, as they could all be operationalised within existing school resources apart from some initial training in the use of the teacher adaptive practice scale.

Appendix 1: Teacher Adaptive Practice Classroom Observation Scoring Guide

Each indicator is shown with an exemplar of low and high performance.

1. The teacher modifies learning goals in response to formative assessment
Low: Teacher did not undertake any formative assessment
High: Teacher checks for student understanding and makes changes to the lesson in response

2. The teacher modifies their instructions during the lesson to increase learning opportunities
Low: Instructions are given once and in one modality to the whole class
High: The teacher did an impromptu demonstration to a small group using the classroom globe in response to student questions about international time zones

3. The teacher uses formative assessment to differentiate their responses to individual students
Low: The teacher asks students to move to the true or false side of the room but does not follow up with why questions
High: Teacher sets a Do Now task at the beginning of the lesson, helps students with the task and asks questions about the task when all students have attempted it

4. The teacher negotiates learning activities with students, ensuring these are aligned with learning goals
Low: All students completed the same activity at the same time
High: The teacher used students' misconceptions as a guide to the learning activity that was chosen

5. The teacher prompted students to discover key concepts through responsive open-ended questions
Low: Teacher used shallow questions that did not require deep conceptual responses from the students
High: "Why is it expensive to make things in Australia?" "How has technology changed religion?" "In which direction does the water flow into the drain in the Northern and Southern Hemisphere?"

6. The teacher prompted students to express their thinking and used this as a springboard for learning activities
Low: The teacher used "guess what is in my head" questions; "It starts with…?"
High: The teacher asked the students to annotate their notes with an "E" if they required more evidence

7. The teacher uses a thinking routine to prompt deeper exploration of concepts or skills
Low: "The steps I would like you to take are: decode, position, read the poem, write your response"
High: Teacher used a See, Think, Wonder routine to prompt students to think metaphorically about a concept

8. The teacher prompted students to demonstrate open-mindedness and tolerance of uncertainty
Low: Teacher answered big science questions directly instead of asking students why
High: The teacher explored the different definitions of a concept evident across different sources to demonstrate its contested and uncertain nature

9. The teacher provided a synthesis of class-generated ideas
Low: Teacher uses initiate-response-evaluate exchanges with individual student answers
High: "I feel if we joined these last three responses we should have a good answer on identity"

10. The teacher links, when appropriate, lesson concepts to larger disciplinary ideas
Low: Teacher talk focused on the execution of the learning activity rather than the underlying big idea
High: The teacher linked the preservation of vegetables by bottling to the underlying chemical processes

11. The teacher provided analogies and metaphors to increase learning opportunities
Low: Teacher does not use analogy and metaphor when the opportunity arises
High: The teacher used an image of a waterfall to assist student understanding of the life cycle of a business. The teacher role-played a character in the text to expand understanding

12. The teacher demonstrated flexible pacing of the lesson in response to student learning needs
Low: Teacher adheres to their script without checking in with students to see if they understood the concept
High: The duration of each learning activity is contingent on student understanding

13. The teacher demonstrated responsive use of literacy/numeracy interventions
Low: No dynamic literacy/numeracy intervention evident
High: Teacher identified the word "essential" as expressing high modality. Teacher used a think-aloud process to identify story retelling in literary analysis as a practice to be avoided

14. The teacher creates groups of students based upon formative assessment
Low: Students not grouped or are in previously assigned table groups
High: Students moved into groups based on a self-rating of their knowledge

Appendix 2: Choose Your Own Adventure


Adapted from Lawrence, P. (2016). Teaching moves that matter. Choose your own adventure. Retrieved from http://lawrenceloveslearning.blogspot.com.au/2016/10/teaching-moves-that-matter-choose-your.html.

Goal

Your mentee wants some guidance on their ability to interpret student learning and adapt teaching based on their needs. This relates to the APST standard descriptor 5.4.2—Use student assessment data to analyse and evaluate student understanding of subject/content, identifying interventions and modifying teaching practice.

Reality

It is the second-last day of term. Period 3 begins right on the heels of an inadequately short recess. A group of slightly damp (it is raining) and very dishevelled Year 7 students bump into class, take their seats and look expectantly at your mentee, their science teacher, as they write keywords and concepts on the whiteboard. The learning intention is clear: the students will be following up last lesson’s work on food chains. This lesson the focus is on using the language of cause and effect to show the relationship between producers and consumers. It is obvious to the students that your mentee has a plan. The pieces are all in place. The neat, linear planned lesson progression should unfold as effortlessly as the food chain itself. Skilfully and without much fuss, they move around the room checking the “Do Now” three-minute sprint writing task. Your mentee says, “Don’t worry about the words, everyone. Just get down what you can remember about the food chain from yesterday’s lesson. The aim here is to write as much as you can.” As your mentee looks at the students’ pages, they notice just how different their current understandings of the food chain are.


5 Teacher Professional Learning Using the Teacher Adaptive …

Choice of adventure

Your mentee decides to continue with their plan and proceed to the literacy task… go to page 5

Your mentee decides to pause, place the boys into small groups based on the “Do Now” activity and create learning circles prior to the literacy task… go to page 6

Your mentee decides to use the stronger responses to quickly model the answer on the board for the class… go to page 7

Pause n Prompt: What questions could you ask around this choice in the moment to draw out the metacognition or awareness behind the choice?

Reality

Page 5

Page 6

Page 7

Your mentee congratulates the class on finishing the Do Now task and hands out the literacy cloze passage sheet on the food chain. The class works quickly, sharing their answers and writing in the missing words. Each student has had a chance to recap the key concept via the cloze passage text. Your mentee feels confident the students can attempt this writing task on their own in the upcoming assessment

The class moves quickly into the new learning circles. Your mentee knows this is their chance to take their learning further. “Ok, so this group is going to watch the short clip from the last lesson on what a food chain is. You can help each other out with this diagram. Write in the names of the different producers and consumers you see”. Your mentee knows that these students are repeating the surface learning from the last lesson. They move across to the group at the back. “Ok boys, I can see you know this stuff from your last lesson and can use the linking words already. I want to give you guys a thinking challenge. We are going to use the claim, support, question format we use all the time to prepare your answer. Where does the energy go in the transfer in the food chain?” Your mentee knows that they are giving this group the chance to go deeper, using their understanding to consider new concepts. They stand back, look around the room and feel confident that each student has taken a step further in their understanding of the food chain

Your mentee stands up the front and asks three students to read aloud their responses. Having noticed the strength of their answers, they use these as a model to jointly construct an answer on the board. The students take time to copy this model answer down and then proceed onto the literacy cloze passage task. Your mentee has modelled for the class what moving from surface to deep learning looks like. They know that at the very least the students have recorded a strong response and can refer to this later when attempting an independent task


Options

What could you ask your mentee in the post-lesson conversation to help them reflect on their adventure and assist their future practice?

Where next?

Ask questions that help your mentee choose their next PL goal. Identify a standard descriptor as a goal and identify the steps you will take as a coach to help them reach this goal.

References

AERA, APA, & NCME. (2014). Standards for educational and psychological testing. Washington, DC: AERA.
AITSL. (2011). Australian professional standards for teachers. Melbourne: AITSL.
Albers, B., & Pattuwage, L. (2017). Implementation in education: Findings from a scoping review. Retrieved from http://www.ceiglobal.org/application/files/2514/9793/4848/Albers-and-Pattuwage-2017-Implementation-in-Education.pdf.
BBC Two. (2018). The classroom experiment.
Bryk, A. S. (2015). 2014 AERA distinguished lecture: Accelerating how we learn to improve. Educational Researcher, 44(9), 467–477.
Bryk, A. S., Gomez, L. M., Grunow, A., & LeMahieu, P. G. (2015). Learning to improve: How America’s schools can get better at getting better. Cambridge, MA: Harvard Education Press.
City, E. A., Elmore, R. F., Fiarman, S. E., & Teitel, L. (2011). Instructional rounds in education: A network approach to improving teaching and learning. Cambridge, MA: Harvard Education Press.
Cohen, J., & Goldhaber, D. (2016). Building a more complete understanding of teacher evaluation using classroom observations. Educational Researcher, 45(6), 378–387. https://doi.org/10.3102/0013189x16659442.
Cole, P. (2012). Linking effective professional learning with effective teaching practice. Melbourne: Australian Institute for Teaching and School Leadership.
Curry School of Education, University of Virginia. (2018). My teaching partner. Retrieved from https://curry.virginia.edu/myteachingpartner.
Danielson, C. (2011). Evaluations that help teachers learn. Educational Leadership, 68(4), 35–39.
Danielson, C. (2012). Observing classroom practice. Educational Leadership, 70(3), 32–37.


Danielson, C. (2013). The framework for teacher evaluation instrument (2013 ed.). Princeton, NJ: The Danielson Group.
Dawes Duraisingh, S., James, C., & Tishman, S. (2016). Out of Eden Learn: An innovative model for promoting cross-cultural inquiry and exchange. Retrieved from http://pz.harvard.edu/sites/default/files/Out%20of%20Eden%20Learn%20white%20paper%20May%202016%20%28with%20links%29%281%29.pdf.
Durksen, T. L., Klassen, R. M., & Daniels, L. M. (2017). Motivation and collaboration: The keys to a developmental framework for teachers’ professional learning. Teaching and Teacher Education, 67, 53–66. https://doi.org/10.1016/j.tate.2017.05.011.
Dweck, C. S. (2006). Mindset. New York: Random House.
Goddard, R. D., Hoy, W. K., & Hoy, A. W. (2004). Collective efficacy beliefs: Theoretical developments, empirical evidence, and future directions. Educational Researcher, 33(3), 3–13.
Greene, J. A., & Yu, S. B. (2016). Educating critical thinkers: The role of epistemic cognition. Policy Insights from the Behavioral and Brain Sciences, 3(1), 45–53.
Halpin, P. F., & Kieffer, M. J. (2015). Describing profiles of instructional practice. Educational Researcher, 44(5), 263–277. https://doi.org/10.3102/0013189x15590804.
Harris, A., & Jones, M. (2017). Disciplined collaboration and inquiry: Evaluating the impact of professional learning. Journal of Professional Capital and Community, 2(4), 200–214. https://doi.org/10.1108/jpcc-05-2017-0011.
Harris, A., Jones, M. S., Adams, D., Perera, C. J., & Sharma, S. (2014). High-performing education systems in Asia: Leadership art meets implementation science. The Asia-Pacific Education Researcher, 23(4), 861–869. https://doi.org/10.1007/s40299-014-0209-y.
Harvard Graduate School of Education. (2018). Project Zero. Retrieved from http://www.pz.harvard.edu/.
Hattie, J. (2015). The applicability of Visible Learning to higher education. Scholarship of Teaching and Learning in Psychology, 1(1), 79–91. https://doi.org/10.1037/stl0000021.
Hattie, J., Masters, D., & Birch, K. (2016). Visible learning into action: International case studies of impact. London: Routledge.
Institute for Education Sciences. (2018). What Works Clearinghouse. Retrieved from https://ies.ed.gov/ncee/wwc/.
Institute for Effective Education. (2018). Best evidence encyclopedia. Retrieved from http://bestevidence.org.uk/index.html.
Kennedy, M. M. (2016). How does professional development improve teaching? Review of Educational Research, 86(4), 945–980. https://doi.org/10.3102/0034654315626800.
Klassen, R. M., Chong, W. H., Huan, V. S., Wong, I., Kates, A., & Hannok, W. (2008). Motivation beliefs of secondary school teachers in Canada and Singapore: A mixed methods study. Teaching and Teacher Education, 24(7), 1919–1934. https://doi.org/10.1016/j.tate.2008.01.005.
Lawrence, P. (2016). Teaching moves that matter. Choose your own adventure. Retrieved from http://lawrenceloveslearning.blogspot.com.au/2016/10/teaching-moves-that-matter-choose-your.html.
Moss, C. M., & Brookhart, S. M. (2010). Advancing formative assessment in every classroom: A guide for instructional leaders. ASCD.
New Zealand Government Ministry of Education. (2018). BES (Iterative Best Evidence Synthesis) Programme—What Works Evidence—Hei Kete Raukura. Retrieved from https://www.educationcounts.govt.nz/topics/BES.
Paunesku, D., Walton, G. M., Romero, C., Smith, E. N., Yeager, D. S., & Dweck, C. S. (2015). Mind-set interventions are a scalable treatment for academic underachievement. Psychological Science, 26(6), 784–793. https://doi.org/10.1177/0956797615571017.
Ritchhart, R. (2010). The language of the classroom.
Ritchhart, R., Church, M., & Morrison, K. (2011). Making thinking visible: How to promote engagement, understanding, and independence for all learners. San Francisco: Jossey-Bass.
Ritchhart, R. (2015). Creating cultures of thinking: The 8 forces we must master to truly transform our schools. Wiley.


Sahlberg, P. (2014). Finnish lessons 2.0: What can the world learn from educational change in Finland? New York: Teachers College Press.
Salopek, P. (2016). Visiting a couple locked in an 1800-year-old embrace. Out of Eden Walk. Retrieved from https://www.nationalgeographic.org/projects/out-of-eden-walk/articles/2016-02-visiting-a-couple-locked-in-an-1800-year-old-embrace.
Timperley, H. (2008). Teacher professional learning and development. Retrieved from http://unesdoc.unesco.org/images/0017/001791/179161e.pdf.
Visible Learning. (2018). Hattie ranking: 195 influences and effect sizes related to student achievement. Retrieved from https://visible-learning.org/hattie-ranking-influences-effect-sizes-learning-achievement/.
Wiliam, D. (2002). Embedded formative assessment. Bloomington: Solution Tree Press.
Wiliam, D. (2006). Assessment for learning: Why, what, and how? Orbit, 36, 2–6.

Chapter 6

Looking Forward: Next Steps for Teacher Adaptive Practice Research

Abstract This book has examined the relationship between some of the personal, environmental and behavioural determinants of adaptive teaching. It found that there is a strong relationship between the personal and environmental determinants tested in this study. The relationship between these determinants and the behavioural determinant of teacher adaptive practices was moderate. The future directions for research into a model of adaptive teaching will investigate the relationship between existing and new personal and behavioural determinants. Future refinements in the methodologies adopted for these studies are examined in the last part of this chapter.

Keywords Teacher Adaptive Practices · Adaptive Teaching · Creative and Critical Thinking

Introduction

The interaction of selected personal, environmental and behavioural determinants of adaptive teaching was investigated in this study. The results reinforced the strong interrelationships between extant personal and environmental determinants of adaptive teaching and opened the possibility for the development of new behavioural determinants. This study replicated the finding of previous research (Collie & Martin, 2017) that professional autonomy support predicts teacher adaptability. In addition, this study contributed the finding that teacher self-efficacy also predicts teacher adaptability. Furthermore, teacher adaptability predicts a sub-scale of teacher adaptive practices that potentially leads to students’ critical and creative thinking. These findings establish a foundation for the conceptual model of adaptive teaching used in this study, but there is scope for the testing of additional personal and behavioural determinants in this model under different classroom conditions. These tests should be designed with the objective of gathering both self-report and external measures to increase the validity of the findings. These new determinants are examined in the first part of the chapter before future methodological refinements are foreshadowed in the latter section.

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2019
T. Loughland, Teacher Adaptive Practices, SpringerBriefs in Education, https://doi.org/10.1007/978-981-13-6858-5_6


Examining Teacher Effectiveness in Entrepreneurial Education and Training

The classroom environment was suggested as a contributing factor in teacher adaptability when face validation evidence was generated from expert teacher groups at the very beginning of the study (Loughland & Vlies, 2016). It was not examined as an independent variable in the first study, as the focus was on generating a valid and reliable TAP scale. The global emergence of entrepreneurial education and training provides one classroom environment that might be investigated as a possible influence on teacher adaptive practices in phase two of the study.

Entrepreneurial education and training (EET) has been promoted as a pedagogy to promote critical and creative thinking in school graduates (Zhao, 2012). EET is positioned as a possible means to attain the so-called twenty-first-century learning skills of creativity, critical thinking, communication and collaboration (National Education Association, 2010). The validation of the creative and critical thinking (CCT) sub-scale of the TAP in this study provides an external measure by which the effectiveness of EET might be measured. In this proposal, EET is the environment in which the conceptual model of adaptive teaching is clearly focused on promoting students’ critical and creative thinking. There is also a chance of testing reciprocal causation if the design of the research included an intervention to promote the teachers’ use of the behaviours in the CCT sub-scale. It will also add to the evidence base for the under-researched area of EET in secondary schools (Elert, Andersson, & Wennberg, 2015).

The existing research base on the impact of EET in schools on students is sparse but rigorous. The published evidence has been collected through experimental or quasi-experimental studies, or through correlational studies of larger existing databases.
One experimental study found that EET in the last year of primary school had a positive effect on nine non-cognitive skills, of which creativity was one (Huber, Sloof, & Van Praag, 2014). The effect size for creativity was 0.1 (Huber et al., 2014). The same study found no impact on students’ entrepreneurial knowledge.

A fascinating study of entrepreneurial intentions exploited the unification of Germany to create a control group of randomly selected East German students pre-unification and a “treatment” group of West German students exposed to the free-market ideology of their schooling system. The treatment group was more likely to have entrepreneurial intentions (Falck, Gold, & Heblich, 2017). The intentions of East German students post-unification demonstrated a convergence towards their West German counterparts (Falck et al., 2017). This study positions the institutional environment of the school as a factor influencing the outcome of entrepreneurial intentions. It will be interesting to examine if the classroom environment of EET has the same impact on teachers’ promotion of students’ critical and creative thinking.

An outcome measure of student creativity would add validity to this research proposal on EET as the environmental context in which a model of adaptive teaching is tested. There are existing measures such as the self-report questions on creativity used in Huber et al. (2014) or another self-report measure in the Torrance Test of


Creative Thinking (Torrance, 1972). Alternatively, a less reliable but more valid measure of student creative and critical thinking would be an assessment of their process and product in the EET context using the creativity wheel (Lucas, Claxton, & Spencer, 2013).

Personal Determinants of Adaptive Teaching

A major finding of this study was that the personal determinant of teacher adaptability predicted a sub-scale of TAP relating to students’ critical and creative thinking. In addition, teacher adaptability was predicted by another personal determinant, teacher self-efficacy. It would be worthwhile to test these relationships using a different measure of adaptability, as well as to investigate two other potential personal determinants of adaptive teaching: epistemic cognition and mindset.

Another method of measuring teacher adaptability is via a situated judgment test (SJT). SJTs have been used in the selection of employees as well as candidates for high-status professional training programmes in medicine, law and, more recently, education (Klassen et al., 2017). They are recognised for their high predictive validity for selection programmes and are less prone to faking or gaming by candidates (Klassen et al., 2017). They also have high face and content validity, as the complex items for each situated test are sourced from experts in their respective fields. A proof-of-concept SJT that included the domains of adaptability and resilience has already been developed. The edited description of this domain with the relevant adaptability traits is given here:

Demonstrates adaptability, and an ability to change lessons (and the sequence of lessons) accordingly where required. Candidate has an awareness of their own level of competence and the confidence to either seek assistance, or make decisions independently, as appropriate. Is comfortable with challenges to own knowledge and is not disabled by constructive, critical feedback. (Klassen et al., 2017, p. 104)

Items for an SJT on adaptability could be developed from the observation data already collected in phase one of this study. This would create another measure of teacher adaptability, one that could also be used for teacher selection by schools, systems or teacher training programmes in the future.

Epistemic cognition is another personal trait of teachers that could be examined through a situated judgment test. Epistemic cognition “is a process involving dispositions, beliefs, and skills regarding how individuals determine what they actually know, versus what they believe, doubt, or distrust” (Greene & Yu, 2016, p. 46). Greene and Yu (2016) have defined four levels of epistemic cognition: realist, absolutist, multiplist and evaluativist. The evaluativist can evaluate both subjective and objective sources of knowledge and come to a reasoned position. The evaluativist is a philosopher, but the open-mindedness and willingness to probe deeply are also traits of an adaptive teacher who promotes creative and critical thinking in their students.


The measurement of epistemic cognition through a situated judgment test would pose a strong methodological challenge but could provide researchers with a clear view of the epistemic stance of adaptive teachers. This would help in both the selection and professional learning of teachers.

Mindset is the final personal construct of teachers that is worthy of examination as a possible personal determinant of the adaptive teacher. Carol Dweck’s research (Dweck, 2006) on fixed and growth mindsets is one theory that has translated extremely well into the classroom. It is chiefly employed as a student motivational strategy, with teachers encouraging students to adopt a growth mindset towards their learning. It is less commonly applied to the teacher, but it seems axiomatic that an adaptive teacher would have a growth mindset, as they prompt students in the hope that students can build on the knowledge they present as teachers and the knowledge students already possess. This proposition needs to be measured.

Growth mindset was such a simple yet elegant revelation that empirical investigation of the idea in schools did not progress past proof-of-concept studies until recently. A recent intervention study of 1594 students across 13 geographically diverse high schools produced evidence that a growth mindset has a positive effect on grade point averages among underperforming students (Paunesku et al., 2015). The intervention was a 45-min online activity, so the study did not investigate the teacher as an independent variable. The study did, however, employ two questions on growth mindset that could be included in a teacher questionnaire: “You can learn new things, but you can’t really change your basic intelligence” and “You have a certain amount of intelligence and you really can’t do much to change it” (Paunesku et al., 2015, p. 787). These two questions may be expanded to four items to get a more valid measure of the construct.
Mindset as a personal determinant of adaptive teaching is both a research and improvement measure. Growth mindset is already well understood by teachers with regard to their students, so transferring that to their own cognition should not be as difficult as introducing a new concept in professional learning.
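As a sketch of how an expanded teacher-mindset measure might be scored, the example below codes the two fixed-mindset items quoted from Paunesku et al. (2015) as reverse-scored and pairs them with two growth-phrased items. The growth-phrased items and the 1–6 agreement scale are invented here for illustration; they are not from the study.

```python
# Hedged sketch: scoring a short fixed-vs-growth mindset measure.
# The first two items are quoted from Paunesku et al. (2015) and are
# fixed-mindset statements, so they are reverse-coded. The last two
# items and the 1-6 agreement scale are ASSUMPTIONS for illustration.

SCALE_MAX = 6  # 1 = strongly disagree ... 6 = strongly agree (assumed)

ITEMS = [
    # (item text, reverse-coded?)
    ("You can learn new things, but you can't really change your basic intelligence", True),
    ("You have a certain amount of intelligence and you really can't do much to change it", True),
    ("With effort, you can substantially grow your intelligence", False),  # invented
    ("Anyone can become much smarter through practice", False),            # invented
]

def mindset_score(responses):
    """Average the coded items so a higher score = stronger growth mindset."""
    coded = []
    for (_, reverse), r in zip(ITEMS, responses):
        coded.append((SCALE_MAX + 1 - r) if reverse else r)
    return sum(coded) / len(coded)

# A respondent who disagrees with the fixed items and agrees with the growth items
print(mindset_score([2, 1, 5, 6]))  # → 5.5
```

Reverse-coding before averaging keeps the two item directions from cancelling each other out, which is the main trap when mixing fixed- and growth-phrased statements in one scale.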

Behavioural Determinants of Adaptive Teaching

There are two refinements to the constructs used to measure the behavioural determinants of adaptive teaching that will increase the rigour of the study: a revision of the TAP scale and the use of student rating scales to measure teacher adaptive practices. As it stands, the TAP A sub-scale of creative and critical thinking is the only behavioural determinant of teacher effectiveness predicted by the personal determinant of teacher adaptability. There is a strong correlation between the TAP B and TAP C scales and teacher self-efficacy that requires further investigation. These items will form the basis of a revised TAP scale (Table 6.1).

Table 6.1 Revised TAP scale

Sub-scale | Original item # | Descriptor
A | 5 | The teacher prompted students to discover key concepts through responsive open-ended questions
A | 6 | The teacher prompted students to express their thinking and used this as a springboard for learning activities
A | 8 | The teacher prompted students to demonstrate open-mindedness and tolerance of uncertainty
B | 2 | The teacher modifies their instructions during the lesson to increase learning opportunities
B | 3 | The teacher uses formative assessment to differentiate their responses to individual students
C | 1 | The teacher modifies learning goals in response to formative assessment
C | 4 | The teacher negotiates learning activities with students, ensuring these are aligned with learning goals

The revised TAP scale could be used with these seven items, with a focus on the ongoing collection of validation evidence linking the scale to both personal and environmental determinants of adaptive teaching. The sub-scales might also be used as discrete measures for specific purposes. The use of TAP A as a measure of a teacher’s promotion of student creative and critical thinking has already been flagged. TAP B and TAP C could be used to measure interventions in professional learning that focus on formative assessment. Items may also be added to the revised TAP scale depending on the outcome of another review of the literature. These additional items will require validation evidence on their content and their internal consistency with the older items.

Student rating of teacher adaptive practices would provide another valid and reliable external measure of a behavioural determinant of adaptive teaching. One of the more surprising findings of the MET study was the validity and reliability of the data generated from student ratings (Bill & Melinda Gates Foundation, 2012). Student ratings were shown to predict student achievement such that teachers in the top 25% taught students whose average gains in numeracy were 4.6 months ahead of the students taught by the teachers in the bottom 25%. The equivalent gains for literacy were half the size of those for numeracy (Bill & Melinda Gates Foundation, 2012, p. 9). The MET study also found that student ratings were more reliable (or consistent) over time than measures of student achievement gains or classroom observation data (Bill & Melinda Gates Foundation, 2012). A single administration of a student survey is more reliable than a classroom observation because there are multiple raters. These multiple raters are also very feasible, as they are already in the class and mostly value the privilege of rating their teacher. A student rating survey based on the revised TAP scale is given in Table 6.2. This scale would first require evidence of its face validity and usability with school students before it is used in a study.


Table 6.2 Student rating of teacher adaptive practices

Sub-scale | Original item # | Descriptor
A | 5 | My teacher asks questions that help us understand and talk about big ideas
A | 6 | My teacher likes us to talk about our thinking and uses this as a springboard for learning activities
A | 8 | My teacher encourages us to be open-minded
B | 2 | My teacher changes their instructions during the lesson to help us learn
B | 3 | My teacher helps each of us according to what we need
C | 1 | My teacher modifies the lesson when it is not working
C | 4 | My teacher allows us to modify the learning activities
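The chapter notes that any items added to the revised TAP scale would need evidence of internal consistency with the existing items. A minimal sketch of one such check, a hand-rolled Cronbach's alpha over invented observation scores, is below; the ratings and the 1–7 scoring scale are assumptions for illustration only.

```python
# Hedged sketch: internal consistency (Cronbach's alpha) for a short scale.
# The observation scores below are INVENTED; real data would be TAP item
# ratings from classroom observations.

def cronbach_alpha(scores):
    """scores: list of observations, each a list with one rating per item."""
    k = len(scores[0])  # number of items
    def var(xs):        # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    item_vars = [var([row[i] for row in scores]) for i in range(k)]
    total_var = var([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Invented ratings: 6 observations x 4 items on an assumed 1-7 scale
ratings = [
    [5, 4, 5, 4],
    [3, 3, 2, 3],
    [6, 5, 6, 6],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
    [5, 5, 4, 5],
]
alpha = cronbach_alpha(ratings)
print(round(alpha, 2))  # → 0.96
```

High alpha indicates that the items move together across observations; a new item that drags alpha down is a candidate for revision before it joins the scale.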

Thoughts on Future Study Methodology

Phase one of this study has provided useful insights into the methodological effort required to gather valid and reliable classroom observation data. These insights have prompted thoughts on how feasibility, validity and reliability can be enhanced in the next phase of the study.

Feasibility

The author can attest that it is time-consuming to collect 270 classroom observations of teachers. The creators of the CLASS instrument, with many more observations to their credit, acknowledge the same point: “The single largest cost center is the actual visit (placing a live observer in a classroom setting)” (Pianta & Hamre, 2009, p. 114). This reality is not going to change soon, so the options for phase two are to gather the data by video captured by the teachers themselves or to use another external measure of teacher behaviours, such as student rating surveys.

Gathering videos of teaching is certainly feasible: the MET study obtained 23,000 videos, albeit with the backing of Bill and Melinda Gates. It is feasible to ask teachers to have one of their students record the lesson on a smartphone and send it to the researchers via email. Gaining informed consent and ensuring sufficient response rates are two obvious challenges to this method, but it is still preferable to the cost of placing a live observer in the classroom. The other benefit of video is that it enables the creation of a video library that can be used for online professional learning, as is the case with the CLASS My Teaching Partner PD programme (Curry School of Education University of Virginia, 2018).


The online PL could also involve feedback on the video the teacher submits for the research, thus creating an immediate link between research and improvement. This will also assist with gaining the informed consent of teachers if they know there will be some benefit gained from their participation.

The validity and reliability of student rating surveys demonstrated in the MET study demand that strong consideration be given to them as an alternative or additional measure to classroom observation. This is acknowledged in the literature: “student rating surveys provide more efficient assessments of quality compared with alternative, resource-intensive assessments such as classroom observations” (Wallace, Kelcey, & Ruzek, 2016, p. 1837). The dual use of classroom observation as a research and improvement measure seems a counter-argument to this, until one discovers that My Teaching Partner also uses TRIPOD student survey data to initiate an online teacher coaching cycle (Bill & Melinda Gates Foundation, 2012). This coaching cycle uses the seven constructs of TRIPOD to offer feedback on teacher videos, so that the student rating measure is consistent with the observation measure (Bill & Melinda Gates Foundation, 2012). It is good to have methodological choices that seem to meet the needs of both the researcher and the teacher.

Validity

Chapter 3 used the authoritative test standards (AERA, APA, & NCME, 2014) as evaluative criteria for extant classroom observation frameworks as well as for the prospective TAP scale. The unitary definition of validity has been part of the test standards since at least 1999, but it is still common to read about seemingly fixed properties of content, face and construct validity in research papers. In this respect, it seems that “test validation theory appears to have outstripped practice” (Baird, Andrich, Hopfenbeck, & Stobart, 2017, p. 321). However, it was useful in this study to think about the five forms of validation evidence, even when there was a tension between creating a research measure and an improvement measure. The source of the greatest tension was the need to conduct 250 classroom observations to ensure correlation stability when seeking evidence of the relationship of TAP to TSE, TA and PAS. This is a requirement of a research measure, whereas an improvement measure would have allowed modifications to the items along the way.

Phase two of the study requires student outcome measures of creative and critical thinking to improve validity. Easy-to-obtain measures such as literacy and numeracy scores would not match the proposition of the study that adaptive teaching leads to student creative and critical thinking. The answer is not clear, but it may become clearer when a raft of existing measures of creativity is employed in phase two. The challenge of finding valid student outcome measures of critical thinking continues.
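The need for roughly 250 observations to ensure correlation stability can be illustrated by simulation. The sketch below uses invented parameters (a true correlation of 0.4) and shows how much more the sample correlation fluctuates at n = 25 than at n = 250; it is an illustration of the general sampling argument, not a reanalysis of the study's data.

```python
# Illustrative sketch (invented parameters): why small samples give
# unstable correlation estimates. We simulate pairs with a true
# correlation of 0.4 and compare the spread of sample correlations
# at n = 25 versus n = 250.
import math
import random

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def sample_r(n, rho=0.4):
    xs, ys = [], []
    for _ in range(n):
        x = random.gauss(0, 1)
        # y shares variance with x so that corr(x, y) = rho in the population
        y = rho * x + math.sqrt(1 - rho ** 2) * random.gauss(0, 1)
        xs.append(x)
        ys.append(y)
    return pearson(xs, ys)

random.seed(1)
small = [sample_r(25) for _ in range(200)]    # 200 studies of n = 25
large = [sample_r(250) for _ in range(200)]   # 200 studies of n = 250
spread = lambda rs: max(rs) - min(rs)
print(f"range of sample r at n=25:  {spread(small):.2f}")
print(f"range of sample r at n=250: {spread(large):.2f}")
```

The range of estimates at n = 25 is several times wider than at n = 250, which is the corridor-of-stability logic behind fixing the observation count before correlating TAP with TSE, TA and PAS.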


Reliability

Inter-rater and intra-rater reliability of the revised TAP scale will be enhanced if videos of classroom teaching are gathered. This will enable the creation of resources for training raters at the outset and for checking their fidelity throughout their tenure. The adoption of a seven-point score scale, rather than low-, medium- and high-scoring directions, would also enhance the reliability of the observation data.
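One common way to quantify the inter-rater reliability described above is Cohen's kappa, which compares two raters' observed agreement against the agreement expected by chance. The sketch below applies it to invented low/medium/high ratings of ten hypothetical lesson videos; it is one option among several (a seven-point scale would more likely call for a weighted kappa or an intraclass correlation).

```python
# Hedged sketch: Cohen's kappa for two raters scoring the same lessons.
# The ratings are INVENTED; real data would be TAP scores from video.
from collections import Counter

def cohens_kappa(r1, r2):
    n = len(r1)
    # proportion of lessons where the raters gave the same category
    observed = sum(a == b for a, b in zip(r1, r2)) / n
    # chance agreement from each rater's marginal category frequencies
    c1, c2 = Counter(r1), Counter(r2)
    expected = sum(c1[k] * c2[k] for k in set(r1) | set(r2)) / n ** 2
    return (observed - expected) / (1 - expected)

# Two raters scoring ten lesson videos as low / med / high
rater_a = ["low", "med", "high", "med", "low", "high", "med", "med", "low", "high"]
rater_b = ["low", "med", "high", "low", "low", "high", "med", "high", "low", "high"]
kappa = cohens_kappa(rater_a, rater_b)
print(round(kappa, 2))  # → 0.71
```

A kappa around 0.7 is usually read as substantial agreement; rater-training videos would aim to push this higher and keep it there over each rater's tenure.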

Some Final Thoughts on the Translation of Adaptability into Teacher Practice

I was imbued with what can only be called naïve optimism at the beginning of this study in November 2015. The first pilot validation studies were conducted in schools in regional areas of the state of NSW, Australia, and the bucolic rural setting and friendly teachers made for a honeymoon beginning to this long project. The long grind of two years of data collection ensued, but one truth evident in those early days remained until the last: teachers recognised the face validity of researching adaptive practices and were supportive of the project. This recognition was embodied when I was sitting in the staffroom of a rural school reading my notes whilst waiting for the next scheduled classroom observation. A friendly teacher, a graduate of the university where I work, struck up a conversation and asked me what my research was about. After my typically verbose explanation, she said, “Yeah, I get it, it’s when the teacher follows the student and not the script”.

It has been worthwhile to attempt to establish the validity of teacher adaptive practices as an extension of the teacher disposition of adaptability. It has made me think closely about the broader purpose of adaptive teaching, its personal, environmental and behavioural determinants, and how these might be measured. It is also heartening to see schools that value creative and critical thinking as an important graduate outcome alongside many other worthy goals. Schools see the worth in entertaining the aspirations of researchers such as myself because they can translate what it might mean for their students. To this end, the establishment of research and improvement measures enables talented practitioners to translate the ideals of academic researchers into the rich outcomes of practice.

I have been privileged to witness many wonderful lessons over the last two years. It is humbling to see a great teacher coax magnificent thoughts and ideas out of their students in basic classrooms with minimal resources. For the students, and for this observer, it is like being conveyed to a parallel universe where all the normal rules are suspended, where it is possible to be anything and anything is possible. This is when I know that this research is worthwhile.

References


AERA, APA, & NCME. (2014). Standards for educational and psychological testing. Washington, DC: AERA.

Baird, J.-A., Andrich, D., Hopfenbeck, T. N., & Stobart, G. (2017). Assessment and learning: Fields apart? Assessment in Education: Principles, Policy & Practice, 24(3), 317–350. https://doi.org/10.1080/0969594X.2017.1319337

Bill & Melinda Gates Foundation. (2012). Student perception surveys and their implementation. Retrieved from www.metproject.org

Collie, R. J., & Martin, A. J. (2017). Teachers’ sense of adaptability: Examining links with perceived autonomy support, teachers’ psychological functioning, and students’ numeracy achievement. Learning and Individual Differences, 55, 29–39. https://doi.org/10.1016/j.lindif.2017.03.003

Curry School of Education, University of Virginia. (2018). My teaching partner. Retrieved from https://curry.virginia.edu/myteachingpartner

Dweck, C. S. (2006). Mindset. New York: Random House.

Elert, N., Andersson, F. W., & Wennberg, K. (2015). The impact of entrepreneurship education in high school on long-term entrepreneurial performance. Journal of Economic Behavior & Organization, 111, 209–223. https://doi.org/10.1016/j.jebo.2014.12.020

Falck, O., Gold, R., & Heblich, S. (2017). Lifting the iron curtain: School-age education and entrepreneurial intentions. Journal of Economic Geography, 17(5), 1111–1148. https://doi.org/10.1093/jeg/lbw026

Greene, J. A., & Yu, S. B. (2016). Educating critical thinkers: The role of epistemic cognition. Policy Insights from the Behavioral and Brain Sciences, 3(1), 45–53.

Huber, L. R., Sloof, R., & Van Praag, M. (2014). The effect of early entrepreneurship education: Evidence from a field experiment. European Economic Review, 72, 76–97. https://doi.org/10.1016/j.euroecorev.2014.09.002

Klassen, R., Durksen, T., Kim, L. E., Patterson, F., Rowett, E., Warwick, J., et al. (2017). Developing a proof-of-concept selection test for entry into primary teacher education programs. International Journal of Assessment Tools in Education (IJATE), 4(2), 96–114.

Loughland, T., & Vlies, P. (2016). The validation of a classroom observation instrument based on the construct of teacher adaptive practice. The Educational and Developmental Psychologist, 33(2), 163–177. https://doi.org/10.1017/edp.2016.18

Lucas, B., Claxton, G., & Spencer, E. (2013). Progression in student creativity in schools: First steps towards new forms of formative assessments. Paris: OECD Publishing. https://doi.org/10.1787/5k4dp59msdwk-en

National Education Association. (2010). Preparing twenty-first century students for a global society: An educator’s guide to the “four Cs”. Retrieved from http://www.nea.org/assets/docs/A-Guide-to-Four-Cs.pdf

Paunesku, D., Walton, G. M., Romero, C., Smith, E. N., Yeager, D. S., & Dweck, C. S. (2015). Mind-set interventions are a scalable treatment for academic underachievement. Psychological Science, 26(6), 784–793. https://doi.org/10.1177/0956797615571017

Pianta, R. C., & Hamre, B. K. (2009). Conceptualization, measurement, and improvement of classroom processes: Standardized observation can leverage capacity. Educational Researcher, 38(2), 109–119. https://doi.org/10.3102/0013189x09332374

Torrance, E. P. (1972). Torrance tests of creative thinking. Bensenville, IL: Scholastic Testing Service.

Wallace, T. L., Kelcey, B., & Ruzek, E. (2016). What can student perception surveys tell us about teaching? Empirically testing the underlying structure of the tripod student perception survey. American Educational Research Journal, 53(6), 1834–1868.

Zhao, Y. (2012). World class learners: Educating creative and entrepreneurial students. Corwin Press.