Feedback in Online Course for Non-Native English-Speaking Students
By
Larisa Olesova
Feedback in Online Course for Non-Native English-Speaking Students, by Larisa Olesova

This book first published 2013

Cambridge Scholars Publishing
12 Back Chapman Street, Newcastle upon Tyne, NE6 2XX, UK

British Library Cataloguing in Publication Data
A catalogue record for this book is available from the British Library

Copyright © 2013 by Larisa Olesova

All rights for this book reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without the prior permission of the copyright owner.

ISBN (10): 1-4438-4223-0, ISBN (13): 978-1-4438-4223-5
To my Father
In memory of my dear father, Aleksey Vasilyevich Ivanov (1936-1990)
TABLE OF CONTENTS
List of Tables ..... ix
List of Figures ..... xi
Foreword ..... xiii
Acknowledgements ..... xv
List of Abbreviations ..... xvii

Chapter One ..... 1
Introduction
    Problem Statement
    Rationale
    Purpose of the Study
    Research Questions
    Significance of the Study

Chapter Two ..... 7
Review of the Literature
    Literature Review Methodology
    Nonnative and Native English-Speaking Teachers
    Feedback
    Feedback in Asynchronous Online Environments
    Feedback in Writing
    Audio Feedback
    Summary

Chapter Three ..... 47
Methods
    Overview
    Theoretical Framework
    Participants and Sampling Method
    Research Design
    Dependent Variables
    Independent Variables
    Procedure
    Reliability and Validity
    Data Analysis
    Threats to Validity

Chapter Four ..... 67
Results
    Overview
    Missing Data Analysis
    Results for Research Questions One and Two
    Results for Research Question Three

Chapter Five ..... 89
Discussion, Implications, Recommendations, Future Research, Limitations, and Conclusion
    Research Questions One and Two
    Research Questions Three and Four
    Implications and Recommendations
    Future Research
    Limitations
    Conclusion

References ..... 101

Appendix A ..... 117
A Demographic Survey and the Audio Feedback Survey to Examine Students' Responses to Audio and Text-Based Feedback (Ice 2008)

Appendix B ..... 121
Informed Consent Form

Appendix C ..... 125
The Articles and the Questions Used during the Experimental Study

Appendix D ..... 133
Bar Graphs of the Results on Audio Feedback Survey by the Instructors' Language Background and the Participants' Levels of Language Proficiency
LIST OF TABLES
Table 2-1 Control Groups Studies Claiming WCF Improves Accuracy (Bitchener and Knoch 2008, 412)
Table 2-2 Studies without Control Group Predicting WCF Improves Accuracy (Bitchener and Knoch 2008, 414)
Table 2-3 Types of Teacher Written Corrective Feedback (Ellis 2009, 98)
Table 2-4 General Differences between Oral, Written and E-Feedback (Tuzi 2004)
Table 3-1 Assumptions of Constructivism and Suggested Use of Feedback (Jonassen 1991)
Table 3-2 Participants' Demographics
Table 3-3 The Scoring Rubric
Table 3-4 Example of Feedback Provided by the Instructors (NNEST and NEST)
Table 3-5 Sample of the Student's Weekly Responses and the Type of Question
Table 4-1 Results for Participation in Online Course across the NNEST/NEST Groups and Participants' Language Proficiency Level
Table 4-2 Results for Non-Participation by Instructor's Language Background and Participants' Language Level
Table 4-3 Results of Logistic Regression for Non-Participation
Table 4-4 Results of Logistic Regression for Non-Participation Based on the TOEFL Score
Table 4-5 Quality of Online Posting Scores for NEST and NNEST across Three Time Periods
Table 4-6 Quality of Online Posting Scores for High and Low Level of Language Proficiency across Three Time Periods
Table 4-7 Pre and Post Course Survey Results (n=19)
Table 4-8 Results of Survey Individual Items (n=55)
Table 4-9 Results of Survey Items by Instructors' Language Background
Table 4-10 Results of Survey Items by Participants' Levels of Language Proficiency
Table 4-11 Two-Way Between-Groups ANOVA: Effect of Instructors' Language Background and Participants' Levels of Language Proficiency
LIST OF FIGURES
Figure 3-1 An experimental design with one within-subject factor and two between-subjects factors
Figure 3-2 The relationships between the independent variables and dependent variables
Figure 3-3 The data analysis diagram
Figure 4-1 The frequency of the students who did not complete the course by the instructors' language background and levels of language proficiency
Figure 4-2 Mean scores change in quality of posting by the type of feedback
Figure 4-3 Mean scores change in quality of posting by the type of feedback and instructors' language background
Figure 4-4 Mean scores change in quality of posting by the type of feedback and the levels of language proficiency
Figure 4-5 Mean scores change by instructors' language background and by the level of language proficiency for text feedback and audio feedback
Figure 4-6 Frequency distributions of the average audio feedback perception by instructors' language background
Figure 4-7 Frequency distributions of the average audio feedback perception by participants' level of language proficiency
Figure 4-8 Plot of interaction between levels of language proficiency and instructors' language background on participants' perceptions
FOREWORD
This study examined the effect of asynchronous embedded audio feedback on nonnative English-speaking, or English as a Foreign Language (EFL), students' higher-order learning, and their perceptions of audio feedback versus text-based feedback, when the students participated in asynchronous online discussions. In this study, the term "EFL" was used to imply the use of English in a community where it is not the primary means of communication (Asher and Simpson 1994, 1120). The term "foreign language" refers to a language that is not a native language in a country (1120). However, this study also used the term English as a Second Language (ESL) to refer to a non-native language that is widely used as a medium of education, government, and business (1120). In addition, this study examined how the impact and perceptions differed when the instructor providing the feedback was a nonnative English-speaking teacher (NNEST)1 versus a native English-speaking teacher (NEST) (Pasternak and Bailey 2004, 156). A quasi-experimental design was used, with feedback type (audio versus text-based) as a within-subject factor and with instructors' language background (NNEST and NEST) and students' level of language proficiency (high and low) as the between-subjects factors. The students were assigned to the levels of language proficiency (high and low) and to the two types of instructors (NNEST and NEST), but all of them experienced both audio feedback and text-based feedback. To accomplish this, an examination of the students' weekly online postings across three time periods (pretest, posttest 1, and posttest 2) and of their perceptions of the technique was carried out. Two instruments were used to examine the effect of embedded audio feedback: (a) a scoring rubric (Ertmer and Stepich 2004, under "Learning Outcomes"), and (b) an audio feedback survey examining students' responses to audio and text-based feedback (Ice 2008). Specifically, for this study, the EFL students' weekly scores indicating the quality of online discussion postings under the audio and text-based feedback delivery methods, and their perceptions on the survey, were used as dependent variables. The three independent variables of this study were: (a) students' level of language proficiency; (b) embedded audio feedback versus text-based feedback; and (c) nonnative (NNEST) or native English-speaking (NEST) instructors as providers of feedback. The quantitative data were analyzed with descriptive statistics, logistic regression analysis, a Wilcoxon Signed Rank Test, an independent t-test, a mixed-effects ANOVA, and a two-way between-groups ANOVA.

The results indicated that both audio feedback and text-based feedback were effective in promoting EFL students' higher-order learning and that both types of feedback were perceived as effective. The results also indicated no significant differences between the instructor groups (NNEST and NEST) or between the students' levels of language proficiency (high and low) in the improvement of the quality of the students' online postings or in their perceptions of audio feedback. However, the effect of audio feedback on the quality of online postings differed depending on the students' level of language proficiency. In this study, the students at the low level of language proficiency were more likely to drop the course and/or to receive low scores on their online postings. Nevertheless, the students at the low level of language proficiency perceived that audio feedback helped them retain the course information more than text-based feedback did. Finally, the students in the NEST group reported higher motivation and retention than the students in the NNEST group. The study has implications for instructors and designers creating online learning environments that include asynchronous online discussions with EFL students.

1. NEST/NNEST terminology is consistent with the literature of Teachers of English to Speakers of Other Languages, Inc. (TESOL), a global education association.
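For readers who want a concrete picture of the analyses named above, the following is a minimal sketch, not the author's analysis script, of how the two-way between-groups ANOVA (instructors' language background by students' proficiency level on perception scores) could be run in Python with statsmodels. The data frame, column names, and values are hypothetical placeholders.

```python
# Hypothetical sketch of the two-way between-groups ANOVA described above.
# Data values and column names are invented, not the study's data.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# One row per student; perception scores are illustrative only.
data = pd.DataFrame({
    "perception":  [4.2, 3.8, 4.5, 3.9, 4.1, 3.5, 4.4, 3.7],
    "instructor":  ["NEST", "NEST", "NNEST", "NNEST"] * 2,
    "proficiency": ["high"] * 4 + ["low"] * 4,
})

# Factorial model with both main effects and their interaction; the
# Type II ANOVA table reports an F test for each effect.
model = ols("perception ~ C(instructor) * C(proficiency)", data=data).fit()
print(sm.stats.anova_lm(model, typ=2))
```

The interaction term in the formula is what tests whether the effect of instructor background differs across proficiency levels, which is the pattern the results above describe.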
ACKNOWLEDGEMENTS
My journey to the doctoral program at Purdue University began in 1998 when I was a visiting scholar at The George Washington University in Washington, D.C. At that time, I did not have any specific plans to complete a Ph.D. in the U.S. However, my life path changed dramatically in 2003 when I returned to The George Washington University as a Fulbright Scholar. I would like to start my acknowledgements by expressing my deep appreciation to all of my friends from The George Washington University and Georgetown University who guided, supported, and helped me find my true destiny in life: Dr. Christine Meloni, Dr. Donald Weasenforth, Prof. Margaret Kirkland, Dr. Susan Willens, Prof. Virginia Lezhnev, Dr. George Bozzini, and Dr. Stuart Umpleby. The love and support of my friends in Washington, D.C. encouraged me to begin a Ph.D. program in the U.S. In 2006, my dream of obtaining my Ph.D. became possible thanks to the initiative of Dr. Anatoli Rapoport, Dr. Lynn Nelson, and Dr. Andrew Gillespie from Purdue University. I am indebted to these individuals, whom I feel have changed my life for the better. Here at Purdue, I have been blessed by meeting and working with a very kind and supportive advisor, Dr. Jennifer Richardson. I am very grateful to Dr. Richardson for giving me an opportunity to find my research focus and for guiding me through all of the stages in the program. My life at Purdue has also been enlightened by having such a nice friend, Aggie Ward, who fills my days with joy and sunshine. I also feel deep appreciation for Dr. James Lehman and Dr. Timothy Newby for continuously supporting me during my study in the program. My life at Purdue has also rewarded me with a great friend, Dr. Dazhi Yang, who is my role model and a life-long mentor. I was blessed to meet Dr. Phil Ice in 2008 when he was the guest speaker in the online course taught by Dr. Richardson. I still appreciate the moment when Dr. Ice introduced his research on audio feedback. At first, I thought there was no difference between the types of feedback. But then Dr. Ice sent me a file with audio feedback in which he had recorded his voice! That unforgettable moment changed my view of feedback in an online environment! I am very grateful to Dr. Ice for introducing me to the research on audio feedback, which eventually became the topic of my doctoral dissertation. Thanks to Dr. Richardson, I
have been fortunate to work with Dr. Luciana de Oliveira and Dr. Yukiko Maeda on my dissertation study. I greatly appreciate the patience, understanding, and support of Dr. Maeda during our work on the statistical analysis of the final results of the experiment. As I had never worked with quantitative data before, I will never forget how clearly and easily Dr. Maeda explained the most complicated statistical terminology to me. I have been, and always will be, amazed by Dr. de Oliveira's enthusiasm and motivation as a person and as a researcher. Thanks to Dr. de Oliveira, I was able to focus on an appropriate scope of EFL in my dissertation study. My experiment for this dissertation study would never have been realized without Adrie Koehler, who provided the native English speaker voice and who also helped with editing this dissertation. I would also like to express my gratitude to Dr. Natalya Nickolayevna Alexeeva from Yakutsk (Russia), who helped with recruiting the participants and providing computer labs in Yakutsk for the pilot studies in 2009 and 2010 and, finally, for this dissertation study in 2011. I would like to express my gratitude to Hans Aagard and Kim Arnold from Information Technology at Purdue (ITAP) for giving me an opportunity to gain invaluable experience as a researcher and an instructional designer. The experience gained at ITAP helped me work on the data analysis for my dissertation study. This study has been completed thanks to the financial support of The International Research Foundation for English Language Education (TIRF). Finally, this dissertation has been realized thanks to the patience, understanding, and encouragement of my husband Ivan, my mom, my sister Svetlana, and my sister-in-law Alyona. I would never have been able to succeed without the support of my family and the help of my friends Ayesha Sadaf, Nadezda Pimenova, Lisette Reyes-Paulino, Constance Harris, Jessica Clayton, Andrea Meloni, Oksana Prokhvacheva, Maria Everstova and Daria Unarova.
LIST OF ABBREVIATIONS
EFL: English as a Foreign Language
ESL: English as a Second Language
NNEST: Nonnative English-speaking teacher
NEST: Native English-speaking teacher
TF: Text-based feedback
AF: Audio feedback
CMC: Computer-mediated communication
TOEFL: A standardized test for proficiency in English as a Foreign Language
PBT: Paper-based test
CHAPTER ONE

INTRODUCTION
Problem Statement

As online courses in U.S. higher education continue to gain popularity, students from different countries and cultures have the opportunity to study under the same virtual "roof" while remaining physically and socially within their own countries and cultures (Gunawardena and LaPointe 2007, 600; Wang 2006, 69). Specifically, globalization, internationalization, and the cultural diversity of students have influenced the issues of planning, designing, and implementing online courses across geographic boundaries (Gunawardena and McIsaac 2004, 384-85). Therefore, instructors are increasingly looking for new and more effective techniques to promote learning among their students. One such technique, audio feedback, has been shown to strengthen the instructor's ability to affect learning and to create more personalized communication with students (Ice et al. 2007, 3). This study investigated the effectiveness of audio feedback provided for English as a Foreign Language (EFL) students.

Computer-mediated communication (CMC), by removing physical barriers and by allowing students to create, exchange, and perceive information using the Internet and the World Wide Web (WWW), facilitates collaborative learning and initiates meaningful conversation in cross-national settings (Gunawardena et al. 2001, 85). From a constructivist perspective, CMC based on asynchronous forms of communication (i.e., asynchronous online discussions) can support students' active learning and collaboration by engaging them in discussions to construct their own knowledge (Romiszowski and Mason 2004, 405). Asynchronous online discussions can enhance rich interactions and flexibility between students and teachers by removing the transactional distance that arises when teaching and learning occur in separate locations (Moore 2007, 89). In addition, asynchronous online discussions, by providing time to read and respond to a message, can support greater student reflection and critical thinking (Romiszowski and Mason 2004, 424).
A number of studies have reported that asynchronous online discussions can be a beneficial way to promote critical thinking among EFL students (Biesenbach-Lucas 2003, 39; Warschauer 1997, 472). Findings have shown that EFL students rate online interactions (i.e., sharing ideas and experiences) as a major benefit of participating in asynchronous online discussions (Weasenforth, Biesenbach-Lucas, and Meloni 2002, 74). Similarly, Gunawardena and McIsaac (2004) have argued that EFL students prefer participating in asynchronous online discussions because they understand online postings more easily than verbal discussions in face-to-face classrooms (384). Studies have also found evidence that EFL students' asynchronous online postings can be more lexically and syntactically complex than their discussions in face-to-face classrooms because the students have more time to read and reflect on asynchronous online postings (Warschauer 1997, 472; Weasenforth, Biesenbach-Lucas, and Meloni 2002, 74).

However, asynchronous online discussion, despite its flexibility, interaction, and open communication at any time and in any place, presents drawbacks such as the lack of non-verbal cues in text-based communication (Cifuentes and Shih 2001, 463). Text-based online communication can cause difficulties in students' understanding of each other, in interpreting words correctly, or in understanding culture-specific references (Gunawardena and McIsaac 2004, 385). For example, Zhao and McDougall (2008) found that EFL students perceived text-based online communication as very restrictive; they could not use body gestures or other non-verbal means of communication (69). Quinton and Smallbone (2010) found evidence that students might have difficulties in understanding or interpreting messages correctly (128). The researchers revealed that students need clarity of meaning to overcome misunderstanding, especially in asynchronous text-based communication. In summary, these studies suggest that clarity of meaning is a crucial element of successful online communication, and it is even more vital for EFL students participating in asynchronous online discussions.

Gunawardena and McIsaac (2004), in their extensive review of distance education in a cultural context, have argued that EFL students might be at a disadvantage when participating in online discussions with those for whom English is the first language because of "linguistic difference" and "cultural otherness" (384). Similarly, Zhang and Kenny (2010) found evidence that EFL students experienced language difficulties as non-native speakers; the language barrier may lead to difficulties in understanding
native speakers of English (29). Likewise, Shih and Cifuentes (2003) found that the delivery of text-based information to EFL students in an online setting could cause misunderstanding, especially when the students communicated with an instructor who was a native English speaker (86).

To overcome the limitations of text-based communication, research has shown the importance of the instructor's role in facilitating online discussions for successful online learning (Anderson et al. 2001, 5; Swan 2003, 25). Indeed, the instructor's role in providing guided instruction, encouraging critical reflection, and giving constructive feedback may enable students to overcome the difficulties of text-based online communication (Biesenbach-Lucas 2003, 38). To increase both the verbal and nonverbal cues of asynchronous interactions, studies have proposed using asynchronous audio, specifically instructional audio feedback (Ice et al. 2007, 18; Ice et al. 2008, under "Analysis and Conclusions"). Audio feedback, defined as a recorded message in online instruction, has been viewed as a means to overcome the lack of clarity in text-based communication. Audio feedback, when embedded in a student's written documents, has been shown to strengthen the instructor's ability to affect learning and to generate more personalized communication with students (Ice et al. 2007, 3).

Studies on audio feedback for EFL students in face-to-face environments have examined the effect of the technique on EFL students' writing performance, to determine whether the technique could help EFL students understand their native English-speaking teacher's comments correctly. The studies found that audio feedback might help EFL students understand instructional feedback better than written comments and thereby improve their writing (Boswood and Dwyer 1995, 54; Huang 2000, 228). The research has found that audio feedback is more personal; it may help EFL students understand feedback easily because the teacher speaks directly to each student on tape, adapting tone, inflection, and explanation to the particular student. In addition, Johanson (1999) found that audio feedback complemented both the social-constructivist philosophy and the process approach; audio feedback helped EFL students make the necessary cultural adjustments to understand academic relationships in U.S. universities (32).

Some empirical studies on the effectiveness of audio feedback in traditional EFL writing classes have examined the effects of audio feedback on students' writing when it was provided by an instructor who was a nonnative speaker of English. The studies found that audio feedback in EFL writing courses could help students understand their writing gaps better than written instructional comments could
(Huang 2000, 228; Morra and Asís 2009, 77). Studies of EFL learners found that audio feedback allowed teachers to provide extended explanations of writing problems and suggestions that helped students clarify their intended meaning, since EFL students might otherwise misunderstand teachers' written comments when revising their drafts (Huang 2000, 209; Syncox 2003, 75).

Today, due to the development of distance education and an increased number of online courses, researchers' and practitioners' interest in using audio feedback in asynchronous online environments has grown. In the field of distance education, research results have shown that students receiving instructional audio feedback described their experience as personal, enjoyable, complete, and clear (Kirschner, van den Brink, and Meester 1991, 185). The use of asynchronous embedded audio feedback in online courses increases retention of content, enhances learning community interactions, and is associated with the perception that the instructor cares more about the student (Ice et al. 2007, 13; Oomen-Early et al. 2008, 273). Conveying nuance is very important in asynchronous online discussions, as Swan (2003) explains, because real-time negotiation of meaning is impossible among instructors and students separated by space and time, making clarity of meaning even more imperative in online classes (19). Research on the effectiveness of audio feedback for EFL students in asynchronous online environments has found evidence that audio feedback helped EFL students improve their speaking and listening skills (Hsu, Wang, and Comac 2008, 192).

Overall, the majority of studies have examined the effectiveness of audio feedback in asynchronous online environments when it was provided by NESTs (Hsu, Wang, and Comac 2008, 192; Ice et al. 2007, 15; Oomen-Early et al. 2008). Yet limited research has examined whether audio feedback can be an effective technique when it is provided by NNESTs in asynchronous online environments (Olesova et al. 2011a, 30). There is still limited empirical evidence on whether the technique can be effective for EFL students' learning when they are enrolled in asynchronous online courses (Ice et al. 2010, 115).
Rationale

Hyland and Hyland (2006) argued that, although providing feedback for EFL students is one of the core principles for successful instruction and learning, the research literature has not been unequivocally positive about
its role in writing development, and teachers often have a sense that they are not making use of its full potential (83). This may be true because EFL students still struggle to produce accurate writing in the target language, which might restrict their participation and contributions in both traditional and online discussions. Although there has been much interest in examining EFL students' performance in asynchronous online courses at U.S. universities, limited research has thus far been conducted on the effectiveness of audio feedback for EFL students' learning in asynchronous online courses (Ice et al. 2010, 115). Given that asynchronous online courses, and asynchronous online discussions specifically, could become an effective way to promote critical thinking skills among EFL students, it is important to note that studies on audio feedback in both traditional and online courses have not investigated the effect of audio feedback on EFL students' higher-order learning. Furthermore, no studies have examined the degree to which asynchronous embedded audio feedback affects EFL students' higher-order learning when it is provided by a NNEST versus a NEST. The present study was an attempt to investigate the effect of embedded audio feedback in asynchronous online discussions and to shed light on the possible impact of embedded audio feedback versus text-based feedback on EFL students' higher-order learning, along with students' perceptions of the technique. Finally, the study also examined whether the impact and perceptions differed by the instructors' language background (NNEST versus NEST), because previous studies have revealed that EFL students face problems interpreting written communication from native speakers of English, which might lead to miscommunication and can negatively affect EFL students' online learning performance.
Purpose of the Study

The purpose of this quantitative study was to examine the effect of asynchronous embedded audio feedback on EFL students' higher-order learning. In addition, this study examined EFL students' perceptions of the technique versus text-based feedback when students participated in asynchronous online discussions. Moreover, this study examined how the impact and perceptions differed when the instructor providing the feedback was a NNEST versus a NEST. To accomplish this, an examination of EFL students' weekly online postings and their perceptions of the technique was carried out according to the students' level of English language proficiency. Specifically for this study, EFL students' weekly scores indicating the quality of online discussion postings and their responses on
the audio feedback survey were used as dependent variables to measure the effectiveness of asynchronous embedded audio feedback among EFL students (Ice 2008).
Research Questions

RQ1: Is there a significant difference in scores on the quality of weekly discussion postings by type of feedback delivery method, instructor's language background, and/or student's level of language proficiency?

RQ2: Is there any interaction effect between the type of feedback delivery method, instructor's language background, and/or student's level of language proficiency on the scores on the quality of weekly discussion postings?

RQ3: Is there a significant difference in scores on perceptions of the type of feedback delivery method by instructor's language background and/or student's level of language proficiency?

RQ4: Is there any interaction effect between instructor's language background and student's level of language proficiency on scores on perceptions of the type of feedback delivery method?
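Taken together, RQ1 and RQ2 amount to testing the main effects and interaction terms of the mixed-design ANOVA described in the Foreword. As a sketch, with notation introduced here rather than taken from the original text, the model for the posting-quality scores can be written as:

```latex
% Sketch of the mixed-design ANOVA model implied by RQ1 and RQ2;
% the notation is hypothetical and does not appear in the original text.
\[
Y_{ijkl} = \mu + \alpha_i + \beta_j + \gamma_k
         + (\alpha\beta)_{ij} + (\alpha\gamma)_{ik} + (\beta\gamma)_{jk}
         + (\alpha\beta\gamma)_{ijk} + \pi_{l(jk)} + \varepsilon_{ijkl}
\]
```

Here Y_ijkl is the quality score of student l's weekly posting; alpha_i is the within-subject effect of feedback delivery method (audio versus text-based); beta_j and gamma_k are the between-subjects effects of instructor's language background (NNEST versus NEST) and student's proficiency level (high versus low); pi_l(jk) is the random effect of student l nested within the between-subjects groups; and epsilon_ijkl is residual error. RQ1 corresponds to testing the main effects, and RQ2 to testing the interaction terms; RQ3 and RQ4 pose the analogous questions for the perception scores.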
Significance of the Study

No study has examined the effects of asynchronous embedded audio feedback for EFL students enrolled in asynchronous online courses, and specifically the effects when asynchronous embedded audio feedback is provided by a NNEST versus a NEST. The significance of this study lies in the fact that the majority of previous studies examined the effects of audio feedback for EFL students in traditional face-to-face classrooms, while limited research has been done in asynchronous online courses. This study was intended to reveal the effect of asynchronous embedded audio feedback versus text-based feedback on the quality of weekly online discussion postings among EFL students when the audio feedback was provided by a NNEST versus a NEST. In addition, this study provides evidence about EFL students' perceptions of the technique compared with text-based feedback when they participated in asynchronous online discussions, and about whether those perceptions differed when the asynchronous embedded audio feedback was provided by a NNEST versus a NEST.
CHAPTER TWO

REVIEW OF THE LITERATURE
Feedback, as one of the core principles of teaching practice, plays a crucial role in encouraging and consolidating the learning process (Arbaugh and Hornik 2006, 4; Chickering and Gamson 1987, under "Seven Principles of Good Practice"; Hyland and Hyland 2006, 83). Feedback is defined as information provided by an agent (e.g., teacher, peer, book, parent, self, experience) regarding aspects of one's performance or understanding, and as information presented that allows comparison between an actual outcome and a desired outcome (Hattie and Timperley 2007, 81; Mory 2004, 745).
Literature Review Methodology

The literature search was conducted between 2008 and 2010 via ProQuest Research Library, ERIC, Purdue Library Catalog, Google Scholar, Wiley Online Library, EBSCOhost Academic Search Premier, Wilson OmniFile FT Mega Edition, and EBSCOhost Professional Development Collection, in order to collect and organize studies focused on the effective use of instructional audio or recorded feedback in different educational institutions and at different educational levels. The first exploratory search was conducted under the guidance of the study by Ice et al. (2007) in December 2008 and January 2009. The search identified a time frame of studies completed between 1962 and 2009; the starting point corresponds to when research on audio feedback first became available in the field, following the introduction of the first dictating machines into teaching for both native and nonnative speakers of English. A second search was completed between spring 2009 and fall 2009, focusing on the effect of audio feedback on students' learning outcomes in different fields. The two searches together yielded a total of 96 articles, both practical (n=66) and empirical (n=30), published between 1962 and 2009. Both searches used keywords such as the following: feedback OR audio feedback OR native and nonnative OR EFL
and ESL feedback OR online feedback OR computer-based feedback. The last search, in summer and fall 2010, produced results similar to the two previous searches and yielded a total of two empirical studies published between 2009 and 2010; these studies focused on students' perceptions as well as the impact of the technique on students' writing improvement. Because of the limited number of studies on audio feedback, studies were selected for the literature review when their definition of audio feedback matched this study's understanding of the technique, i.e., recorded instructional comments on students' work. In addition, a search for studies on the nature of feedback in second language instruction, conducted in spring 2010, resulted in the selection of a total of 70 peer-reviewed studies according to the criterion of using empirical research designs. Finally, a search for studies focused on EFL students and asynchronous online environments was conducted in fall 2010 and resulted in the selection of a total of 33 peer-reviewed studies.
Nonnative and Native English-Speaking Teachers

The issues relating to Nonnative and Native English-Speaking Teachers (NNESTs and NESTs) have been particularly important for research over the past several years (Braine 1999, 15; Medgyes 1992, 340). It should be noted here that native speaker has many contradictory definitions in the literature (Moussu and Llurda 2008, 315). In this study, the definition of a native speaker was based on a person's native language status as the core linguistic meaning of the definition. Therefore, this study defined the first language that an individual learned to speak as the native language, and the native speaker as someone for whom the first language was acquired naturally in childhood (Cook 1999, 185; Crystal 2003, 69). Like the definition of native speaker, the term nonnative speaker is also controversial in the literature (Moussu and Llurda 2008, 315). Having defined native speaker based on a person's native language, or the first language learned in childhood, this study defined nonnative speaker as someone who, in addition to the first language, uses a second language, or someone judged by his or her linguistic competence. According to the linguistic proficiency continuum, it is believed that only people who were born and brought up in English-speaking environments can be considered to have native competence (Cook 1999, 185; Crystal 2003, 69; Medgyes 1992, 340). Studies have stated that even the most advanced nonnative speakers can never reach native competence or native-like proficiency despite all learning factors (i.e., motivation, aptitude, experience, education) and efforts (Medgyes 1992, 340). Despite
continuing debates on the definitions of native and nonnative speakers in the field, this study used the terms "nonnative" (NNEST) and "native" (NEST) in a general sense, since both definitions have been widely accepted in most of the literature.

Studies have paid great attention to the native/nonnative speaker dichotomy (Liu 2009, 1; Moussu and Llurda 2008, 315). From a linguistic point of view, NESTs were in the past considered the only reliable source of linguistic data (Chomsky 1965, 25). In 1961, a tenet identifying the native speaker as the ideal language teacher was formulated at the Commonwealth Conference on the Teaching of English as a Second Language (Phillipson 1992, 195). However, Phillipson (1992) called this assumption the "native speaker fallacy," claiming that NNESTs can be prepared to gain the abilities that are, according to the tenet, associated with NESTs (i.e., fluency, correct usage of idiomatic expressions, and knowledge of the cultural connotations of English) (195). Similarly, Medgyes (1992) challenged the NNEST/NEST distinction, arguing that the two stand quite close to each other and both serve equally useful purposes in their own professional terms (349). Then, in 1996, George Braine initiated the NNEST movement at a colloquium at TESOL (Teachers of English to Speakers of Other Languages, Inc.) to address issues, concerns, and experiences with the audience. Finally, the Nonnative English Speaking Teachers (NNEST) Caucus in TESOL was established in 1998 thanks to George Braine, Jun Liu and Lía Kahmi-Stein, which allowed more research to be conducted in the area of NNESTs. Accordingly, TIRF (The International Research Foundation for English Language Education) and TESOL Quarterly made the subject of nonnative speakers a priority research topic. NNESTs have been a major research area from the establishment of the Caucus in 1999 until today (de Oliveira 2011, 229).

Despite the debates surrounding the NNEST/NEST dichotomy, the majority of studies have accepted these widely used terms, as have both teachers and researchers. Moussu and Llurda (2008), in their extensive review of research on NNESTs, pointed out the following issues that have been investigated in the field: 1) teacher education in EFL and ESL settings; 2) perceived advantages of NNESTs and NESTs in the EFL/ESL classroom; and 3) perceptions and attitudes of EFL/ESL students and intensive English program administrators (319). However, little is still known about the relationship between EFL/ESL students' perceptions and attitudes regarding NNESTs' and NESTs' pedagogical skills and students' performance. There is an urgent need for
more data-driven quantitative empirical studies on NNESTs (Moussu and Llurda 2008, 332). As this study was aimed at examining the effect of audio feedback provided by NNESTs and NESTs for EFL participants, it is necessary to briefly review the results of recent studies investigating students' attitudes towards NNESTs and NESTs in relation to instructors' skills. The results have shown that students paid more attention to teachers' professional skills than to their language background (Butler 2007, 749; Ling and Braine 2007, 265). Similarly, other studies have found that NNESTs were perceived positively for their literacy skills, their capability to motivate, their function as role models, and their understanding of learners' difficulties, while NESTs were appreciated for their speaking/listening skills and cultural knowledge (de Oliveira 2011, 229; Mahboob 2003, 144). Overall, studies have shown that experience and professional skills are more important than a teacher's language background (Butler 2007, 749; Cargile 1997, 440; Pasternak and Bailey 2004, 160-61).
Feedback

Gibbs and Simpson (2004-05) reviewed a wide range of studies on feedback to elaborate seven conditions under which feedback may influence students' learning and increase academic success (3). Gibbs and Simpson's seven conditions of feedback are: 1) sufficient feedback is provided, both often enough and in enough detail; 2) the feedback focuses on students' performance, on their learning, and on actions under the students' control, rather than on the students themselves and on their characteristics; 3) the feedback is timely, in that it is received by students while it still matters to them and in time for them to pay attention to further learning or receive further assistance; 4) the feedback is appropriate to the purpose of the assignment and to its criteria for success; 5) the feedback is appropriate in relation to students' understanding of what they are supposed to be doing; 6) the feedback is received and attended to; and 7) the feedback is acted upon by the students (16-25). The seven conditions for providing feedback identified by Gibbs and Simpson were based on the principles of effective feedback by Chickering and Gamson (1987): students need appropriate feedback on performance to benefit from courses, and students need chances to reflect on what they have learned, what they still need to know, and how to assess themselves (Chickering and Gamson 1987).

Feedback is also viewed as a socially constructed process (Lea and Street 1998, 162). In a constructivist context, feedback is provided in the
form of discussion to help students improve learning, academic performance, and reflection (Mory 2004, 772; Quinton and Smallbone 2010, 125). Students' reflection is at "the heart" of formative feedback; students use the feedback message to modify their own work and improve their own performance (Nicol 2006, 592). There are seven principles of good feedback practice in relation to learner self-regulation; such feedback: 1) helps clarify what good performance is (goals, criteria, and standards); 2) facilitates the development of self-assessment and reflection in learning; 3) delivers high quality information to students about their learning; 4) encourages teacher and peer dialogue around learning; 5) encourages positive motivational beliefs and self-esteem; 6) provides opportunities to close the gap between current and desired performance; and 7) provides information to teachers that can be used to help shape teaching (Nicol and Macfarlane-Dick 2006, 205). Similarly, Quinton and Smallbone (2010) stated that students need formative feedback that helps them make connections between the characteristics of their work and the ways to improve their work in the future (127). Feedback in a constructivist context provides intellectual tools and serves as an aid to help learners construct their internal reality (Mory 2004, 772). Finally, further research is needed to examine how feedback functions within higher-order learning and within constructivist learning environments (Mory 2004, 777).
Feedback in Asynchronous Online Environments

Instructional feedback in asynchronous online environments, and specifically in asynchronous online discussions, is a critically important strategy because students participating in asynchronous online discussions feel stressed, disconnected, or left behind when they do not receive any feedback on their postings (Ertmer et al. 2007, 414). Feedback in asynchronous online discussions supports students in successfully completing the online course; indeed, the lack of feedback is viewed as one of the reasons why students drop courses (Ertmer et al. 2007, 414). Schwartz and White (2000) specified qualities of online feedback: it should be multidimensional, nonevaluative, supportive, student-controlled, timely, and specific (168-69). Further, Mory (2004) outlined the following qualities for online feedback: 1) prompt, timely, and thorough online feedback; 2) ongoing formative feedback about online group discussions; 3) ongoing summative feedback about grades; 4) constructive, supportive, and substantive online feedback; 5) specific, objective, and individual
online feedback; and 6) consistent online feedback (776). In addition, Vasilyeva et al. (2007) classified the following functions of web-based feedback: 1) confirming receipt of the user's response; 2) informing the user about his or her performance (how many tasks were performed, the number and ratio of correct answers, the time of test processing, etc.); 3) correcting the user (in case he or she has not given a correct answer); 4) explaining (the feedback could include an explanation of why the user's answer was considered correct, or guidance toward the correct answer in the case of a wrong answer); 5) evaluating (for example, in the case of answer-until-correct feedback); 6) motivating the user; 7) rewarding the user; and 8) attracting his or her attention (347).

However, there are several problems with delivering feedback in asynchronous online discussions. The most common problem is that students are unable to understand feedback comments and to interpret them correctly (Higgins 2000, under "Discourse"; Quinton and Smallbone 2010, 128). Adapting Higgins's framework for interpretation, Carless (2006) argued that students encounter challenges in interpreting comments and that there is an emotional process students go through while receiving feedback (220). The impact of feedback can threaten students' learning engagement (Carless 2006, 221). To understand students' perceptions of the effectiveness of feedback, Poulos and Mahony (2008), in their qualitative analysis, elaborated the key themes related to the effectiveness of feedback: 1) students' perceptions of feedback were related to the individual meaning attributed to the feedback, the accessibility of lecturers to provide feedback, the types of feedback, and feedback related to criteria, marks, and comments; 2) the impact of feedback was related to timeliness, significance, and first-year experience, where timeliness related to the need for feedback as early as possible; and 3) the credibility of feedback was related to students' perceptions of the lecturers themselves; the lecturers' general ability and also their biases influenced the credibility of the feedback they provided (145). Indeed, to interpret feedback comments correctly, students need meaningful and frequent instructional feedback (Rossman 1999, 94); they may greatly benefit from teacher presence (Anderson et al. 2001, 13) and clear instructional comments (Biesenbach-Lucas 2003, 29).

Understanding instructional feedback has become crucial at a time when asynchronous online environments allow the enrollment of students who are geographically dispersed and who come from different cultures and countries (Shih and Cifuentes 2003, 82; Zhang and Kenny 2010, 17). It is becoming common practice for university online courses to enroll international and transnational or nonnative students; such online courses
are offered in countries other than those where the students are located (Zhang and Kenny 2010, 19). Accordingly, recent research on university online courses has extensively examined the impact of asynchronous online discussion on nonnative students; the majority of studies have concentrated on American higher education online courses in which students were nonnative speakers of English, or English as a Foreign Language (EFL) students (Biesenbach-Lucas 2003, 27). Feedback provided in asynchronous online discussions is beneficial for EFL students; receiving instructional feedback on their ideas and opinions helps reduce EFL students' feelings of isolation (Birch and Volkov 2007, 291; Weasenforth, Biesenbach-Lucas, and Meloni 2002, 73). Interestingly, research has demonstrated that participation in asynchronous online discussions helps EFL students not only develop English language skills but also express ideas in their own words rather than reciting word for word from other sources (Biesenbach-Lucas 2003, 30; Birch and Volkov 2007, 303).

However, the majority of research findings have shown that EFL students enrolled in American higher education online courses and participating in asynchronous online discussions face serious challenges related to language proficiency, leading to misinterpretation of instructional comments (Olesova, Yang, and Richardson 2011b, 75; Shih and Cifuentes 2003, 87). Likewise, Zhang and Kenny (2010) found evidence that the language barrier might have prevented EFL students from contributing to online discussions as often as they would have desired (24). Moreover, limited English language proficiency could restrict EFL students' ability to be productive in online courses; e.g., the students need more time than their English-speaking peers to read and compose messages (Zhang and Kenny 2010, 25). In addition, because asynchronous online discussions are built on written communication and lack visual and aural cues, language skills seem to be among the most significant skills required to participate in asynchronous online discussions (Black 2005, 18). Therefore, in order to provide effective feedback for EFL students when they participate in asynchronous online discussions, it is necessary to understand the nature of feedback in second language learning.
Feedback in Writing

In language learning, feedback is provided for different purposes: for example, to signal or point out an error; to elicit the correct form; to explain why the original utterance is wrong; to provide the correct form; to acknowledge the correct form; to point out an undesirable verbal habit,
gesture, or manner; to teach a strategy for better communication; and to give general advice for better performance (Tsutsui 2004, 378). It is widely believed that, when language learners receive any form of feedback, they improve their accuracy and fluency in the target language; as Schachter (1983) defined it, feedback in a language class becomes a "nutritional need" for language learners (175); it is "knowledge of results" (as cited in Bonnel 2008, 290). Taking into consideration that the focus of this study is the effectiveness of online feedback for EFL students, and that asynchronous online discussions require writing skills for successful online communication, the analysis of the literature covers the effectiveness of feedback on writing in the EFL/ESL context.

Since the 1970s, research on the role of instructional feedback in English as a Second Language (ESL) has shifted from examining the types of feedback to investigating what makes feedback on both written and spoken language successful (Lynch and Maclean 2003, 20). The analysis of the literature revealed that the nature of instructional feedback in the field of EFL/ESL differs in quality and quantity from the ways in which instructional feedback is provided in first-language contexts. In research on the effectiveness of feedback, Stern and Solomon (2006) noted a difference between the ways in which language teachers and non-language teachers give feedback: language teachers state that they usually provide more feedback on both spelling and grammar correction (micro-level) and on a paper's content and organization (macro-level) than do non-language teachers, who tend to provide feedback mostly at the macro-level (30).

The majority of studies on the effectiveness of instructional feedback in the ESL context focus on examining the effectiveness of ways to provide corrective feedback, or feedback on form (Bitchener 2009; Ellis 2009; Gascoigne 2004, 72; Rahimi 2009; Saito 1994; Sheen, Wright, and Moldawa 2009; Truscott and Hsu 2008; Yoshida 2008). Researchers who compare instructional feedback and peer feedback in ESL conclude that teachers' comments are more effective than peer review and that students prefer to receive instructional feedback (Connor and Asenavage 1994, 267; Tsui and Ng 2000, 166). According to Bitchener and Knoch (2009a), ESL teachers assume that corrective feedback, or feedback on linguistic forms, helps students to acquire and demonstrate mastery in the use of targeted linguistic forms and structures (329). In addition, ESL teachers usually perceive themselves as language instructors rather than writing teachers (Zamel 1985, 86). However, since the 1990s, the focus of educators in
ESL has switched to feedback on content, recognizing that ESL learners also need constructive feedback on the content of their performance. This literature review of how and when to provide feedback in an EFL/ESL environment will focus only on the nature of instructional written and oral feedback on writing for EFL/ESL learners, specifically for EFL students, in order to allow generalization of this study's findings.
Written Feedback

Most studies on written feedback in ESL focus on error correction, or corrective feedback (Bitchener 2009; Ellis 2009; Rahimi 2009; Saito 1994; Sheen, Wright, and Moldawa 2009; Truscott and Hsu 2008; Yoshida 2008). Debates about the effectiveness of providing corrective feedback in L2 writing have continued in the field for several years (Bitchener and Knoch 2009b; Robb, Ross, and Shortreed 1986; Truscott and Hsu 2008; Truscott 1996; Yoshida 2008). Truscott (1996), in his case against grammar correction in ESL writing classes, argued that providing corrective feedback on student writing was ineffective and harmful and should therefore be abandoned in the second language writing classroom (327). However, Truscott (1996) analyzed foreign languages other than English, including German and Spanish, so his results cannot be generalized to the field of EFL (334). To analyze the effectiveness of corrective feedback in EFL, Truscott (1996) relied on only a few studies; for example, he stated that Cohen and Robbins (1976) found that correction did not have any significant effects on students' errors (335). Another study reviewed by Truscott was carried out by Robb, Ross, and Shortreed in 1986, who also did not find any significant differences in writing ability among 134 Japanese college freshmen in an English composition class (331). Finally, Truscott (1996) analyzed the positive outcomes in the study by Fathman and Whalley conducted in 1990 (339). He argued that, even though the researchers found that teachers' comments helped students write a better final draft, it was not clear whether their students would become better writers in the future. The small number of studies used in the analysis therefore makes any overall generalization of Truscott's findings, regarding whether corrective feedback should be provided for EFL students, very problematic. In addition, the reviewed studies used different research designs and methodologies, which makes acceptance of Truscott's claim difficult.

In contrast to Truscott's (1996) negative findings on the effectiveness of corrective feedback, the majority of studies have found that using corrective
feedback in ESL helps learners improve accuracy in their writing (Bitchener and Knoch 2009b, 210; Chandler 2003, 279). The studies have used two terms: direct (explicit) and indirect (implicit) corrective feedback (Ferris and Hedgcock 1998, 206). The two terms have not always been used consistently in the literature. Direct feedback may be defined as the provision of the correct linguistic form by the teacher to the student, while indirect feedback may be defined as occurring "when the teacher indicates in some way that an error has been made – by means of an underline, circle, code, or other mark – but does not provide the correct form, leaving the student to solve the problem" (Ferris 2006, 83). Table 2-1 and Table 2-2 show Bitchener and Knoch's (2008, 412) analysis of previously presented studies on written corrective feedback (WCF). The tables show that the type of feedback provided during experimental studies is a major factor in ensuring a study's external and internal validity. Parameters such as the proficiency level of the population examined, the elicitation task, the incentive students receive for participating in the experiment, and the feedback techniques usually need to be carefully controlled in experimental research on feedback efficacy (Guénette 2007, 45).
Participants 72 ESL learners(intermediate) USA college
60 Spanish learners (intermediate) USA college
65 ESL learners USA university
72 ESL learners USA college
Study Fathman & Whalley 1990
Kepner 1991
Polio, Fleck, & Leder 1998
Ferris & Roberts 2001
Error correction; editing instruction; text revision. Control Indirect underlining &coding; Indirect underlining; Control
WCF type Indirect underlining; Content comment; Content comment & indirect underlining Control Direct error correction; Control
1semester
7 weeks
1semester
Duration A few days
Yes Groups 1 and 2 outperformed group 3
No
No
Effective Yes Groups 1 and 3 outperformed groups 2 and 4
17
New texts not measured; text revision only
No pre-test measurement; No control over journal entry length; No control over texts written out-of-class Different instruments in post-test (journal entry v in-class essay)
Limitations New texts not measured; text revision only; Not longitudinal; Focus on all errors
Table 2- 1 Control Groups Studies Claiming WCF Improves Accuracy (Bitchener and Knoch 2008, 412)
Table 2-1 Cont.

Study: Ashwell 2000
Participants: 50 EFL learners, Japan university
WCF type: Content comments then indirect underlining & coding; indirect underlining & coding then content comment; mix of (1) & (2); control
Duration: 1 semester
Effective: Yes (accuracy gains for groups 1-3 in draft 3)
Limitations: New texts not measured; text revision only; effect of intervention variables possible

Study: Bitchener 2008
Participants: 75 ESL learners (low intermediate), New Zealand language schools
WCF type: Direct error correction with written & oral metalinguistic explanation; direct error correction; control
Duration: 8 weeks
Effective: Yes

Study: Sheen 2006
Participants: 177 ESL learners (intermediate), US community college
WCF type: Written direct correction; written direct meta-linguistic; control
Duration: 8 weeks
Effective: Yes
Table 2-2 Studies without Control Group Predicting WCF Improves Accuracy (Bitchener and Knoch 2008, 414)

Study: Lalande 1982
Participants: 60 German FL learners (intermediate), USA university
WCF type: (1) Direct error correction; (2) guided learning and problem solving
Duration: 10 weeks
Effective: Improvement; group 1 outperformed group 2 in post-test

Study: Ferris 1995
Participants: 30 ESL learners, USA university
WCF type: Selective indirect underlining
Duration: 1 semester
Effective: Improvement but inconsistent in some error categories and essays

Study: Ferris 1997
Participants: 47 ESL learners, USA university
WCF type: Teacher commentary & selective indirect underlining
Duration: 1 semester
Effective: Improvement

Study: Ferris, Chaney, Komaru, Roberts, & McKee 2000
Participants: 92 ESL learners, USA university
WCF type: Mix of direct, indirect (coded & uncoded); notes (marginal & end-of-text); text revision
Duration: 1 semester
Effective: Improvement; 81% accurate revision by end of semester

Study: Chandler 2000
Participants: 30 ESL learners, USA college
WCF type: (1) Indirect underlining & revision; (2) indirect underlining only
Duration: 1 semester
Effective: Improvement; group 2 reduced errors by one third in essay 5
Later, Ellis (2009) presented a valuable typology of teacher options for correcting linguistic errors in students' written work (Table 2-3). He stated that the typology is valuable not only for the design of experimental studies but also for descriptive research. After reviewing studies on corrective feedback, Ellis (2009) argued that the search for the single most effective way of providing written corrective feedback may be misguided if one accepts that corrective feedback needs to take into account the specific institutional, classroom, and task contexts (106). Ellis concluded that a sociocultural perspective on corrective feedback would emphasize the need to adjust the type of feedback offered to suit the students' stage of development, although how this can be achieved practically remains unclear in the case of written corrective feedback (106).

Table 2-3 Types of Teacher Written Corrective Feedback (Ellis 2009, 98)

A Strategies for providing CF

1 Direct CF
Description: The teacher provides the student with the correct form.
Studies: e.g. Lalande (1982); Robb et al. (1986).

2 Indirect CF
a Indicating + locating the error
Description: The teacher indicates that an error exists but does not provide the correction. This takes the form of underlining and use of cursors to show omissions in the student's text.
Studies: Various studies have employed indirect correction of this kind (e.g. Ferris and Roberts 2001; Chandler 2003).
b Indication only
Description: This takes the form of an indication in the margin that an error or errors have taken place in a line of text.
Studies: Fewer studies have employed this method (e.g. Robb et al. 1986).

3 Metalinguistic CF
Description: The teacher provides some kind of metalinguistic clue as to the nature of the error.
a Use of error code
Description: Teacher writes codes in the margin (e.g. ww = wrong word; art = article).
Studies: Various studies have examined the effects of using error codes (e.g. Lalande 1982; Ferris and Roberts 2001; Chandler 2003).
b Brief grammatical descriptions
Description: Teacher numbers errors in text and writes a grammatical description for each numbered error at the bottom of the text.
Studies: Sheen (2007) compared the effects of direct CF and direct CF + metalinguistic CF.

4 The focus of the feedback
Description: This concerns whether the teacher attempts to correct all (or most) of the students' errors or selects one or two specific types of errors to correct. This distinction can be applied to each of the above options.
a Unfocused CF
Description: Unfocused CF is extensive.
Studies: Most studies have investigated unfocused CF (e.g. Chandler 2003; Ferris 2006).
b Focused CF
Description: Focused CF is intensive.
Studies: Sheen (2007), drawing on traditions in SLA studies of CF, investigated focused CF.

5 Electronic feedback
Description: The teacher indicates an error and provides a hyperlink to a concordance file that provides examples of correct usage.
Studies: Milton (2006).

6 Reformulation
Description: This consists of a native speaker's reworking of the student's entire text to make the language seem as native-like as possible while keeping the content of the original intact.
Studies: Sachs and Polio (2007) compared the effects of direct correction and reformulation on students' revisions of their text.

B Students' response to feedback
Description: For feedback to work for either redrafting or language learning, learners need to attend to the corrections. Various alternatives exist for achieving this.

1 Revision required
Studies: A number of studies have examined the effect of requiring students to edit their errors (e.g. Ferris and Roberts 2001; Chandler 2003).

2 No revisions required
a Students asked to study corrections
Studies: Sheen (2007) asked students to study corrections.
b Students just given back corrected text
Studies: A number of studies have examined what students do when just given back their text with revisions (e.g. Sachs and Polio 2007). No study has systematically investigated different approaches to revision.
Studies found that effective indirect feedback helps ESL students engage in guided learning and problem solving, which promotes reflection, noticing, and attention (Ferris and Roberts 2001, 177; Lalande 1982, 141). In addition, indirect feedback helps students make more progress in accuracy over time than direct feedback does; indirect feedback resulted in the production of fewer initial errors (Ferris and Roberts 2001, 171). Studies also found that indirect corrective feedback may be provided in one of four ways: underlining or circling the error; recording in the margin the number of errors in a given line; showing where the error has occurred; or using a code to show what type of error it is (Ferris and Roberts 2001, 177; Robb, Ross, and Shortreed 1986, 85).
Bitchener and Knoch (2008) conducted an experiment to investigate the effect of different types of feedback for migrant and international students (418). They found that students who received direct corrective feedback with written and oral meta-linguistic explanation outperformed those who did not receive WCF. The level of accuracy was retained over seven weeks, and there was no difference in the extent to which migrant and international students improved the accuracy of their writing as a result of WCF. The purpose of their two-month pretest-posttest experimental study was to investigate: 1) the extent to which targeted WCF on student writing results in improved accuracy, 2) whether there is a differential effect on accuracy for different WCF options, and 3) whether there are differences in the extent to which migrant and international students improve the accuracy of their writing as a result of WCF. The international students in the study (33 males and 42 females) were predominantly from East Asian countries. The migrant students (21 males and 48 females) were from a wide range of backgrounds (Bitchener and Knoch 2008, 419). In addition to the reviews in the tables, it is necessary to note that when Ferris and Roberts (2001) examined 72 university students in their experimental study, they found that indirect feedback helped the students' self-editing, with success ratios ranging from 47% (sentence structure) to 60% (articles) (161). The researchers also did a statistical reanalysis of the data, combining treatable and untreatable errors. Treatable errors include verbs, noun endings, and articles; untreatable errors include word choice and sentence structure (Ferris and Roberts 2001, 173). The reanalysis showed a statistically significant difference in students' ability to edit treatable and untreatable error types. At the same time, however, Ferris and Roberts (2001) found that 31% of the EFL students in their study preferred direct feedback, with the teacher correcting all errors for them, and only 19% preferred indirect feedback, with errors marked but not labeled (173). Rahimi (2009) investigated the effects of indirect corrective feedback for 56 Iranian EFL students over a four-month period (219). The results did not show a significant effect for the teachers' feedback, but they showed a main effect for practice and for the interaction of feedback with practice. Rahimi (2009) ran two independent t-tests to determine whether the difference between the error means of the first essays of the corrective feedback group and the no-feedback group, and that of their last essays, was significant (226). The second t-test showed a significant difference between the last essays written by the two groups. The results indicated that the interaction of feedback with practice had helped the corrective feedback group improve over time. Rahimi's (2009)
findings contradicted those of Bitchener, Young, and Cameron (2005), who did not find a significant difference between the correction and no-correction groups (229). Another study conducted in an EFL environment was carried out by Li and Lin (2007), who reported the impact of revision and indirect teacher feedback on 93 Chinese EFL college students (230). The research revealed a beneficial role of revision and teacher feedback in promoting formal accuracy for Chinese EFL university student writers. Li and Lin's (2007) study clearly shows that receiving teacher feedback without engaging in revision tasks is not effective in facilitating accuracy in the classroom, even though a teacher's feedback is expected to be an important component of English instruction (236). In addition to the studies that examined direct and indirect feedback, other studies investigated focused and unfocused corrective feedback. Sheen, Wright, and Moldawa (2009) examined focused and unfocused corrective feedback on the acquisition of English articles (556). Focused feedback is usually directed at a single linguistic feature; unfocused feedback is defined as the traditional approach to correcting written errors in students' writing (Sheen, Wright, and Moldawa 2009, 557). The researchers employed a quasi-experimental pretest-treatment-posttest-delayed posttest design using six intact adult intermediate classes totaling 80 students. They concluded that focused written error correction directed at indefinite (first mention) and definite (second mention) article errors resulted in greater accuracy than unfocused correction directed at a range of grammatical errors. Their final result, that focused CF is more effective than unfocused CF, differed from that of Ellis et al. (2008). Ellis et al. did not find significant differences in the effects of focused and unfocused CF, with both proving more effective than no correction in a delayed posttest (Sheen, Wright, and Moldawa 2009, 565). In contrast to the debates on corrective feedback in ESL writing and to the overwhelming majority of research studies on the effectiveness of corrective feedback, there are also studies that compared the effectiveness of feedback not only on form but also on content in ESL writing (Ashwell 2000, 227; Chiu and Savignon 2006, 97; McGarrell and Verbeem 2007, 228). These studies compared feedback on form with feedback on content and found positive results for feedback on content. However, by using only one type of feedback on form, the studies share a limitation: it cannot be assumed that all types of feedback on form are equal (Guénette 2007, 47).
Chiu and Savignon (2006) examined indirect content-based feedback followed by corrective feedback for EFL students (97). The researchers found that indirect or question-form comments helped create an informal atmosphere between teacher and students. Later, McGarrell and Verbeem (2007) found that content-based feedback might motivate students to look beyond surface errors to develop writing and communicative skills (235). The researchers stated that feedback on content helps students consider their intended meanings at a deeper level of engagement with their texts. Furthermore, McGarrell and Verbeem (2007) noted that content-based feedback focuses on the deeper meaning in writing and provides suggestions for how students can elucidate the meaning in the text (231). Fathman and Whalley (1990) revealed that feedback on grammar and content, whether given alone or simultaneously, had a positive effect on revision (185). Even though Fathman and Whalley (1990) found positive results for feedback on content, Gascoigne (2004), conducting research in beginning EFL composition classes, did not find any meaningful effect of content feedback beyond some general positive teacher comments (75). Overall, the literature on whether to use direct or indirect corrective feedback in EFL writing does not offer a clear conclusion, because some studies have reported an advantage for indirect feedback and others have reported no differences between the two types (Bitchener and Knoch 2008, 425; Ferris and Roberts 2001, 161; Lalande 1982, 140; Robb, Ross, and Shortreed 1986, 83; Semke 1984, 195).
Oral Feedback and Electronic Feedback

Oral feedback on EFL students' writing usually takes the form of tutoring and teacher-student face-to-face conferences. However, less research has been conducted on the effectiveness of oral feedback in EFL writing than on the effectiveness of written feedback (Nakamaru 2008, 1; Thonus 2004, 227). Blau and Hall (2002) suggested that tutors begin with grammatical and other local issues because non-native-English-speaking writers commonly ask them to check grammar (35). Thonus's (2004) results suggest that tutors who are native speakers of English should be more directive, acting as cultural and language informants, when they work with nonnative speakers. Continuing the comparison with studies on written feedback, it is necessary to note that the majority of studies on oral feedback in EFL
(Bitchener, Young, and Cameron 2005, 191; Truscott 1999, 437) also focus on error correction rather than on the content of writing. As with written feedback in EFL, there were debates about whether to provide oral corrective feedback, starting with Truscott (1999) in his study "What's wrong with oral grammar correction" (437). Truscott (1999) stated that oral correction poses overwhelming problems for teachers and students, that research evidence suggests it is not effective, and that no good reasons have been offered for continuing the practice (448). Finally, Truscott (1999) suggested that oral corrective feedback should be abandoned, as in the case of written feedback discussed above (453). However, other studies on the effectiveness of oral feedback in EFL found positive results (Bitchener, Young, and Cameron 2005, 201; Lynch and Maclean 2003, 26). For example, Bitchener, Young, and Cameron (2005) examined the effectiveness of different types of feedback for ESL learners (191). They found that direct oral feedback in combination with direct written feedback enabled ESL students to improve their accuracy in writing more than direct written feedback alone. As the role of technology in society grew, more studies on feedback in EFL shifted their focus to the effectiveness of using technology to provide written and oral feedback. Tuzi (2004) defined electronic feedback (e-feedback) as a new form of feedback in digital, written form, transmitted via the web, that "transfers the concepts of oral response into the electronic arena" (217). In a study of the impact of e-feedback on the revisions of ESL writers in an academic writing course, Tuzi (2004) summarized the basic differences between oral, written, and electronic response (219, Table 2-4). In addition to the differences summarized by Tuzi (2004), Hyland and Hyland (2006) found that one of the major advantages of electronic feedback is that comments are automatically stored for later retrieval, allowing instructors to print out transcripts for in-class discussion (93). However, Sauro (2009) found that neither of the two types of computer-mediated corrective feedback examined was effective immediately or over time, although both supported gains in knowledge of the target form in familiar contexts (113). It is thus still not clear whether written or oral feedback can be an effective tool for EFL students. Educators in the field of EFL have continued to look for other productive means of providing effective feedback as an alternative to written and oral feedback. One reason to look for a more effective way to provide feedback in EFL is to save teachers' time while still delivering effective and personal feedback. Patrie (1989) pointed out that written feedback on a student's paper is not
only time-consuming but also less personal and more distant than direct contact (87). While oral writing conferences are more personal than written feedback, oral feedback can also cause problems; for example, EFL students are usually not able to take complete notes on suggested modifications during writing conferences, which can lead to inconsistency in the subsequent improvement of their writing (Patrie 1989, 87). One technique, audio feedback, has demonstrated that it can combine the advantages of both written and oral feedback and impact EFL students' learning.

Table 2-4 General differences between oral, written and e-feedback (Tuzi 2004, 219)

Oral feedback: face-to-face; oral; time dependent; pressure to quickly respond; place dependent; nonverbal components; more personally intrusive; oral/cultural barriers; greater sense of involvement; negotiation of meaning; less delivery effort; N/A (cut & paste).

Written feedback: face-to-face/distant; written; time dependence depends; pressure to respond by next class; place dependence depends; no nonverbal components; personal intrusiveness depends; written/cultural barriers; greater sense of involvement; negotiation of meaning; greater delivery effort; no cut & paste.

E-feedback: more distant; written; time independent; no pressure to respond quickly; place independent; no nonverbal components; more personally distant; written/cultural barriers; greater sense of anonymity; less negotiation of meaning; less delivery effort; cut & paste.
Audio Feedback

Audio feedback has been viewed as one of the most effective techniques for providing successful feedback to students because it may encourage teachers to deliver more feedback on content than written feedback does. Audio feedback on writing has been defined as the instructor's tape-recorded comments, including suggested changes to students' written drafts. The first research on so-called tape-recorded
comments described the use of audio feedback in first language writing in different fields; these studies described why and how teachers used audio feedback and why they recommended the approach. The empirical studies and practical reports from teachers about using audio feedback across fields in traditional writing classes have examined the effectiveness of the technique in providing students with in-depth evaluations of their papers in comparison to traditional marginal written comments (Berner, Boswell, and Kahan 1996; Carson and McTasney 1978; I. Clark 1985; T. Clark 1981; Cryer and Kaikumba 1987; Harris 1970; Hays 1978; A.J. Hunt 1989; R.A. Hunt 1975; Hurst 1975; Kahrs 1974; Keller 1999; Klammer 1973; Klose 1999; Lumsden 1962; McGrew 1969; Mellen and Sommers 2003; Miller 1973; Moxley 1989; Pearce and Ackley 1995; Petite 1983; Rubens 1982; Sipple 2007; Sommers 2002; Stratton 1975; Straub and Lunsford 1995; Takemoto 1987; Tanner 1964; Vogler 1971; Yarbro and Angevine 1982; Zak 1990). Tanner (1964) found many disadvantages in using dictating machines for grading students' writing, namely cost, time, and lack of funds (362). Nevertheless, Tanner (1964) found the method effective for papers that needed a more complex response (363). Harris (1970) implemented the use of the tape recorder for conventional grading of students' writing at the University of Texas in 1961 and later at Brigham Young University (1). Interestingly, Harris stated in his review that Cohen had tried using a voice writer nearly twenty years earlier, and that Lumsden (1962) had used recorded comments in combination with transcribed written notes (223). Further, Harris stated that Reeves (1963) reported using a tape recorder with critiques for all students on the same tape, which might have been traumatic for students because everyone heard what the teacher was saying about everyone's paper (1). Harris (1970) also discussed Kallsen's project at Stephen F. Austin College, sponsored by HEW's Office of Education, which did not confirm previous positive conclusions on the use of recording devices because the device was not well adapted for the project (1). Harris (1970) used audio feedback by making check marks in the margin and underlining words or phrases that he wanted to talk about (2). At the same time, Harris routinely marked spelling and other simple mechanical errors (2). After finishing a paper, Harris dictated his overall reaction to the paper and comments on the specific problems of the text that related to larger elements than mechanical errors (2). Harris (1970) stated that, despite problems such as cost, the availability of facilities for listening, the time commitment, technical problems such as failed recordings, and the teacher's adaptation to the machine, the technique allowed the teacher to comment far more fully on diction, clarity, organization, and coordination
of graphic aids; to explain why something is wrong and what connotations are associated with a word; and then to suggest several alternatives (3). Harris (1970) also found that the technique could achieve very personal contact with the student because the teacher could use tone of voice to make clear what was wrong and what the problem was (3). Other conceptual studies in the 1970s and 1980s also examined the time commitment for audio feedback compared with written feedback during paper evaluation with dictation machines (I. Clark 1985; T. Clark 1981; Hays 1978; A. Hunt 1989; R. Hunt 1975; Hurst 1975; Kahrs 1974; Klammer 1973; Logan et al. 1976; Miller 1973; Moxley 1989; Petite 1983; Rubens 1982; Sommers 1989; Stratton 1975; Vogler 1971). These practical studies for teachers found that writing teachers could save half of their marking time by using dictation machines to grade students' papers. For example, Vogler (1971) graded students' writing using a cassette tape and found that cassette evaluation provided a more efficient use of grading time than marginal comments (72). Vogler (1971) simply mentioned the number of the line where students could listen to the recorded critiques, for example: "Karen, I'd like to talk to you a little bit here about your theme entitled…. You have some strong and angry-sounding images such as violent wind in line two and battled in line two and frantically in line three, and all of these images do a great deal to create a fine piece of writing. One thing I would like to point out, Karen, is I think you overdid a little the -ing sound in lines six and seven" (73). More serious problems were circled and underlined in the text in red ink along with the recorded critiques. Cassette evaluation required no more time than traditional marginal comments, and the researcher found the process more enjoyable than writing marginal comments. Furthermore, examining the time commitment for audio feedback, Klammer (1973) found that the instructor can say much more and can say it more clearly. The researcher argued that a person can speak at least five times faster than he can write, which means that he can convey more information in the same amount of time. This, in turn, may mean that the student no longer feels the need to corner his teacher for so many personal explanations of trivial matters (Klammer 1973, 179). In turn, Kahrs (1974) found that using a cassette tape recorder was in reality time consuming for the teacher, but the technique was considered a highly stimulating way to facilitate change because it could increase individual and personal contact between learner and teacher (161). Logan et al. (1976) investigated the use of audiotape cassette feedback to convey the maximum of information with a minimum of
instructor effort in order to improve student performance (38). In this study, the students in a dental anatomy laboratory at the University of Iowa received traditional or audiotape cassette feedback following six practical exams. The researchers found that students' exam scores increased after audiotape cassette feedback was used; however, there was no clear indication of whether overall student performance improved. Likewise, Hays (1978) used tape cassettes to evaluate student compositions, giving each student detailed feedback about the composition, including explanations of outright mistakes and of syntactical, stylistic, and rhetorical weaknesses, together with suggestions for remedying these trouble spots (4). Hays (1978) found that taped evaluations increased students' learning, as manifested in their improved writing compared to levels achieved in the previous term (6). Interestingly, in two studies following that of Klammer (1973), Hunt (1975) and Moxley (1989), who had anticipated saving time while grading students' papers, discovered other advantages of this technological device in the classroom. Hunt (1975) argued that, although the method did not save time, it allowed for extensive instructional comments, for a new level of communication between student and teacher, for a real audience for the student's paper, and for a changed atmosphere in the relationship between writer and reader (583). Moreover, Moxley (1989) found taped evaluation of students' papers a powerful and effective way to respond to student writing, achieving the goal of delivering meaningful feedback in the same amount of time (8). Following Klammer's remarks, Moxley (1989) found that taped comments could broaden learning because teachers could use time more effectively (9). The empirical studies on audio feedback in the 1970s and 1980s focused on how audio feedback could provide teachers with opportunities for individualized, effective, and understandable instruction to improve writing skills among native speakers of English in composition classes in different fields and educational institutions (Carson and McTasney 1978; Coleman 1972; Cryer and Kaikumba 1987; McGrew 1969; Moore 1977; Yarbro and Angevine 1982). McGrew (1969), in the first empirical study, conducted in the 9th, 10th, 11th, and 12th grades at Lincoln East High School, found advantages in the use of dictating machines (Dictaphones) for the evaluation of student papers; the technique could increase the individualization of instruction and allow continuous evaluation (1). McGrew (1969) conducted an experiment in which the control groups were evaluated by marginal comments on their papers and the experimental groups were evaluated with the dictating machines (7). Both groups had a pre-test in which they wrote a three-page paper in class. The pre-test papers were classified into three ability groups as "red-above average,"
"white-average," and "blue-below average" to compare improvement factors (1 point for no visible improvement, 3 points for some noticeable improvement, and 5 points for considerable improvement) between the experimental and control groups as well as for the classes as a whole. Students then wrote nine equivalent papers, with every third paper done in class. The ninth paper constituted the comparison or post-paper to be compared for improvement with the first paper; it was written on a topic similar to that of the first pre-paper. Feedback was provided following the four categories of Dr. Rice from the Nebraska Curriculum Development Center, which included content (defined as logic, coherency, material covered, and degree of understanding), mechanics (defined as punctuation, capitalization, spelling, and paragraphing), diction (defined as word choice and originality), and expression (defined as awkward sentences, sentence structure, wordiness, tone, attitude, perspective, and redundancy). Audio feedback was provided by a teacher who spoke observations or corrections into the Dictaphone, such as "line three – you need another n in connected" or "line twelve – good introductory sentence" or "John, your use of alliteration is excellent, but you use it too often. You need to get right to the point in this paragraph. It tells the reader what your story is all about" (3). The researcher found that, although the differences between the two groups were slight, the experimental groups showed more improvement than the control groups on 19 out of 25 comparisons. The study suggested that the experimental procedure had advantages for improving compositions (13). However, McGrew (1969) used a convenience sampling method, which might limit the generalizability of the research findings (7). To find the most effective evaluative technique, Coleman (1972) conducted research on the relationship between the mode of teacher commentary (cassette commentary versus marginal-interlinear-terminal commentary) and overall student achievement in composition writing (3). In addition, the study examined students' perceptions of the comments. The researcher conducted a pretest-interim-posttest quasi-experiment over nine weeks of the second semester of 1971-1972. The study was conducted in four intact ninth-grade English classes (n=101), with a final sample of 73 students who wrote all the essays, at two secondary schools in western Pennsylvania. The experimental groups received taped commentary on their essays while the control groups received traditional marginal-interlinear-terminal commentary. Both experimental and control groups wrote the pretest, an impromptu essay, during the first week of February. The groups then wrote the interim test, another impromptu essay, during the fourth or fifth week toward the
middle of the experiment. The posttest, another impromptu essay, was written during the last week of March. The teachers who volunteered to participate in the experiment had the freedom to choose their own themes. The researcher recorded the audio feedback for the experimental groups based on the teachers' written comments, because recording put heavy demands on teacher time. At the end of the experiment, all essays were randomly assigned to three raters whose scores were summed to produce a single score for each essay. Interrater reliability was established at .86. The raters graded the essays with the Diederich scale, which covers the first impression of general merit; richness and soundness of ideas; development and support of ideas; relevance of ideas to topic and purpose; form (organization and analysis); style, interest, and sincerity; choice and arrangement of words; grammar and usage; punctuation, capitalization, and so on; and spelling. The audio feedback was provided with no markings on the papers, and students listened to the commentary through individual earplugs. In addition, two students were selected for closer evaluation. Even though there was no statistically significant result at the first school, which enrolled white middle-class youth, the experimental group there performed better than the control group. At the second school, with 99% black students, there was a significant difference in the direction predicted by the researcher: the experimental group progressed across all categories of evaluation. The difference between the two schools was explained by the idea that the child at the greatest distance from the maturation ceiling had more room to grow than the child whose present performance was closer to the maturation ceiling (99). The researcher acknowledged that, because she did the taping herself, some results may have been contaminated, and she recommended asking teachers to provide both types of feedback in order to obtain cleaner data. Similarly, in other experimental research, Carson and McTasney (1978) undertook a pilot cassette-grading program in a technical writing course at the United States Air Force Academy, which was in need of a better grading method (109). The researchers found that cassette grading allowed time to explain, without exceeding the limits of allotted time, so that students might learn to write. Low cost, privacy through the use of a portable earphone, and the ready availability of playback equipment were additional advantages of cassette grading. Carson and McTasney (1978) found that, even though cassette grading did not save instructor time, it did enable the instructor to convey three or four times as much information to the student in a much more personally coherent manner (114). Carson and
McTasney (1978) found that the use of cassettes in grading gave more complete, detailed explanations in less time; let students listen as they looked at their mistakes; and used inflection and volume to convey the instructor's exact meaning (118). Later, Moore (1977) conducted an experiment using audio cassette tapes to provide individualized feedback at Purdue University (3). The classes involved were two 200-level Introduction to Teaching courses with 50 students and two 400-level teaching methods classes with 31 students. The aim was to find an alternative method that required less of the professor's time yet provided in-depth feedback for students' improvement. Students within each class were randomly assigned to one of two treatment groups at the start of the semester: audio-recorded feedback on assignments or written feedback on assignments. The experiment found evidence that individualized feedback via audio cassette tapes was a viable procedure. It saved faculty time, and students had a more positive attitude toward tape-recorded feedback than toward written feedback. There was no statistical difference between the two methods in improving students' performance, but there was a significant difference between the first and last scores, indicating that feedback improved students' performance. Thus, understanding and conveying meaning became another thread in the investigation of audio feedback effectiveness. Likewise, Clark (1981) was concerned with students' understanding of written comments and with the time needed for meaningful comments on written reports in a Business Communication class at the University of North Carolina at Charlotte (40). The researcher found that the cassette tape recorder could become a mechanical friend that helped grade papers with more completeness and more compassion, because the technique allowed tone of voice to convey a great deal of information, adding a human dimension to evaluations (41). Concerns about misunderstanding between student-writers and teacher-responders led Sommers (1989) to seek an alternative method for providing feedback on students' writing (49). The study found that tape-recorded comments encouraged individualized instruction. The technique was more understandable to students because audio feedback allowed instructors to make comments more clearly and in more detail. Later, Sommers (2002), in another description of using spoken response to students' writing, found that tape-recorded comments are more time-efficient than written comments, allowing teachers to expand their responses and thus more easily offer movies-of-the-mind (174). Later still, Sommers, as a teacher, conducted additional research into what students thought about tape-recorded response in college composition
courses. Sommers conducted this research with Mellen, a student, in 2003; they found that: 1) students' confidence increased; 2) feedback encouraged revising, and the tape enhanced meaning; 3) it was more personal; 4) students had time to consider remarks; 5) it was convenient for students; and 6) feedback established the professor's credibility (Mellen and Sommers 2003, 25-37). Yarbro and Angevine (1982) conducted an experiment with 38 students in the control group and 32 students in the experimental group. The experimental group received feedback on cassette tapes, while feedback in the control group took the form of traditional marginal comments (394). All students were given a pretest at the beginning of the class and a posttest at the end, with 45-minute writing spans on the same topic choices. Three evaluators graded the pretests and posttests. The analysis of the data revealed that the posttest scores were lower than the pretest scores, and the researchers had no explanation for the discrepancy between evaluators. Yarbro and Angevine (1982) found that the cassette tape grading method took more time but could increase students' interest in the class (396). However, the research did not indicate any significant difference between the methods of instruction in relation to student performance. Interestingly, Clark (1985) considered that audiotapes could be useful for basic writing students as a supplement to tutorial or classroom instruction (120). Clark (1985) found that listening to audiotapes while reading enabled students to process information through two language channels (auditory and visual) simultaneously (121). In addition, Clark (1985) found that the combination of reading aloud and listening not only facilitated editing but also helped students develop their own voices and better comprehend texts (121). Likewise, Cryer and Kaikumba (1987), looking for a more effective method of giving feedback on written work in less time, found that audio feedback could save time, allow the teacher to avoid the stress of constructing written feedback, and let teachers present feedback in a more informed and helpful manner, for example, using tone of voice to motivate, to soften criticism, and to give encouragement (150). Later, Hunt (1989) also found taped comments on students' writing an effective method compared to the drawbacks of the traditional written method; taped comments worked well for both students and teachers because they could be focused on the important features of responding to writing (273). Pearce and Ackley (1995), in a four-year exploratory study, did not find evidence that audio feedback could save teachers' time, but they found it could lead to improved feedback, context, and performance, and could motivate student
writers (31). Berner, Boswell, and Kahan (1996) conducted qualitative research on how students felt about receiving feedback on tape and whether they agreed with teachers on the benefits and possible disadvantages of this feedback (339). The participants were undergraduate students enrolled in an Effective Written Communication course in Management, Education, Social Work, Electrical Engineering, and Mechanical Engineering at McGill University in Canada. The study's findings revealed that audio feedback was more detailed and more specific than written comments; it was more spontaneous and more honest; and it provided immediate reaction (Berner, Boswell, and Kahan 1996, 352). The study found that students preferred conferences because they are face to face, while instructors thought that during conferences students usually interrupted them to defend their writing, which did not save time. Berner, Boswell, and Kahan (1996) recommended this approach not for all students but for those who were interested in improving and who put the same effort into writing as instructors put into giving feedback (352). Finally, it is important to note Berner, Boswell, and Kahan's concerns about taped comments for EFL students: EFL students require different instructional feedback strategies than native speakers do, and audio feedback has the potential to provide such feedback for them (352). The researchers made the valuable observation that audio feedback for EFL students should be used differently and should provide effective help for their language difficulties.
Audio Feedback in EFL/ESL

Overall, studies on using audio feedback in traditional EFL classrooms revealed that audio feedback gave EFL teachers an opportunity to provide content critique and that the technique allows the teacher to offer more comprehensive and clearer explanations about the function of the text in its social context, the relationship it crystallizes between writer and audience, the effectiveness of its thematic development, and its overall impact on the reader (Boswood and Dwyer 1995; Farnsworth 1974). Providing feedback on content in ESL has been an important issue and has been widely discussed in the literature. Following the arguments for providing instructional feedback on content in ESL, Syncox (2003) found that audio feedback can build the link between the revision of a draft, the perception of instructor feedback, and the intended meaning of the writing for students and instructors (75). The researcher found that audio feedback allows the instructor to expand on the problem of
understanding meaning from a variety of angles in the form of models and prompts. Given that audio feedback, with its capacity for content critique, can change the way feedback is provided in EFL, it is worth recalling that Hyland (1990) had already stated that EFL teachers need to provide constructive feedback on EFL students' work (279). Hyland noted that EFL students need to see writing as a means of learning (1990, 279). But in an effort to provide more constructive feedback, EFL teachers usually have to supply a great amount of input, because EFL students have been shown to need corrective feedback both on language accuracy and on the content of their writing (Patrie 1989, 87). It is believed that audio feedback can solve this problem because the technique is able to provide not only the correction of specific language errors but also assistance with the correction and improvement of content-related problems, the organization of students' papers, their use of appropriate style in choice of words and phrasing, and their clarity and coherence (Boswood and Dwyer 1995, 54; Farnsworth 1974, 288). EFL students' understanding of recorded comments was one of the issues that studies investigated in the 1970s. One of the first studies on audio feedback for EFL students explored the advantages of using audio feedback in the correction of ESL compositions at the intermediate to advanced levels (Farnsworth 1974). The researcher received positive feedback from students after the experiment. Even though the researcher had been doubtful about students' ability to understand the recorded voice, all students in the study preferred audio feedback in addition to written marks in the margins. Likewise, Boswood and Dwyer found that EFL students did not have difficulty understanding audio feedback (1995, 54). The researchers argued that positive impacts were visible in the rewrites of drafts and in the students' self-reporting; audiotaped feedback allowed teachers to offer more detailed responses, and this alone increased the integration of classroom learning with student practice. Similarly, Johanson found that audio feedback offered the opportunity to demonstrate the processes employed in negotiating the meaning of students' texts (1999, 34). However, other studies found that EFL students prefer written feedback and consistently rate it more highly than peer feedback or oral feedback in EFL writing conferences with teachers (Saito 1994, 65; Zhang 1995, 217). At the same time, Hyland and Hyland found that EFL students like to receive written feedback in combination with oral feedback (2006, 87).
In contrast to studies on audio feedback in first language writing, studies on the effectiveness of audio feedback for EFL students have shown that audio feedback can be an effective instructional and tutorial tool for better student comprehension of grammar, punctuation, and rhetoric (Olsen 1982, 122-23). Similarly, Hyland (1990) found that taped comments helped reinforce the written assignment with an authentic listening exercise, which provided an extra motivating factor (284). Later studies found that audio feedback could be an effective instructional tool for developing students' listening and speaking skills (Boswood and Dwyer 1995, 53; Johanson 1999, 33; Hsu, Wang, and Comac 2008, 192). These studies argued that using audio feedback in EFL writing puts the act of listening under students' control, allowing students to listen at their own rate and repeatedly (Boswood and Dwyer 1995, 53). Audio feedback enhanced the quality of responses because, as Johanson noticed, the majority of EFL students had been taught English in their home countries and tended to view writing as a grammatical exercise rather than a process of constructing meaning (1999, 32). These EFL students believed that the content and organization of their essays were subordinate to sentence-level grammatical accuracy; they tended to downplay comments on how to develop their ideas in favor of grammatical issues such as proper subject-verb agreement. As in studies of audio feedback in first language writing, researchers studying audio feedback for EFL students have also compared the time commitment for audio feedback and for written feedback. Hyland (1990) stated that EFL teachers found that the technique saved time because voicing comments was quicker than writing them (284). Johanson also found that this method saved both teachers' and students' time (1999, 37). However, other studies found that audio feedback requires more involvement and effort from EFL teachers, although it lets them provide feedback with more detail, variety, and sophistication than written feedback can offer (Boswood and Dwyer 1995, 55). In addition, audio feedback cannot become an effective instructional tool by itself; it is more effective when employed with peer editing and teacher-student conferences, and, more importantly, it can be used at all levels of EFL composition (Johanson 1999, 38). Patrie (1989) argued that audio feedback could change the nature of feedback because it differed in the quantity and quality of the teacher's commentary delivered in the same amount of time (88). The researcher revealed that audio feedback encourages qualitative changes in the content and focus of teachers' comments; it allows more feedback to be given because
we talk much faster than we write. Likewise, Hyland (1990) described taped commentary as productive feedback for EFL situations with intermediate and advanced students (285). The study found that the technique was useful in encouraging students to respond to feedback; it allowed more detailed, natural, and informative remarks while increasing teacher-student rapport. Similarly, Johanson (1999) found that audio feedback was an effective alternative to traditional written comments (31). Johanson (1999) found that the technique let instructors speak their comments instead of scribbling remarks in the margins; it allowed instructors to provide students with a holistic impression from a writing coach (33). Price and Holman (1996) conducted an experiment with Hispanic students at a large Southwestern university (4). The researchers found that taped comments allowed students the privacy to revise in a relaxed atmosphere at their own speed, which was especially helpful to second language writers. Price and Holman (1996) also found that Spanish-speaking bilingual students were less likely to consider themselves good writers at the beginning of the semester, having struggled with the basics of a second language and suffered from prior negative feedback from teachers; the experimental taped feedback was considered even more of a success with second language writers than with native writers (6-7). Huang (2000) conducted a quantitative analysis of audiotaped feedback for college sophomore English majors at a university in Taiwan (199). The study compared the effectiveness of audio-taped feedback with traditional written feedback. Huang (2000) found that audio-taped feedback combined with text was much more effective than written comments alone in terms of the quantity of feedback, and students favored the audiotaped feedback (199). Huang also found that teachers saved time with audio-taped feedback in the sense that they discussed writing problems more thoroughly and provided more detail than when using only written feedback (2000, 209). The findings supported previous research on the effectiveness of audiotaped feedback compared to written feedback: 1) the feedback was more helpful for understanding writing; 2) it could be listened to many times; 3) it motivated students to understand writing problems and revise; 4) it allowed listening to comments and revising simultaneously; and 5) it encouraged attentive listening to the comments. Thus, given the research findings that audio feedback in the traditional EFL writing class can be viewed as an effective instructional technique that impacts students' learning experience, it may be assumed that audio feedback can also satisfy online students' needs by providing support from faculty and
by fostering improved learning performance. The next section describes the use of audio feedback in online environments, specifically in asynchronous online courses.
Audio Feedback in Distance Education

Audio feedback provided online is a technique in which instructors record comments on students' assignments that students can listen to as they read along with the instructional comments in the text (Ice 2008). The central component of the effect of audio feedback provided online is the asynchronous aspect of the comments: students may listen to previously recorded audio while reading the text it refers to (Ice 2008). Audio comments are usually inserted or embedded in documents such as Microsoft Word or Adobe Acrobat Professional files. The present study employed embedded asynchronous audio feedback for EFL students in asynchronous online discussions (Ice et al. 2007, 8). One of the first studies on using audio feedback in online environments was conducted in the 1980s by Kelly and Ryan (1983). Later, Kirschner, van den Brink, and Meester (1991) conducted an experiment using audio feedback for essay writing in distance education. The study was conducted at the Open University of the Netherlands in a university-level course in photochemistry. Even though the researchers did not find significant differences in the amount of time spent preparing the audio or in the students' final grades, they recommended examining whether the increase in essay quality reported by other researchers also occurred in a distance education setting. In addition, other studies on audio feedback found that the instructional tone and style of communication provided online through computerized technologies could give students considerable help, and the students rated such a teacher more highly (Anson 1997, 106). Furthermore, Jelfs and Whitelock (2000) conducted a qualitative study on the notion of presence when using audio feedback in virtual environments (145). Through interviews, the researchers found that audio feedback was perceived to be one of the most important features engendering a sense of presence. The researchers reported positive impressions of the use of audio feedback; students found that audio feedback made navigation in virtual environments easier. Likewise, Sipple (2007) conducted another qualitative study to determine students' attitudes toward audio and written commentary in developmental writing classes (22). The researcher examined how audio-recorded instructor commentary in writing classes
delivered via e-mail could provide a more effective method for students who needed individualized instruction. The results showed that students preferred audio feedback. Students perceived a positive impact of audio feedback on their motivation, self-confidence, revision practices, student/professor bond, and overall learning in ways that written commentary did not achieve. Additionally, Hsu, Wang, and Comac (2008) conducted a field experiment to explore how the use of audio feedback improves second language performance (181). The results indicated that audio feedback met instructional needs by permitting individualized feedback that improved second language performance. The Joint Information Systems Committee (JISC), funded by the Higher Education Authority in Great Britain, has conducted a project under the JISC e-Learning Programme to explore ways of improving feedback for students through the use of digital audio and screen-visual feedback technology. The project aims to use emerging technologies to improve the feedback and feed-forward interaction between tutor and students, to refine understanding of the impact of technology-enhanced feedback methods on staff and students, to encourage academics to respond to key factors in effective feedback, to test specific research methodologies, and to provide a collection of resources and items for dissemination. The publication Effective Practice in a Digital Age (2009) states that approximately 90% of students responded positively to receiving audio-recorded feedback. Students found audio feedback more personal and relevant to their needs, and audio feedback can be helpful for EFL students and students with special needs. Studies conducted under the JISC e-Learning Programme have examined different delivery methods of audio feedback in different contexts, such as types of students and types of educational institutions. One of them is the qualitative study conducted by Orsmond, Merry, and Reiling (2005). The study found that audio feedback helped enhance student motivation, learning, reflection, and clarification because it made it easier for students to understand how to improve their work, it had more depth and detail, and it was more personal than writing (369). In this study, students liked being able to pause, rewind, and replay recorded comments, making notes as they listened. In other qualitative research, Merry and Orsmond (2007) examined the effectiveness and feasibility of providing feedback on academic work to students using mp3 audio files (100). The study found that, even though providing the feedback was time consuming, audio feedback could become particularly influential in students' learning because it is detailed, prompt, and understandable. In this study, students found audio feedback more detailed
than written comments, and tutors found themselves naturally providing examples in their audio feedback of how the work might be changed. Furthermore, Rotheram (2007) analyzed the effectiveness of using an mp3 recorder to give feedback on student assignments at Leeds Metropolitan University (7). Rotheram (2007) stated that audio feedback could powerfully influence student learning because the feedback was timely, perceived as relevant and meaningful, and suggested ways of improvement (7). Contrary to Merry and Orsmond (2007), the researcher found that audio feedback reduced the time spent and increased the amount of feedback given (8). Nortcliffe and Middleton (2008) investigated whether audio feedback delivered through iPods and phones supported the learning of the iPod generation (45). Nortcliffe and Middleton (2008) also investigated ways of integrating physical and virtual learning spaces, in order to offer a richer, more meaningful, and more formative learning experience, when they implemented Blackboard for a blended learning environment (45). The researchers compared summative assessment results for recorded audio feedback on formative and summative assignments with those for formative and summative feedback in aural and/or written form. They found that audio feedback through the iPod may significantly impact students' academic performance; students found audio feedback helpful in clarifying how they could convert the feedback into actions toward improving their writing submissions. Other studies conducted under the JISC e-Learning Programme investigated the use of podcasting to provide audio feedback. For example, France and Wheeler (2007) examined 26 students' perceptions of and attitudes toward podcast assignment feedback, using data from a pre/post questionnaire survey and a focus group discussion in a Climate Change and Natural Hazard Management course (9). The results demonstrated that podcasting improved the student learning experience at the University of Chester. The rationale for the study was to provide effective feedback as a vital component of students' ongoing learning. The researchers found that podcast feedback provided an opportunity to support a range of learner styles and to increase student engagement through reflection. Furthermore, Roberts (2008) examined the perceived effectiveness of podcast feedback for a small group of eight undergraduate students at Liverpool John Moores University (1). Roberts (2008) found that students perceived that podcast feedback sped up the learning process because feedback was delivered directly to the student via email and had a personal touch and clarity (3). Likewise, Hill (2008) examined the use of podcasts in the delivery of feedback to dissertation students at the
University of Gloucestershire (1). The researcher found that providing feedback via podcasts proved useful during the early stage of the dissertation process, when large amounts of valuable feedback needed to be transmitted to the student. Another study, by Micklewright (2008), provided podcast feedback on written assignments to a class of 68 final-year sports science undergraduates (under “Methods”). The researcher found that audio feedback was helpful for the students because it provided more information with better quality. Finally, Rodway-Dyer, Dunne, and Newcombe (2009) conducted survey research on using audio feedback for first-year geography undergraduates’ written assignments (61). The survey results showed that the majority of students found audio feedback more detailed, easier to follow, and clearer. However, unlike previous studies’ findings, this study found that some students perceived the tone of voice as a negative experience. In an effort to better understand the effectiveness of audio feedback in the online environment, studies (Cuthrell, Fogarty, and Anderson 2009; Ice et al. 2007; Oomen-Early et al. 2008) examined the effects of audio feedback on course design, teacher presence, and students’ sense of community in asynchronous online environments. Ice et al. (2007), in a mixed methods case study, examined the effect of asynchronous audio feedback on graduate students’ sense of community and personalized communication with their instructors when they were enrolled in an advanced graduate course delivered asynchronously online (3). Extensive qualitative data, in the form of written responses to open-ended survey and questionnaire items, students’ emails, semi-structured interviews, and the final project document, were collected to examine two issues: students’ perceptions of using audio feedback and the types of strategies for audio or text feedback. Quantitative data, in the form of responses to Likert-scale questions and quantified course artifacts (audio and text feedback documents), were nested and collected to examine two issues: to support the qualitative findings about students’ perceptions of using audio feedback and to compare time requirements and quantity of feedback for audio versus text. The researchers found students preferred asynchronous audio feedback to traditional text-based feedback because it helped them understand nuances. Audio feedback affected students’ perceptions of the instructor’s use of humor and openness toward and encouragement of discussion. The study also revealed that asynchronous audio feedback increased feelings of involvement and of instructor concern for the students. Students retained information from audio feedback better than from text-based feedback. Ice et al. (2007) reported that more than 70% of students receiving audio feedback applied
content in a more cognitively complex way at the highest levels of Bloom’s Taxonomy (analysis, synthesis, evaluation) (19). Oomen-Early et al. (2008) conducted survey research to examine the perceived effectiveness of asynchronous audio feedback among 156 graduate and undergraduate students enrolled in an online course (267). The researchers found that asynchronous audio feedback can enhance instructor presence, student engagement, content knowledge, and overall course satisfaction. The students in this study preferred receiving both audio and text-based feedback rather than audio feedback by itself. Likewise, Cuthrell, Fogarty, and Anderson (2009) found that both graduate and undergraduate students noted the clarity and personalization of audio feedback; the technique provided a greater degree of detail than written comments; and the students enjoyed hearing the voice of their professor (under “Emerging Themes”). Following the studies of Ice et al. (2007) and Oomen-Early et al. (2008), Ice (2008) conducted a multi-institutional study, administering an audio feedback survey to 1,138 students at 15 institutions to confirm the previous studies’ findings. The survey indicated that students preferred audio over text feedback in relation to clarity, motivation, retention, presence, and the level of care provided by the instructor. However, the researchers noted that the technique may not be effective if the instructor is not a native speaker of English. Ice et al. (2010), conducting quasi-experimental research with graduate students at three U.S. universities, examined students’ preferences for text-based, audio-based, and combined text- and audio-based feedback (113). The researchers found that the combination of written and audio feedback was perceived to be the most effective type of feedback. However, written feedback was the most effective at the level of word choice, grammar, punctuation, spelling, and technical style. The study findings showed that a small amount of audio and a large amount of written feedback is most effective at this level; for example, minor errors such as spelling or punctuation can be marked up with one holistic comment regarding the general class of errors (Ice et al. 2010, 126). Finally, for future research, a larger population of EFL students needs to be studied utilizing combined written and audio feedback to examine the value of written comments for EFL learners. Olesova et al. (2011a) examined the effectiveness of audio feedback for EFL and ESL students when audio feedback was provided by an NNEST (30). The study found that audio feedback can be effective in online environments for increasing students’ engagement and understanding of the instructor’s intent because of the availability of tone
and intonation. The students perceived audio feedback as personal and enjoyable; it helped increase their interest and made them feel the instructor’s care. However, EFL and ESL students differed in their perceptions of whether inflection in the instructor’s voice made her intent clear when providing audio feedback. The students also differed in their perceptions of some items of teaching presence and cognitive presence as measured by the Community of Inquiry Survey developed by Arbaugh, Cleveland-Innes, Diaz, Garrison, Ice, Richardson, and Swan (2008). Thus, it is still not clear whether audio feedback can be an effective technique for EFL students enrolled in asynchronous online courses, or whether it can impact their higher-order learning. This remains an unexplored area, open to experimentation, because limited empirical research has been done regarding second language status and the use of audio feedback (Ice et al. 2010, 127).
Summary

To summarize, taking into consideration that instructional feedback may influence overall student learning and that students need constructive feedback on their performance to improve their future work, further research is needed to examine whether feedback is effective within constructivist learning environments, specifically in asynchronous online environments. The literature revealed that feedback in asynchronous online environments supports students’ online success. However, not understanding instructional feedback can cause students to form negative perceptions of their online learning. This problem is more severe for students whose native language is not English: EFL students may have difficulty understanding their instructor or peers who are native speakers of English. Research revealed that EFL students can benefit from participating in asynchronous online discussions; they can express their views and critical reflection because asynchronous online communication allows them more time to think and respond. Nevertheless, EFL students’ second language barriers and the lack of verbal cues in asynchronous online environments might restrict their contributions to online discussions, because asynchronous online courses require written communication in a target language. To provide successful feedback for EFL students, it is necessary to understand the nature of feedback in second language learning. Research has shown that feedback in second language learning can be provided for different purposes: 1) to correct language accuracy and 2) to provide feedback on content. Feedback can be delivered in written, oral,
and electronic forms for EFL students. All three forms can be used for error correction and content improvement. Written feedback has been viewed as less personal, while oral feedback has been perceived as ineffective for EFL students because the students cannot keep notes during oral conferences. Researchers and practitioners have therefore looked for more effective ways to provide feedback on EFL students’ writing. One technique, audio feedback, has been viewed as the most effective way to combine the written and oral modes and provide successful instructional feedback for EFL students, on both error correction and content improvement. The literature reviewed covered audio feedback in traditional and online environments. The majority of studies have shown the effectiveness of the technique for EFL students. However, little is known about whether audio feedback can be effective for EFL students when they participate in asynchronous online discussions, or whether they are able to understand instructional comments delivered through asynchronous embedded audio feedback. The literature provides limited empirical evidence that constructive audio feedback can impact EFL students’ higher-order learning. In addition, the literature has shown the effectiveness of the technique when provided by NESTs in both traditional and online writing environments and by NNESTs in traditional writing classes, but limited research has examined audio feedback provided by NNESTs in asynchronous online environments. Furthermore, no studies have investigated whether asynchronous embedded audio feedback can impact second language learners’ higher-order learning when audio feedback is provided by NNESTs versus NESTs.
CHAPTER THREE

METHODS
Overview

The purpose of this study was to examine the effect of asynchronous embedded audio feedback on EFL students’ higher-order learning and their perceptions of audio feedback versus text-based feedback in asynchronous online discussions. This study also examined how the impact and perceptions differed by provider of feedback (NNEST versus NEST) and by level of language proficiency (high versus low). The research questions were:

RQ1: Is there a significant difference in scores on the quality of weekly discussion postings by type of feedback delivery method, instructor’s language background, and/or student’s level of language proficiency?

RQ2: Is there any interaction effect between the type of feedback delivery method, instructor’s language background, and/or student’s level of language proficiency on the scores on the quality of weekly discussion postings?

RQ3: Is there a significant difference in scores on perceptions of the type of feedback delivery method by instructor’s language background and/or student’s level of language proficiency?

RQ4: Is there any interaction effect between instructor’s language background and student’s level of language proficiency on scores on perceptions of the type of feedback delivery method?

This chapter describes the theoretical framework that guided the study, the participants and sampling method, research design, dependent and independent variables, procedure, reliability and validity of the instruments used, data analysis, and threats to validity.
Theoretical Framework

This study was guided by social constructivist pedagogical theory (Driscoll 1999, 409; Jonassen 1994, 37). Social constructivism stresses the importance of feedback in helping learners construct their own reality or knowledge; this constructed knowledge is formed through learners’ interpretation of previous experiences of the external world, mental structures, and beliefs. According to social constructivist theory, both learners and teachers learn by engaging in dialogue and by interacting verbally with others to construct meaning (Pear and Crone-Todd 2002, 221). Asynchronous online discussions have been viewed as a constructivist learning environment; learners, by engaging in discussions, construct their own solutions to problems by interacting with others and learning from them. In addition, constructivist feedback can help increase the quality of online discussions (Ertmer et al. 2007, 412), since constructivist feedback may occur in the form of discussion among learners and through comparisons of internally structured knowledge (Mory 2004, 745). Indeed, feedback from a constructivist view would be a more effective tool for knowledge construction when learning from others occurs. This study was guided by suggestions for the use and function of feedback within the philosophy of constructivism by Jonassen (1991) (Table 3-1).

Table 3-1 Assumptions of Constructivism (from Jonassen 1991, 9) and Suggested Use of Feedback (from Mory 2004, 771)

Constructivism Assumption: Reality is determined by knower
Feedback: Feedback is to guide learner toward internal reality; facilitates knowledge construction

Constructivism Assumption: Mind acts as builder of symbols
Feedback: Feedback aids learner in building symbols

Constructivism Assumption: Thought grows out of human experience
Feedback: Feedback in context of human experience

Constructivism Assumption: Meaning does not rely on correspondence to world; determined by understander
Feedback: Meaning within feedback information determined by internal understanding

Constructivism Assumption: Symbols are tools for constructing an internal reality
Feedback: Feedback provides generative, mental construction “tool kits”
Participants and Sampling Method

The target population of this study consisted of EFL students enrolled in an asynchronous online course delivered in English while remaining physically within their own country. Sixty-nine volunteers from the International Relations Program at North-Eastern Federal University in Russia served as the study sample; however, 15 students left the experiment. Based on the expectation of observing a medium effect size for a main effect (r = .3) with a Type I error rate of .05, a sample size of at least 58 was needed to achieve statistical power of .80. The EFL students were told that participation in the study was voluntary. They were told that, by participating in this study, they would benefit from a new instructional technique that might help them in learning English and in participating in future asynchronous online courses. Of the 69 participants, 21 were male and 48 were female. The participants ranged in age from 18 to 23, with a mean age of 20.5. They had been studying English for more than eleven years (including eight years in school). Table 3-2 shows the participants’ demographic data.

Table 3-2 Participants’ Demographics

Categories                        Number of Participants
Gender
  Male                            21
  Female                          48
Age
  18-19                           21
  20-21                           44
  22-23                           4
Ethnicity
  Yakut (Sakha)                   64
  Russian                         3
  Evenk, Even                     1
  Kazakh                          1
Native Languages
  Sakha and Russian               54
  Russian only                    15
Other Foreign Languages
  Chinese                         14
  French                          15
  German                          14
  Japanese                        11
  Korean                          14
  Turkish                         1
Study Abroad Experience
  Yes                             32
  No                              37
Previous Online Experience
  Online course                   4
Research Design

This quantitative study utilized a quasi-experimental design with one within-subject factor and two between-subjects factors to address the research purpose; the design is displayed in Figure 3-1. In the design, participants were assigned to one of two levels of language proficiency and to one of two instructors (NNEST or NEST), but all of them experienced both feedback delivery methods (audio feedback and text-based feedback). A repeated-measures design was chosen because it is efficient, achieving higher power with fewer participants than between-groups ANOVA designs. The design also had the advantage that participants served as their own controls in a perfectly matched experimental condition (Johnson and Christensen 2008, 320-21).
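As an aside, an a priori power calculation of this kind can be reproduced with standard power software. The sketch below, in Python with the statsmodels library, is a minimal illustration under the assumption of a simple between-groups F test (converting r = .3 to Cohen’s f); note that such a generic between-groups estimate is conservative and yields a larger n than the 58 reported above, because a repeated-measures design gains additional power from within-subject correlation.

    # Illustrative a priori power analysis (assumes a between-groups F test;
    # the study's repeated-measures design requires fewer participants).
    import math
    from statsmodels.stats.power import FTestAnovaPower

    r = 0.3                            # expected medium effect size
    f = r / math.sqrt(1 - r ** 2)      # convert r to Cohen's f (about 0.31)
    n_total = FTestAnovaPower().solve_power(effect_size=f, alpha=0.05,
                                            power=0.80, k_groups=2)
    print(round(n_total))              # conservative total-sample estimate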
Figure 3-1. An experimental design with one within-subject factor and two between-subjects factors (*TF = text-based feedback; AF = audio feedback).
Dependent Variables

The two dependent variables were (a) scores on the quality of weekly discussion postings as measured by the scoring rubric (Ertmer and Stepich 2004, under “Learning Outcomes”), and (b) participants’ perceptions of embedded audio feedback as measured through the audio feedback survey examining students’ responses to audio and text-based feedback (Ice 2008). Each dependent variable is described below in detail.
Quality of Online Posting

The first dependent variable was defined as the quality of participants’ weekly discussion postings, which were evaluated with a pretest and two
posttests for each type of feedback delivery method during the six-week experiment, using the 4-point scoring rubric. Participants’ initial discussion postings and responses to 1-2 participants within the group were examined separately on an assigned numerical score (1-4) based on the criteria developed by Ertmer and Stepich (2004) relating to the quality of the post (under “Learning Outcomes”; Table 3-3). The scoring rubric provided the students and both instructors, including the researcher of this study, who delivered feedback as the NNEST instructor, with specific guidelines for determining the levels of higher-order thinking within participants’ online postings. The participants were instructed on how to use the scoring rubric prior to the experimental study. For consistency in scoring the quality of the online postings, the researcher and participants used the same scoring rubric. The participants could also consult the rubric for reference and understand how their postings were scored. The scoring rubric was used to repeatedly measure the quality of the participants’ weekly online postings (Ertmer and Stepich 2004, under “Learning Outcomes”). The rubric requires online postings to contribute significantly to moving the discussion forward, for example, by providing concrete examples (from the students’ own experience), describing possible consequences or implications, challenging something that has been posted in the discussion, posing a clarifying question, suggesting a different perspective or interpretation, or pulling in related information from other sources (books, articles, and websites). The participation score for a given week was based on the quality as well as the quantity of messages the participants posted to that discussion.
Table 3-3 The Scoring Rubric

Criterion: Timeliness and quantity of discussion responses
  Excellent (4): 3-4 or more postings; well distributed throughout the week.
  Good (3): 2-3 postings distributed throughout the week.
  Fair (2): 2-3 postings; postings not distributed throughout the week.
  Poor (1): 1-2 postings; postings not distributed throughout the week.

Criterion: Responsiveness to discussion topic and demonstration of knowledge and understanding from assigned readings
  Excellent (4): Readings were understood and incorporated into discussion as relates to topic.
  Good (3): Readings were understood and incorporated into discussion as relates to topic.
  Fair (2): Little use made of readings.
  Poor (1): Little or no use made of readings.

Criterion: Ability of postings to move discussion forward
  Excellent (4): Two or more responses add significantly to the discussion (e.g., identifying important relationships, offering a fresh perspective or critique of a point, offering supporting evidence).
  Good (3): At least one posting adds significantly to the discussion.
  Fair (2): At least two postings supplement or add moderately to the discussion.
  Poor (1): Postings have a questionable relationship to the discussion question and/or readings; they are non-substantive and do little to move the discussion forward.
The criteria for postings focused on the timeliness and quantity of discussion responses, the responsiveness to discussion topics and the demonstration of knowledge and understanding gained from assigned readings, and the ability of postings to move discussions forward. One to two postings that were not distributed throughout the week, made little or no use of assigned readings, and had a questionable relationship to the discussion question and/or readings received one point. Two to three postings that were not distributed throughout the week, made little use of readings, and included at least two postings that supplemented or added moderately to the discussion received two points. Two to three postings that were distributed throughout the week, incorporated an understanding of the readings into the discussion as related to the topic, and included at least one posting adding significantly to the discussion received three points. Finally, three to four postings that were well distributed throughout the week, incorporated an understanding of the readings into the discussion as related to the topic, and included two or more responses adding significantly to the discussion (e.g., identifying important relationships, offering a fresh perspective or critique of a point, offering supporting evidence) received four points. The participants’ scores ranged from 1 to 4 for the pretest, posttest 1, and posttest 2. Participants’ scores were thus holistic, based on all postings during a given period of time. For example, if a student made 3-4 postings for the pretest, each posting was not scored by itself; rather, the multiple postings were averaged to calculate one score.
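For illustration, this holistic weekly score amounts to a simple average of the 1-4 rubric scores assigned to a participant’s postings in a given period. The short Python sketch below shows the idea; the function name and data layout are illustrative, not taken from the study.

    # Holistic weekly score: average the 1-4 rubric scores of all postings
    # a participant made during one measurement period.
    def weekly_score(posting_scores):
        return sum(posting_scores) / len(posting_scores)

    print(weekly_score([3, 4, 3]))   # e.g., three postings -> 3.33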
Participants’ Perceptions

The second dependent variable was defined as student perceptions measured by the audio feedback survey, which examined students’ responses to audio and text-based feedback and is included in Appendix A (Ice 2008). Participants were asked to respond to the survey before and after completing the experiment. The survey focused on gathering participants’ perceptions about the clarity of the instructional voice, perceived motivation and retention, feeling of involvement in the online course, and feeling of instructional care, comparing audio feedback and text-based feedback. The overall scores of the participants’ perceptions were computed by taking the average of participants’ responses on the seven survey questions.
Independent Variables

The three independent variables of this study were (a) level of language proficiency (high versus low), (b) type of feedback delivery
method (audio versus text), and (c) instructors (NNEST or NEST) who were providers of feedback (Figure 3-2).
Figure 3-2. The relationships between the independent variables and the dependent variables.
Level of Language Proficiency

A standardized test of proficiency in English as a Foreign Language, the TOEFL Paper-Based Test (PBT), was used to classify participants by level of language proficiency, creating the first independent variable and assigning participants to proficiency levels. The TOEFL examined participants’ listening comprehension, reading comprehension, and knowledge of English structure/grammar. The total TOEFL PBT score ranges from 310 to 677. Participants who scored above 513 were classified as having a high level of language proficiency, based on the minimum score required for admission into U.S. universities of international students from countries where English is not the native language (“TOEFL Now,” n.d.). Participants scoring 513 and below were classified as having a low level of language proficiency. Consequently, 31 participants were assigned to the higher level of
language proficiency, and 38 participants were assigned to the lower level of language proficiency.
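In code, this classification rule amounts to a single threshold test. The sketch below is illustrative only; the cutoff of 513 comes from the study, while the function itself is an assumption.

    # Assign a proficiency level from a TOEFL PBT total score (range 310-677).
    def proficiency_level(toefl_score, cutoff=513):
        return "high" if toefl_score > cutoff else "low"

    print(proficiency_level(520))    # high
    print(proficiency_level(513))    # low (513 and below -> low)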
Type of Feedback Delivery Method

Following the findings of Ice, Swan-Dagen, and Curtis (as cited in Ice and Richardson 2009) that audio feedback may be most powerful when combined with text and visual markups, embedded audio feedback was provided in this study using Adobe Acrobat Professional, a program that allows instructors to record audio feedback while highlighting the online discussion postings. Adobe Acrobat Professional allowed audio recordings to be embedded into texts. Participants received a PDF file of the highlighted texts and embedded audio feedback; these texts were the participant’s initial posting for that week, saved as a PDF document. The PDF files were then posted as attachments in the Blackboard discussion area. In addition, following Pear and Crone-Todd (2002), both types of feedback in this study were provided in the form of prompts or praise for some specific aspect of the answer (e.g., original examples, paraphrasing), directive information (e.g., information or page numbers), and comments about details and understanding in the answer (226) (Table 3-4). The only difference between the types of feedback was that the audio feedback carried the instructor’s voice and provided more in-depth, detailed comments than the text-based feedback. The order in which embedded audio feedback and text-based feedback were provided was alternated to counterbalance order effects.
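A counterbalanced schedule of this kind can be generated mechanically. The sketch below alternates the two delivery methods week by week and staggers the starting method across participants; it is illustrative only, as the study does not describe any scheduling code.

    # Alternate audio (AF) and text-based (TF) feedback across weeks,
    # starting half the participants on each method to counterbalance order.
    def feedback_schedule(participant_index, n_weeks=6):
        first, second = ("AF", "TF") if participant_index % 2 == 0 else ("TF", "AF")
        return [first if week % 2 == 0 else second for week in range(n_weeks)]

    print(feedback_schedule(0))  # ['AF', 'TF', 'AF', 'TF', 'AF', 'TF']
    print(feedback_schedule(1))  # ['TF', 'AF', 'TF', 'AF', 'TF', 'AF']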
Instructors

The instructors who provided both types of feedback were (1) the NNEST, who was the researcher of this study, a graduate research assistant, and a non-native speaker of English (Russian), and (2) the NEST, who was a graduate teaching assistant in the U.S. and a native speaker of English. In November 2010, the NNEST, who was also the researcher of this study, conducted a training session for the NEST on how to provide both types of feedback. The NNEST also trained the NEST in using Adobe Acrobat Professional, the program used to embed the audio feedback, and in attaching the file to the discussion thread in Blackboard. Both instructors had experience teaching online. Both had experience providing text-based and audio feedback using Adobe Acrobat Professional during the pilot study in the fall of 2010. After the training session, the researcher
checked both types of feedback to validate their quality and accept the format for the study. The training was effective; it helped the instructors maintain a similar quality for both types of feedback. Even though later observations revealed that the NNEST provided longer feedback than the NEST did, the quality of feedback was the same because both instructors provided feedback in the form of prompts or praise for some specific aspect of the answer, directive information, and comments about details and understanding in the answer.

Table 3-4 Examples of Feedback Provided by the Instructors (NNEST and NEST)

NEST: Hello! I enjoyed reading your post! Nice job of relating to the author’s point of view. As you can relate to balancing education and family, what is your specific line in the sand? How long could you be away from your family? Thanks for providing a nice example of a big decision you have to make.

NNEST: This is a very nice reflection on the reading by comparing the values from the text and your own life. I think even if you don’t have the work experience; you still have your own line in the sand which you briefly mentioned – time. I agree with you. But you are not clear on how much time works for you – how you put down not to cross the line, how you define your line in the sand and what values you have to think of. Please think about some argumentation to defend your line – time.
NEST: Hello! Thank you for your reflection about the issues around the gender pay gap. Be careful to support your opinion with examples from the text. Note that the article did not say that “female physicians usually work less hours than male physicians.” Also, you mention that women are not “emotionally strong.” Although there may be a valid reason for why men and women doctors are not paid equally, the reasoning you mention is not stated in the text. Be sure to build on examples from the text. Additionally, be sure to fully answer all the questions. Can you propose an alternative or do you think this is necessary? You do bring up a valid point that possibly women want family-friendly benefits – this is a good way of using the text to support. Thank you for your thoughts! NNEST: I see your clear argument concerning this week reading and the problem discussed. You stated that it is unfair to pay less for women. You also claimed that payment could depend on the amount of hours. However, don’t you think that the author, by arguing the lower salary for women, also argued that women themselves choose lower-paying jobs or to work fewer hours. Amount of hours is not the solution because women initially choose fewer amounts of hours than men. How about the alternative for the family friendly benefits? Do you think that there is any way to find a good solution? You also did not cover how you would see this pay disparity if you were a physician. Please try to answer all weekly questions and use the factual information from the text to support your view. Overall good job by reflecting on this week’s problem.
Procedure

The researcher contacted the International Relations Program at North-Eastern Federal University in Russia in the fall of 2010. Upon receiving the approval of the university personnel, as well as of the Institutional Review Board (IRB) at Purdue University, the researcher began recruiting participants from the International Relations Program in the fall of 2010 for the experiment to be conducted in the spring of 2011. In November 2010, the researcher organized videoconferencing sessions with the volunteer participants to explain the purpose of the study. Once the participants were instructed in the experimental procedures and agreed to participate, the on-site instructor in Russia administered a paper-based TOEFL in order to assign the participants to levels of language proficiency. An approved consent form was signed by all participants (Appendix B). In January 2011, the researcher registered the participants in Blackboard Open Campus. All
volunteer participants received an email with instructions on how to gain access to Blackboard Open Campus. All volunteers were able to read the guidelines for participation in the study and in the online discussion during the six weeks. A week before the experiment, participants completed a demographic survey and the audio feedback survey containing seven Likert-type items examining students’ responses to audio and text-based feedback (Ice 2008). The demographic survey items were multiple-choice and open-ended questions. The survey collected respondents’ demographic information and the native language(s) they spoke (Russian and/or Yakut) to verify participants’ bilingual status. The survey also asked about participants’ previous computer experience (computer games, email, search engines, website development, social networks, and online communication tools like Skype) to determine their computer skills. There were also rating items about the number of online courses participants had previously taken and the audio feedback they had previously received. The participants worked from the computer labs at North-Eastern Federal University in Russia. The entire quasi-experiment took place in Purdue Blackboard Open Campus. The participants were randomly assigned within the proficiency levels to either the NNEST’s group or the NEST’s group to increase the internal validity of the treatment effect. The participants in both the NNEST and NEST groups had access only to their own online group through the Blackboard Course Management System. Guidelines on how to participate in the weekly online discussion were displayed at the beginning of each week. The guidelines also contained instructions for how the participants should respond to others within the groups. The NNEST’s group and the NEST’s group received the same assignments, articles, and questions. The articles used for the online discussions were written in English and covered business-related issues. Participants were able to express their own opinions about relevant problems covered in the articles (Appendix C). Business-related problems of this kind were normally used for class discussions among students majoring in International Relations at North-Eastern Federal University in Russia. Everything provided through the course was in English, and the participants’ postings were allowed only in English. Typical questions used in the field of International Relations were also provided for the study (Appendix C). Examples of the questions are: (a) Is considering an applicant’s financial status a smart admissions policy or a way for the rich to buy their way into college? Is considering an applicant’s financial aid status fair? Would you forgo financial aid (if you’re on the fence) if you think it could boost your
child’s admissions chances? Please reflect by using your own examples and experience to support your arguments/statements; and (b) How would you take on this gender gap? If you were a physician, how else would you see this pay disparity in the field? Do you think it should hurt your salary if you also negotiate for more family-friendly working arrangements? Propose an alternative. The participants were familiar with the problems, and they were able to reflect on their own experience and their own means of practical implementation of business solutions. In addition, the topics were suitable for online discussion from different perspectives, as they included a variety of business-related issues. The following topics were discussed: (a) career opportunities, (b) financial aid, (c) office dress code, (d) gender pay gap, (e) business meal tips, and (f) negotiating in the workplace. The pretest of participants’ online postings was administered during the first week of the experimental study to measure the entry-level quality of their online postings. The scores for posttest 1 were then collected during week 2 and week 3 for AF1 (audio feedback) and TF1 (text-based feedback). Finally, the scores for posttest 2 were collected during week 4 and week 5 for AF2 (audio feedback) and TF2 (text-based feedback). All the scores were therefore collected following the weeks when audio feedback and/or text-based feedback were provided. The pretest, posttest 1, and posttest 2 results were used to measure changes in the quality of online postings over three time periods. The researcher scored all online postings during the six-week study. The results were then arranged according to the type of feedback delivery method (audio versus text) that the participants received during the three time periods (pretest, posttest 1, and posttest 2). For example, after each time period, the scores obtained from the online postings of the participants who received text-based feedback or audio feedback were computed separately. It should be noted that some participants received text-based feedback and some received audio feedback during the same week for the same discussion questions. The researcher then computed the average of the scores obtained after each time period for each type of feedback delivery method separately. The participants usually sent their initial online postings by Wednesday every week. They were then instructed to read others’ postings and respond to 1-2 of them by Friday. Table 3-5 shows a sample of one student’s weekly responses to the question asked, with the weekly participation score given. The instructors usually posted the individual feedback by the end of the week, switching the feedback
delivery method every week. For example, if a participant received text-based feedback during the previous week, he/she was switched to audio feedback for the current week. In general, all participants received three text-based feedback responses and three audio feedback responses during the three time periods. Data on the quality and quantity of online postings were collected every week. At the end of the study, data were collected from the participants’ responses on the post-course survey, provided online, examining students’ responses to audio and text-based feedback.

Table 3-5 Sample of the Student’s Weekly Responses and the Type of Question

Discussion Question #5: How would you take on this gender gap? If you are a physician, how else would you see this pay disparity in the field? Do you think it should hurt your salary if you also negotiate for more family-friendly working arrangements? Propose an alternative.
Student’s Online Postings Initial Response: Very interesting information is given in this text according to salary comparing gender. If I were a physician I would not like this difference in salary only because of not being a man. I find it quite unfair towards women doctors as such attitude shows that here we can observe gender discrimination. So in the case of doctor I wouldn't like the fact that my work is treated as $174,000 a year while men's (the same that I do) is more than mine by nearly 17%. Recently I have noticed that people are more inclined to men surgeon than women and that is why male doctors get more so the quantity of clients plays big role. My mother is a doctor too but she doesn't survey distinctions in gender wages. However there are some other points e.g. length of work, the category and scientific work presence. As for the note of Anthony Lo Sasso that the reason can be explained as the eagerness of women to make their work more flexible and oriented to familyfriendly benefits I think it can be so because the woman should not only work but also take care of her children and to be the hostess of her house. These her duties make things more difficult and being that engaged in everything she has to seek for a job with suitable conditions. In my opinion salary of people who negotiate for family-friendly working arrangements should be cut with regard to concessions that are made. I think it will be right to do so because everybody should have either the same rights or terms of work and sure thing payment. Response to other student: Hi, very nice point of view. I agree that there should be some explanation and I like yours. But there is no such disparity in other professions. Why and what do you think? Score: 3/4
Reliability and Validity

The Scoring Rubric

The rubric used to grade the discussions was developed by Ertmer and Stepich (2004, under “Learning Outcomes”). The purpose of this rubric is to determine the quality of thinking embedded within students’ online postings. The scoring rubric was based on levels of cognitive skill in Bloom’s taxonomy and had been successfully implemented by the researchers in other courses. The use of Bloom’s taxonomy as the basis for the scoring rubric provided a relatively high degree of content validity in distinguishing between higher and lower levels of thinking (Ertmer and Stepich 2004). Using Bloom’s taxonomy, Ertmer and Stepich (2004) determined whether (1) the postings demonstrated knowledge, comprehension, and application; (2) the postings showed analysis, synthesis, or evaluation; or (3) the postings were non-substantive. The scoring rubric by Ertmer and Stepich (2004) provided a relatively high degree of reliability, as the researcher of this study had previously used the rubric to distinguish between levels of quality in students’ online postings (under “Learning Outcomes”). The pilot study in the fall of 2010 was used to conduct an intra-rater reliability analysis to examine the consistency of scoring. The Pearson r correlation was used to calculate the scoring rubric reliability. The researcher used test-retest grading of online postings to check the consistency of scores over time, correlating the scores obtained at one point in time with the scores obtained at a later point by scoring the same postings twice. After two weeks, all online postings were graded again to calculate intra-rater reliability. The intra-rater reliability was .86. However, there was a tendency toward mismatched grading between the excellent and good levels of posting quality. This discrepancy was reduced by awarding the excellent level only when at least two postings identified important relationships, offered a fresh perspective or critique of a point, and offered supporting evidence.
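Test-retest intra-rater reliability of this kind is simply a Pearson correlation between the two rounds of scores. The Python sketch below illustrates the computation; the score values shown are made up for illustration, not the study’s data.

    # Intra-rater reliability: correlate scores from the first grading round
    # with scores for the same postings graded again two weeks later.
    from scipy.stats import pearsonr

    first_round  = [3, 2, 4, 3, 1, 2, 3, 4]   # illustrative scores only
    second_round = [3, 2, 4, 2, 1, 2, 3, 4]
    r, p = pearsonr(first_round, second_round)
    print(round(r, 2))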
The Audio Feedback Survey (Ice 2008)

The seven-item, 5-point Likert-scale survey was developed by Ice in 2008 to examine students’ responses to audio and text-based feedback. The researcher obtained permission from the author to use the instrument for this study. The researcher translated the survey items into Russian. Once translated, all items were back-translated and verified by another
native speaker of Russian, who was a graduate student in the U.S. Both the English and Russian versions of the survey items were then used for this study; all items were suitable for participants at both levels of language proficiency and required less than five minutes to complete. All survey items were positively worded. The participants were asked to complete the survey, which was created using Qualtrics, a web-based survey software available to all Purdue faculty, students, and staff. The same survey was administered before and after the online course. Data from the pre- and post-course surveys for this study were analyzed separately. The pre-course survey was answered by 19 participants, while 55 participants completed the post-course survey at the end of the study. The collected data from the post-course survey were entered into PASW (Predictive Analytics Software) Statistics 18.0.2 to run a reliability test. The survey had high internal consistency reliability, with a reported Cronbach’s alpha coefficient of .87 and inter-item correlations ranging from .32 to .70 (Ice 2008).
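Cronbach’s alpha for a k-item scale can be computed directly from the item variances and the variance of the summed scale, alpha = k/(k-1) × (1 − Σ item variances / variance of total). The sketch below implements this standard formula in Python; the response matrix is a placeholder, not the study’s data.

    # Cronbach's alpha: k/(k-1) * (1 - sum(item variances) / variance(total)).
    import numpy as np

    def cronbach_alpha(items):                 # items: respondents x items array
        items = np.asarray(items)
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_vars / total_var)

    responses = np.array([[4, 5, 4], [3, 4, 4], [5, 5, 5], [2, 3, 2]])
    print(round(cronbach_alpha(responses), 2))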
Data Analysis

In this study, the independent variables were the type of feedback delivery method, the instructors’ language background (NNEST and NEST), and the participants’ level of language proficiency (high and low). The two groups (NNEST versus NEST) experienced both audio feedback and text-based feedback, provided by the NNEST or the NEST, and two dependent variables were measured: (a) the quality of online postings at pretest and at two posttests during the six-week quasi-experiment, and (b) the participants’ perceptions (Figure 3-3).
Figure 3-3. The data analysis diagram.
All of the participants’ demographic data, weekly discussion postings, and audio feedback survey data were analyzed using PASW (Predictive Analytics Software) Statistics 18.0.2 and Microsoft Office Excel 2010. As a preliminary analysis, descriptive statistics for missing data (participation versus non-participation), including frequencies and percentages, were calculated. It should be noted that the missing data analysis was not planned; however, given that non-participation occurred among the students, the researcher decided to run the analysis to examine the type of students who did not complete the experiment. In order to assess the impact of instructors’ language background and participants’ level of language proficiency on non-participation, logistic regression analyses were performed. Then, overall descriptive statistics, including the means, standard deviations, and medians of the two dependent variables, were used to analyze the collected data. In addition, histograms were constructed to examine the shape of the distribution of the weekly posting scores and the audio feedback survey scores.
Research Questions One and Two

The first dependent variable, used to answer the first two research questions, was the quality of weekly discussion postings, measured by assigned numerical scores (1-4) based on the criteria developed by Ertmer and Stepich (2004). The means of the test scores across the three time periods (pretest, posttest 1, and posttest 2) were calculated for the NNEST and NEST groups, as well as by level of language proficiency. The first research question focused on the main effects, while the focus of the second research question was on the interaction effects. Finally, a mixed-effect ANOVA was run to examine the overall effect, with the type of feedback as a within-subjects factor and the instructors’ language background and the participants’ level of language proficiency as between-subjects factors.
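For readers who wish to reproduce this kind of analysis outside PASW/SPSS, the pingouin library in Python offers a comparable mixed ANOVA with one within-subject and one between-subjects factor at a time. The sketch below is an assumption-laden illustration (the study used PASW and included two between-subjects factors; the file and column names here are hypothetical).

    # One within factor (time) x one between factor (instructor) mixed ANOVA.
    import pandas as pd
    import pingouin as pg

    df = pd.read_csv("posting_scores.csv")   # columns: id, time, instructor, score
    aov = pg.mixed_anova(data=df, dv="score", within="time",
                         subject="id", between="instructor")
    print(aov)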
Research Questions Three and Four

The second dependent variable, used to answer the last two research questions, was the participants’ perceptions of embedded audio feedback and text-based feedback, measured by the audio feedback survey (Ice 2008). Frequencies and percentages were calculated for each survey item, for the overall groups’ results, and by instructors’ language background and level of language proficiency. Then, an independent t-test was employed
to examine the differences between the groups (NNEST versus NEST) and the levels of language proficiency (high versus low). Differences were evaluated for statistical significance at an alpha level of .05. Further, a two-way between-groups ANOVA was conducted to examine (a) the main effects of the independent variables (instructors’ language background and participants’ level of language proficiency) and (b) the interaction effect between these two variables. Prior to the analysis, all relevant underlying assumptions for this statistical method, including the homogeneity of variances and the sphericity of variance-covariance matrices, were checked to determine whether the survey data were suitable for the two-way between-groups ANOVA.
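A two-way between-groups ANOVA with an interaction term can be expressed as a linear model. The statsmodels sketch below is a minimal illustration; the CSV file and column names are assumptions, not artifacts of the study.

    # Two-way ANOVA: main effects of instructor and proficiency plus interaction.
    import pandas as pd
    import statsmodels.formula.api as smf
    from statsmodels.stats.anova import anova_lm

    df = pd.read_csv("survey_scores.csv")    # columns: instructor, proficiency, score
    model = smf.ols("score ~ C(instructor) * C(proficiency)", data=df).fit()
    print(anova_lm(model, typ=2))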
Threats to Validity

Internal Validity

During the quasi-experiment, there were technical issues that prevented students from listening to instructor feedback (e.g., slow Internet connections and software problems), which confounded the effects of the experimental variables. For example, some students who could not listen to the audio feedback from the instructors continued posting anyway; therefore, some decreases in students’ scores may be attributable to not being able to listen to instructor feedback. The problems occurred despite the researcher having checked connectivity and software availability in the computer labs in Russia and having given the participants an opportunity to adapt to the online environment in the fall of 2010. However, the use of Dropbox (http://www.dropbox.com) and email helped keep the files and two-way communication flowing during the experimental study. Also, the study used a repeated-measures design to investigate the difference between audio and text feedback and to control for individual variance that might otherwise have accounted for error, so as to attain a high level of internal validity. Finally, there was a problem with the non-participation of 15 participants, mostly at the low level of language proficiency, which led to not-at-random missingness as a threat to internal validity.
External Validity

Using volunteers presented an external validity problem because the volunteer participants in this quasi-experimental study were atypical of the population of EFL students to which generalizations were made; i.e., the participants were bilinguals who spoke two native languages, Russian and
Yakut (Sakha). Another problem was a reactive effect among participants known as the Hawthorne Effect (Ary et al. 2006, 301). The participants in this study knew that they had been selected for the experiment, and this could have affected the way they responded to the experimental treatment.
CHAPTER FOUR

RESULTS
Overview

This study aimed to determine whether there was a significant difference in the quality of the participants’ weekly discussion postings by type of feedback delivery method (audio versus text), instructor’s language background (NNEST versus NEST), and participants’ level of language proficiency (high versus low) when they participated in a six-week quasi-experimental study in spring 2011. In addition, this study sought to examine possible differences between participants’ perceptions of audio and text-based feedback by instructor’s language background as well as by participants’ level of language proficiency. The findings are reported in the order of the following research questions:

RQ1: Is there a significant difference in scores on the quality of weekly discussion postings by type of feedback delivery method, instructor’s language background, and/or student’s level of language proficiency?

RQ2: Is there any interaction effect between the type of feedback delivery method, instructor’s language background, and/or student’s level of language proficiency on the scores on the quality of weekly discussion postings?

RQ3: Is there a significant difference in scores on perceptions of the type of feedback delivery method by instructor’s language background and/or student’s level of language proficiency?

RQ4: Is there any interaction effect between instructor’s language background and student’s level of language proficiency on scores on perceptions of the type of feedback delivery method?
Missing Data Analysis

Of the 69 participants who were involved in the quasi-experimental study at the beginning of the semester, 15 did not complete the online course. The purpose of the missing data analysis was therefore to examine the reasons for dropping the online course and the type of students who dropped it. Figure 4-1 shows the results for missingness as functions of the participants’ level of language proficiency and the instructors’ language background. Seven participants in the NEST’s group, two at the high level of language proficiency and five at the low level, did not complete the course. Eight participants at the low level of language proficiency in the NNEST’s group did not complete the course. Thus, the majority (86.7%) of those who did not complete the course were at the low level of language proficiency.
Figure 4-1. The frequency of students who did not complete the course, by instructors’ language background and level of language proficiency.
Table 4-1 shows that out of the 54 who completed the course, 29 participants were in the NEST’s group, while 25 completed the course in the NNEST’s group. In addition, out of 29 in the NEST’s group, 14 were at the high level of language proficiency while 15 were at the low level of language proficiency. Finally, out of 25 in the NNEST’s group, 15 were at the high level while only 10 completed the course at the low level of language proficiency.
Table 4-1 Results for Participation in Online Course across the NNEST/NEST Groups and Participants’ Language Proficiency Level

                          Instructors
Language Proficiency   NNEST    NEST    Total
Low Proficiency          10       15      25
High Proficiency         15       14      29
Total                    25       29      54
The analysis of missing data revealed that no one participated only during the weeks when audio feedback was provided (Table 4-2). However, one participant posted only when text-based feedback was provided; this participant was in the NNEST’s group at the low level of language proficiency. Next, two participants, one from the NEST’s group and one from the NNEST’s group, both at the low level of language proficiency, never joined the discussion when either text-based feedback or audio feedback was provided. Twelve participants posted occasionally.

Table 4-2 Results for Non-Participation by Instructor’s Language Background and Participants’ Language Level

                                 NNEST    NEST    Total
AF only              Low           -        -       -
                     High          -        -       -
TF only              Low           1        -       1
                     High          -        -       -
Neither AF nor TF    Low           1        1       2
                     High          -        -       -
Not Consistent       Low           6        4      10
                     High          -        2       2
Total                              8        7      15
Logistic regression analysis was performed to further assess the impact of the participants’ level of language proficiency and the instructors’ language background on non-participation during the course. The model explained 12.1% (Cox & Snell R Square) or 18.6% (Nagelkerke R Square) of the variance in participation status, and it correctly classified 78.3% of
the cases. Table 4-3 shows that the participants’ level of language proficiency made a unique, statistically significant contribution to the model, with an odds ratio of 7.63 (Wald = 6.308, p = .012). This indicates that the odds of non-participation were about 7.6 times higher for participants at the low level of language proficiency than for participants at the high level.

Table 4-3 Results of Logistic Regression for Non-Participation

              B        S.E.    Wald      df    Sig.    Exp(B)
Instructor    -.337    .622      .293     1    .588      .714
Proficiency   2.032    .809     6.308     1    .012     7.626
Constant     -2.513    .782    10.328     1    .001      .081
Additional logistic regression analysis was performed to assess the impact of the participants’ total TOEFL score on non-participation during the course (Table 4-4). The odds ratio of .980 for the TOEFL score was less than 1, meaning that each additional point on the TOEFL multiplied the odds of non-participation by .980; equivalently, the odds of participation increased by a factor of about 1.02 per point. Because odds ratios compound multiplicatively, a 10-point difference in TOEFL scores (e.g., 487 versus 477) corresponds to odds of non-participation about .980^10 ≈ .82 times as high. Thus, as TOEFL scores increase, participation increases as well.

Table 4-4 Results of Logistic Regression for Non-Participation Based on the TOEFL Score

              B        S.E.     Wald     df    Sig.      Exp(B)
TOEFL Score   -.020    .008     6.437     1    .011        .980
Constant      8.766    3.904    5.041     1    .025    6409.629
Results for Research Questions One and Two

The first two research questions examined possible differences by type of feedback delivery method (audio versus text), instructor’s language background (NNEST versus NEST), and participants’ level of language proficiency (high versus low), as well as the interaction effects among these factors, on the scores for the quality of the weekly discussion postings. The weekly discussion postings were scored on the assigned numerical scale (1-4). The data were analyzed using descriptive statistics
and a mixed-effect ANOVA with three time points as a within-subjects factor and instructor’s language background and participants’ language proficiency level as between-subjects factors, for each type of feedback delivery method. Because of the small sample size (n = 54), the inferential analysis was run to examine possible differences for each type of feedback delivery method (audio versus text) separately, by level of language proficiency (high versus low) and by instructor’s language background (NNEST versus NEST).
Descriptive Results

Figure 4-2 presents the overall mean score change in the quality of online postings for both types of feedback delivery method across the three time periods (pretest, posttest 1, and posttest 2). The participants’ scores on the quality of online postings increased over time for both feedback delivery methods.
Figure 4-2. Mean score change in quality of posting by type of feedback.
Furthermore, this study examined the effects of the instructors’ language background (NNEST versus NEST) and participants’ level of language proficiency (high versus low) for each type of feedback delivery method separately (audio versus text) to see if there were possible differences in changes of scores over time by these factors.
Results by Instructors’ Language Background and the Type of Feedback Delivery Method

The results indicated a consistent increase in scores on the quality of online postings for both groups (NNEST and NEST) across all three time periods when audio feedback and text-based feedback were provided. The graphs below show that the NNEST’s group had higher average posting scores than the NEST’s group for both types of feedback delivery method (Figure 4-3). However, the discrepancy between the two groups was larger for audio feedback than for text-based feedback, and it was largest at posttest 2 when audio feedback was provided. This trend was not observed for text-based feedback, which implies a possible interaction effect between the type of delivery method and the instructors’ language background. A mixed-effect ANOVA with the instructors’ language background as a between-subjects factor and three time points as a within-subjects factor was performed for each delivery method. The results of Mauchly’s Test of Sphericity indicated that the assumption of sphericity was not violated for audio feedback (W = .98, p = .62) or for text-based feedback (W = .97, p = .49); therefore, it is reasonable to assume that the variance-covariance structure among the three time points is similar for both instructors (NNEST and NEST). For the main effect of time, the results indicate that the scores changed over time for both types of feedback delivery method: for audio feedback (F [2, 104] = 13.52, p < .05) with a large effect size (partial η² = .21) and for text-based feedback (F [2, 104] = 29.58, p < .05) with a large effect size (partial η² = .36). This means that both groups increased their online posting quality scores across the three time periods when they received both types of feedback, as summarized in Table 4-5. As for the main effect of group (instructors’ language background), for audio feedback the results showed a significant difference (F [1, 52] = 10.05, p = .006) between the groups with a moderate to large effect size (partial η² = .16). However, for text-based feedback, there was no significant difference (F [1, 52] = 3.27, p = .08) in the quality of online postings between the NEST’s group and the NNEST’s group.
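As a quick check, the reported effect sizes can be recovered from the F statistics and their degrees of freedom using the identity partial η² = F·df1 / (F·df1 + df2). The short sketch below verifies the two time effects reported above.

    # Partial eta squared recovered from an F statistic and its df.
    def partial_eta_sq(F, df1, df2):
        return F * df1 / (F * df1 + df2)

    print(round(partial_eta_sq(13.52, 2, 104), 2))  # 0.21, audio feedback
    print(round(partial_eta_sq(29.58, 2, 104), 2))  # 0.36, text-based feedback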
Figure 4-3. Mean score change in quality of posting by type of feedback and instructors' language background.
For the interaction effect between time and instructors' language background, the results indicated no significant interaction between the change in scores over time and instructors' language background for audio feedback (F [2, 104] = 1.47, p = .24) or text-based feedback (F [2, 104] = .31, p = .73), meaning that the pattern of the change in scores over time is similar for both groups (NNEST and NEST). Although Figure 4-3 above suggests possible interaction effects between the type of delivery method and the instructors' language background, the ANOVA did not support this observation from the descriptive results.

Table 4-5 Quality of Online Posting Scores for NEST and NNEST across Three Time Periods

                          Pretest M(SD)   Posttest 1 M(SD)   Posttest 2 M(SD)
NEST (n=29)
  Text-Based Feedback     1.83 (.54)      2.21 (.62)         2.62 (.73)
  Audio Feedback          1.83 (.54)      2.17 (.66)         2.28 (.96)
NNEST (n=25)
  Text-Based Feedback     2.12 (.53)      2.36 (.64)         2.92 (.91)
  Audio Feedback          2.12 (.53)      2.60 (.96)         3.00 (.91)
Results by the Level of Language Proficiency and the Type of Feedback Delivery Method

The results for the high and low levels of language proficiency also showed increasing trends in the scores on the quality of online postings across all three time periods for both types of feedback delivery method (Figure 4-4). The participants at the high level of language proficiency generally had higher average posting scores than the participants at the low level: at the first posttest the low-proficiency participants outperformed the high-proficiency participants, but at the pretest and the second posttest the low-proficiency participants scored lower. The discrepancy between the levels of language proficiency was larger for audio feedback than for text-based feedback; Figure 4-4 below shows that the discrepancy was largest for audio feedback at posttest 2. This may imply a possible interaction effect between the type of delivery method and the levels of language proficiency. A mixed-effect ANOVA with the level of language proficiency as a between factor and the three time points as a within factor was performed for each type of feedback delivery method. Mauchly's Test of Sphericity indicated that the assumption of sphericity was not violated for audio feedback (W = .98, p = .55) or for text-based feedback (W = .97, p = .46). Therefore, the variance-covariance structure among the three time points is similar for both levels of language proficiency. For the main effect of time, the results indicate that the scores changed over time for both types of feedback delivery method: for audio feedback (F [2, 104] = 13.03, p < .05) with a large effect size (η² = .20) and for text-based feedback (F [2, 104] = 29.72, p < .05) with a large effect size (η² = .36). This means that both levels of language proficiency increased their quality of online posting scores across the three time periods under both types of feedback, as summarized in Table 4-6. As for the main effect of group (the levels of language proficiency), the results did not reveal any significant difference in the quality of online postings between levels of language proficiency for audio feedback (F [1, 52] = 2.65, p = .11) or for text-based feedback (F [1, 52] = .28, p = .60). However, for the interaction effect between time and the levels of language proficiency, the results indicated a significant interaction for audio feedback between the change in scores and participants' level of language proficiency (F [2, 104] = 4.73, p = .01). This means that the pattern of the change in scores over time differed between the two language proficiency levels when participants received audio feedback. For text-based feedback there was no significant interaction between the change in scores over time and language proficiency levels (F [2, 104] = .82, p = .45), meaning that the levels did not differ when receiving text-based feedback. Figure 4-4 depicts the significant interaction effect for audio feedback: the participants at the low level of language proficiency scored lower at posttest 2 than the participants at the high level of language proficiency.
Figure 4-4. Mean score change in quality of posting by type of feedback and the levels of language proficiency.
To summarize, both types of feedback delivery method (audio feedback and text-based feedback) showed increasing trends over time, meaning that the participants increased the quality of their online postings. However, for audio feedback, there was a significant difference between the groups (NNEST and NEST) with a moderate to large effect size (η² = .16), suggesting that 16% of the variance in the quality of online postings can be accounted for by the instructors' language background. Moreover, there was a significant interaction effect between the level of language proficiency and the type of delivery method, meaning that the patterns of the changes in scores differed between the levels of language proficiency.

Table 4-6 Quality of Online Posting Scores for High and Low Level of Language Proficiency across Three Time Periods

                          Pretest M(SD)   Posttest 1 M(SD)   Posttest 2 M(SD)
High (n=29)
  Text-Based Feedback     2.03 (.57)      2.24 (.51)         2.83 (.93)
  Audio Feedback          2.03 (.57)      2.34 (.81)         2.93 (.80)
Low (n=25)
  Text-Based Feedback     1.88 (.53)      2.32 (.75)         2.68 (.69)
  Audio Feedback          1.88 (.53)      2.40 (.87)         2.24 (1.09)
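For reference, the effect sizes reported in this chapter appear to follow the common SPSS convention of partial eta squared, which expresses the proportion of the relevant variance attributable to an effect; under that assumption it is computed as

$$\eta_p^2 = \frac{SS_{\text{effect}}}{SS_{\text{effect}} + SS_{\text{error}}}$$

so that, for example, η² = .16 means that roughly 16% of the relevant variance in posting quality is associated with the instructors' language background.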
As mentioned before, due to the small sample size, a mixed analysis with two within and two between factors was not feasible. Alternatively, a mixed-effect ANOVA was run with the type of feedback delivery method as a within factor and the instructors' language background and the participants' levels of language proficiency as between-subjects factors. To run this analysis, the two posttest scores for each delivery method were averaged to obtain one score per delivery method per participant, and this averaged factor was treated as the within factor. The researcher therefore focused on reporting the interaction effect of the type of feedback delivery method by instructors' language background (F [2, 100] = 1.21, p = .30) and by levels of language proficiency (F [2, 100] = .73, p = .48). This means that, regardless of the type of feedback delivery method, the pattern of the change in scores is similar for both groups (NNEST and NEST) as well as for the high and low levels of language proficiency (Figure 4-5).
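A minimal sketch of the averaging and reshaping step, assuming hypothetical wide-format data with one column per delivery method and posttest (all column and file names are illustrative, not the study's actual data):

```python
import pandas as pd

# Hypothetical wide-format data: one row per participant, with the two
# posttest scores recorded separately for each feedback delivery method.
df = pd.read_csv("posttest_scores.csv")

# Average the two posttests within each delivery method, yielding one
# score per method per participant.
df["audio"] = df[["audio_post1", "audio_post2"]].mean(axis=1)
df["text"] = df[["text_post1", "text_post2"]].mean(axis=1)

# Reshape to long format so delivery method can serve as the within
# factor, with instructor background and proficiency level between.
long_df = df.melt(id_vars=["subject", "instructor", "proficiency"],
                  value_vars=["audio", "text"],
                  var_name="method", value_name="score")
```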
Figure 4-5. Mean score change by instructors' language background and by level of language proficiency for text-based feedback and audio feedback.
Results for Research Question Three

The third research question examined the participants' perceptions by instructors' language background (NNEST versus NEST) and participants' level of language proficiency (high versus low). The audio feedback survey (Ice 2008) was used to measure the participants' perceptions of
audio and text-based feedback by instructors' language background and participants' level of language proficiency. The researcher administered pre- and post-course surveys on perceptions of the type of feedback (audio versus text-based) in order to examine differences between the NNEST's and NEST's groups before and after the study. However, only 19 of the 69 participants enrolled in the course completed the pre-course survey at the beginning of the semester. Thus, because of this small sample size (n=19), and given that the research questions of this study did not aim to investigate differences in perceptions before and after the quasi-experiment, the pre/post comparison on the audio feedback survey (Ice 2008) is reported in this section only to indicate whether the participants' perceptions of the type of feedback delivery method changed over the online course. The descriptive statistics of the pre- and post-course survey data are shown in Table 4-7. Means, SDs, and medians were calculated to measure the central tendency of the pre- and post-course survey scores. Given the small sample size on the pre-course survey (n=19), non-parametric statistics were used for this portion of the study. A Wilcoxon signed-rank test revealed a statistically significant difference in perceptions of the type of feedback during the study, z = -1.99, p = .047. The median score on perception of the type of feedback increased from the pre-course survey (Md = 3.71) to the post-course survey (Md = 4.00).

Table 4-7 Pre and Post Course Survey Results (n=19)

NNEST/NEST Groups   N    M      SD    Md     Range      z       p
Pre                 19   3.22   .99   3.71   .00-5.00   -1.99   .047
Post                19   3.89   .90   4.00   .00-5.00

* Significance level at .05
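A minimal sketch of this test in Python, using illustrative paired scores rather than the study's actual data; SciPy reports the W statistic and a p-value, while the z value given in the text reflects the normal approximation used by SPSS:

```python
import numpy as np
from scipy.stats import wilcoxon

# Illustrative paired survey scores (1-5 scale) for the 19 participants
# who completed both the pre- and post-course surveys.
pre = np.array([3.7, 2.9, 4.1, 3.3, 2.6, 3.9, 3.0, 4.3, 3.4, 2.7,
                3.6, 3.1, 4.0, 2.9, 3.3, 3.7, 2.4, 3.1, 3.4])
post = np.array([4.1, 3.6, 4.3, 3.9, 3.1, 4.0, 3.7, 4.6, 3.9, 3.3,
                 4.1, 3.4, 4.3, 3.6, 3.7, 4.1, 3.0, 3.6, 4.0])

# Wilcoxon signed-rank test on the paired differences.
stat, p = wilcoxon(pre, post)
print(f"W = {stat:.2f}, p = {p:.3f}")
```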
Overall Groups Post Course Survey Results

Given the small sample size on the pre-course survey (n=19), to answer the research question of whether there was a significant difference in scores on perceptions by instructor's language background and/or level of language proficiency, the results of the post-course survey are reported in this
section. The results in Table 4-8 indicate that the majority of participants tended to choose "agree" for preferring to receive audio feedback rather than text-based feedback (M=3.63) on the scale ranging from 1 (Strongly Disagree) to 5 (Strongly Agree). The participants preferred audio feedback to text-based feedback because audio feedback is clear (item #1) and personal (item #6): Table 4-8 shows that the items on the clarity of the instructor's voice (item #1) and personalization (item #6) had the highest means across all the items (M=3.98 and M=3.85). However, the items on motivation (item #4) and retention (item #5) had the lowest means across all the items. This possibly means that the majority of the participants, who were nonnative English speakers, tended to neither agree nor disagree that audio feedback motivated them and helped them retain information better than text-based feedback. The results for each individual survey item are given in Appendix D.

Table 4-8 Results of Survey Individual Items (n=55)

#1. When using audio feedback, inflection in the instructor's voice made his/her intent clear. (M = 3.98, SD = 1.25, Md = 4.00)
#2. The instructor's intent was clearer when using audio than text. (M = 3.73, SD = 0.95, Md = 4.00)
#3. Audio comments made me feel more involved in the course than text based comments. (M = 3.65, SD = 1.11, Md = 4.00)
#4. Audio comments motivated me more than text based comments. (M = 3.27, SD = 1.31, Md = 3.00)
#5. I retained audio comments better than text based comments. (M = 3.16, SD = 1.30, Md = 3.00)
#6. Audio comments are more personal than text based comments. (M = 3.85, SD = 1.16, Md = 4.00)
#7. Receiving audio comments made me feel as if the instructor cared more about me and my work than when I received text based comments. (M = 3.76, SD = 1.15, Md = 4.00)
Overall score: M = 3.63, SD = 0.88, Md = 4.00
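A minimal sketch of these descriptive statistics in Python, assuming a hypothetical response file with one column per survey item (all column and file names are illustrative):

```python
import pandas as pd

# Hypothetical survey responses: one row per participant, one column
# per Likert item (1 = Strongly Disagree ... 5 = Strongly Agree).
survey = pd.read_csv("post_course_survey.csv")
items = [f"item{i}" for i in range(1, 8)]

# Per-item mean, SD, and median, as in Table 4-8.
print(survey[items].agg(["mean", "std", "median"]).T.round(2))

# Overall perception score: the mean across the seven items.
survey["overall"] = survey[items].mean(axis=1)
print(survey["overall"].agg(["mean", "std", "median"]).round(2))
```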
Overall Results by Instructors' Language Background

Figure 4-6 shows the histograms of the overall perception of the type of feedback delivery method for all participants. The scores were averaged across the comments on audio feedback and text-based feedback for all participants. The scores on overall perception were normally distributed, with most of the scores occurring in the center (M = 3.70 versus 3.57; skewness = -0.43 versus -0.28; kurtosis = -0.52 versus -0.74). The ratings ranged from 1 (strongly disagree) to 5 (strongly agree). The histograms in Figure 4-6 indicate that the NEST's group had more variability in participants' perception of the feedback delivery method than the NNEST's group (SD = 0.94 versus 0.81).
Figure 4-6. Frequency distributions of the average audio feedback perception by instructors’ language background.
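A minimal sketch of these distribution checks in Python, assuming a hypothetical file of per-participant overall perception scores with a group column (all names are illustrative):

```python
import pandas as pd
from scipy.stats import skew, kurtosis

# Hypothetical per-participant overall perception scores with the
# instructor's language background (NNEST vs. NEST) as a group column.
df = pd.read_csv("perception_scores.csv")

for group, scores in df.groupby("instructor")["overall"]:
    # kurtosis() returns Fisher kurtosis (normal distribution = 0),
    # matching the near-zero values reported in the text.
    print(group,
          f"M = {scores.mean():.2f}, SD = {scores.std():.2f},",
          f"skew = {skew(scores):.2f}, kurt = {kurtosis(scores):.2f}")
```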
Table 4-9 presents the results for perceptions in the NNEST's group and the NEST's group. The participants in the NNEST's group responded to the seven questions with a mean of 3.70
while the mean for the seven questions in the NEST's group was 3.57. This possibly means that the participants in the NNEST's group had a more positive perception of audio feedback than the NEST's group. The standard deviation was .94 for the NEST's group and .81 for the NNEST's group. An independent t-test revealed no significant difference in perceptions of the type of feedback delivery method between the NEST's group (M = 3.57, n = 30) and the NNEST's group (M = 3.70, n = 25), p = .59. Similar to the overall results, the items on clarity of the instructor's voice (item #1) and personalization of feedback (item #6) had very high scores in the NNEST's group, meaning that the participants, who were nonnative English-speaking students, perceived greater clarity of voice (M=4.28) and personalization (M=4.08) when they received audio feedback from an instructor who was also a nonnative English speaker. In addition, the items on motivation (item #4) and retention (item #5) again had the lowest means across all the items for both instructors, but these items were rated higher for the NEST than for the NNEST, meaning that the nonnative English-speaking students perceived greater motivation and retention when receiving audio feedback from the instructor who was a native speaker of English.

Table 4-9 Results of Survey Items by Instructors' Language Background

#1. When using audio feedback, inflection in the instructor's voice made his/her intent clear. NEST (n=30): M = 3.73, SD = 1.46, Md = 4.00; NNEST (n=25): M = 4.28, SD = .89, Md = 5.00
#2. The instructor's intent was clearer when using audio than text. NEST: M = 3.67, SD = .99, Md = 4.00; NNEST: M = 3.80, SD = .91, Md = 4.00
#3. Audio comments made me feel more involved in the course than text based comments. NEST: M = 3.57, SD = 1.07, Md = 3.00; NNEST: M = 3.76, SD = 1.17, Md = 4.00
#4. Audio comments motivated me more than text based comments. NEST: M = 3.47, SD = 1.28, Md = 4.00; NNEST: M = 3.04, SD = 1.34, Md = 3.00
#5. I retained audio comments better than text based comments. NEST: M = 3.27, SD = 1.34, Md = 4.00; NNEST: M = 3.04, SD = 1.27, Md = 3.00
#6. Audio comments are more personal than text based comments. NEST: M = 3.67, SD = 1.15, Md = 4.00; NNEST: M = 4.08, SD = 1.15, Md = 4.50
#7. Receiving audio comments made me feel as if the instructor cared more about me and my work than when I received text based comments. NEST: M = 3.63, SD = 1.22, Md = 4.00; NNEST: M = 3.92, SD = 1.08, Md = 4.00
Overall score average: NEST: M = 3.57, SD = .94, Md = 3.00; NNEST: M = 3.70, SD = .81, Md = 4.00
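A minimal sketch of the independent t-test in Python, using simulated scores that merely mimic the reported group descriptives (the study's own data are not reproduced here, so the numbers are illustrative only):

```python
import numpy as np
from scipy.stats import ttest_ind

# Simulated overall perception scores per group, loosely matching the
# reported descriptives (NEST: M = 3.57, SD = .94; NNEST: M = 3.70,
# SD = .81); values are illustrative only.
rng = np.random.default_rng(0)
nest = rng.normal(3.57, 0.94, size=30).clip(1, 5)
nnest = rng.normal(3.70, 0.81, size=25).clip(1, 5)

# Independent-samples t-test comparing the two groups' means.
t, p = ttest_ind(nest, nnest)
print(f"t = {t:.2f}, p = {p:.2f}")
```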
Overall Results by Participants' Level of Language Proficiency

Table 4-10 presents the results for perceptions of the type of feedback delivery method by level of language proficiency (high versus low). The scores were averaged across the comments on audio and text-based feedback for all participants. Figure 4-7 presents the histograms of perception of the type of feedback delivery method by participants' level of language proficiency; the scores were normally distributed, with most of the scores occurring in the center (M = 3.63 versus 3.63; skewness = -0.48 versus -0.23; kurtosis = -0.22 versus -0.34). The ratings ranged from 1 (strongly disagree) to 5 (strongly agree). The histograms show that the low level of language proficiency had more variability in participants' perception of the feedback delivery method than the high level of language proficiency (SD = 0.90 versus 0.88).
Figure 4-7. Frequency distributions of the average audio feedback perception by participants’ level of language proficiency.
The final results revealed that the participants at both levels of language proficiency rated the survey items similarly (M=3.63). This possibly means that the participants at the high and low levels of language proficiency had similar perceptions of the type of feedback delivery method. On the scale ranging from 1 (Strongly Disagree) to 5 (Strongly Agree), the participants at both levels of language proficiency tended to prefer receiving audio feedback rather than text-based feedback. The standard deviation was .88 for the high level of language proficiency and .90 for the low level. An independent t-test revealed no significant difference in feedback perception between the high level of language proficiency (M = 3.63, n = 27) and the low level of language proficiency (M = 3.63, n = 28), p = .99. For the participants at the high level of language proficiency, the mean ratings for item #1 on clarity of the instructor's voice (M=4.19), item #3 on feeling of involvement in the course (M=3.70), and item #4 on motivation (M=3.33) were higher than those at the low level of language proficiency. On the contrary, the participants at the low level of language proficiency rated item #2 on the clarity of the instructor's intent (M=3.79), item #5 on retention (M=3.18), item #6 on personalization (M=3.93), and item #7 on the instructor's care (M=3.93) higher than the participants at the high level of language proficiency. Thus, although there were some differences in perception for specific items, students' language proficiency was not related to their overall perception of audio feedback.

Table 4-10 Results of Survey Items by Participants' Levels of Language Proficiency

#1. When using audio feedback, inflection in the instructor's voice made his/her intent clear. High (n=27): M = 4.19, SD = 1.04, Md = 5.00; Low (n=28): M = 3.79, SD = 1.42, Md = 4.00
#2. The instructor's intent was clearer when using audio than text. High: M = 3.67, SD = 0.92, Md = 4.00; Low: M = 3.79, SD = 0.10, Md = 4.00
#3. Audio comments made me feel more involved in the course than text based comments. High: M = 3.70, SD = 1.03, Md = 4.00; Low: M = 3.61, SD = 1.20, Md = 4.00
#4. Audio comments motivated me more than text based comments. High: M = 3.33, SD = 1.36, Md = 3.00; Low: M = 3.21, SD = 1.29, Md = 3.00
#5. I retained audio comments better than text based comments. High: M = 3.15, SD = 1.20, Md = 3.00; Low: M = 3.18, SD = 1.42, Md = 3.00
#6. Audio comments are more personal than text based comments. High: M = 3.78, SD = 1.19, Md = 4.00; Low: M = 3.93, SD = 1.15, Md = 4.00
#7. Receiving audio comments made me feel as if the instructor cared more about me and my work than when I received text based comments. High: M = 3.59, SD = 1.15, Md = 3.00; Low: M = 3.93, SD = 1.15, Md = 4.00
Overall score average: High: M = 3.63, SD = 0.88, Md = 4.00; Low: M = 3.63, SD = 0.90, Md = 4.00
Results for Research Question Four

In order to answer the last research question of whether there was a significant interaction effect on scores for the participants' perceptions of audio and text-based feedback by instructors' language background and participants' level of language proficiency, a two-way between-groups ANOVA on the survey scores was conducted. Table 4-11 shows that the main effect of instructors' language background was not statistically significant, F [1, 51] = .31, p = .58. The main effect of participants' level of language proficiency was also not statistically significant, F [1, 51] = .03, p = .87. The results also indicated that there was no statistically significant interaction effect between the instructors' language background and participants' level of language proficiency, F [1, 51] = .92, p = .34 (Figure 4-8).

Table 4-11 Two-Way Between-Groups ANOVA: Effect of Instructors' Language Background and Participants' Levels of Language Proficiency

Source                       Type III Sum of Squares   df   Mean Square   F      p
Instructors                  0.25                      1    0.25          0.31   0.58
Proficiency                  0.02                      1    0.02          0.03   0.87
Instructors × Proficiency    0.74                      1    0.74          0.92   0.34
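A minimal sketch of this two-way between-groups ANOVA in Python, assuming a hypothetical file of per-participant overall perception scores; the statsmodels formula interface stands in for the SPSS analysis, and all names are illustrative:

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical data: one overall perception score per participant plus
# the two between-subjects factors, instructor background (NNEST/NEST)
# and proficiency level (high/low).
df = pd.read_csv("perception_scores.csv")

# Sum-to-zero contrasts keep the Type III sums of squares meaningful
# when the cell sizes are unequal, as they are here.
model = ols("overall ~ C(instructor, Sum) * C(proficiency, Sum)",
            data=df).fit()
print(sm.stats.anova_lm(model, typ=3))
```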
Figure 4-8. Plot of interaction between levels of language proficiency and instructors' language background on participants' perceptions.
CHAPTER FIVE

DISCUSSION, IMPLICATIONS, RECOMMENDATIONS, FUTURE RESEARCH, LIMITATIONS, AND CONCLUSION
This study examined the effectiveness of asynchronous embedded audio feedback for EFL students. Even though several studies have examined the effectiveness of audio feedback in both face-to-face and online environments, limited research has been conducted on the effects of audio feedback on EFL students in asynchronous online environments. There is still a lack of evidence on whether audio feedback, specifically embedded audio feedback, can be an effective technique to promote EFL students' higher-order thinking when they participate in asynchronous online discussions. Therefore, the purpose of this study was to examine the effect of asynchronous embedded audio feedback on EFL students' higher-order learning as well as their perceptions of the feedback delivery methods (audio feedback versus text-based feedback). The study examined the changes in the quality of EFL students' weekly discussion postings over time and their perceptions of the type of feedback delivery method by the language background of the instructor who provided the feedback (NNEST versus NEST) and by the students' level of English language proficiency (high versus low). A quasi-experiment with one within-subjects and two between-subjects main factors was conducted. The participants were 69 EFL volunteers from the International Relations Program at North-Eastern Federal University in Russia enrolled in an asynchronous online course delivered in English. The independent variables were the level of the participants' language proficiency (high versus low), the type of feedback delivery method (audio versus text), and the language background of the instructors who provided feedback (NNEST versus NEST). The dependent variables were the scores on the quality of weekly discussion postings and the participants' perceptions of the type of feedback (audio versus text).
Guided by social constructivist pedagogical theory, this study found evidence that instructional feedback plays an important role in helping EFL students construct their own reality or knowledge (Driscoll 1999, 409; Jonassen 1994, 37). Feedback in a constructivist learning environment such as asynchronous online discussions can therefore enhance EFL students' higher-order learning when they engage in dialogue and interact verbally with others to construct meaning (Mory 2004, 745; Pear and Crone-Todd 2002, 221).
Research Questions One and Two

The first two research questions were, "Is there a significant difference in scores on the quality of weekly discussion postings by type of feedback delivery method, instructor's language background, and/or student's level of language proficiency?" and "Is there any interaction effect between the type of feedback delivery method, instructor's language background, and/or student's level of language proficiency on the scores on the quality of weekly discussion postings?" In order to answer the first two questions, all volunteers from both groups (NNEST and NEST) at the high and low levels of language proficiency participated in a six-week online course in which they discussed weekly articles provided in English. The participants read the articles and answered the weekly questions by sending 1-2 online discussion postings in English. The weekly online postings were then evaluated at the pretest and the two posttests using a 4-point scoring rubric during the six weeks of the experiment (Ertmer and Stepich 2004, under "Learning Outcomes"). The scoring rubric measured the quality of the participants' weekly online postings: the timeliness and quantity of discussion responses, the responsiveness to discussion topics, the demonstration of knowledge and understanding gained from assigned readings, and the ability of postings to move the discussion forward. The data from the participants' online discussion postings were analyzed using a mixed-effect ANOVA. First, the data were analyzed with the three time points (pretest, posttest 1, and posttest 2) as a within factor and the instructor's language background and the participants' language proficiency level as between factors for each feedback delivery method separately. Then, the mixed-effect ANOVA was run with the two feedback delivery methods as a within factor and the instructor's language background and the participants' language proficiency level as between factors.
The results of the study showed the effectiveness of both types of feedback in asynchronous online discussions for the EFL students, but the effectiveness may vary by instructor and by students' level of language proficiency. This study also showed some evidence of an interaction between audio feedback and the level of language proficiency. Further, this study found evidence that EFL students at the high and low levels of language proficiency can reach a higher level in the quality of online postings when they receive both types of feedback from the NNEST and the NEST. Overall, the quality of the students' online postings during the six-week quasi-experimental study averaged 2.80 on the 4-point scoring rubric at the end of the online course, compared to 1.60 at the beginning of the course. The findings support previous studies showing that asynchronous online discussions can promote higher-order learning among EFL students when they receive constructive instructional feedback with guidance and critical reflection (Biesenbach-Lucas 2003, 39; Swan 2003, 25). However, this study also found evidence that a low level of language proficiency can prevent EFL students from participating in online courses provided in English because of their linguistic limitations or language barriers (Biesenbach-Lucas 2003, 25-26; Gunawardena and McIsaac 2004, 384-85). In this study, of the total of 15 students who did not complete the course, 13 were at the low level of English proficiency. Further analysis found a significant interaction effect between the change in the scores and the levels of language proficiency when the EFL students received audio feedback. Therefore, the findings support evidence that language skill seems to be one of the most determining factors for participation in an asynchronous online discussion (Black 2005, 10; Zhang and Kenny 2010, 27). This finding is consistent with those of other researchers who examined EFL students' language proficiency in online environments, explaining that EFL students at a low level of language proficiency might have problems understanding audio feedback because of their linguistic limitations (Biesenbach-Lucas 2003, 39; Yang and Richardson 2011b, 75; Shih and Cifuentes 2003, 88). The findings of this study revealed significant differences between the NNEST's and NEST's groups when embedded audio feedback was delivered. In addition, there was a significant interaction effect between the level of language proficiency and embedded audio feedback on the quality of online postings. The descriptive analysis showed that the NEST's group scored consistently lower than the NNEST's group over the three time periods when receiving audio feedback. In addition, the quality of online postings among the participants in the NNEST's group averaged 3.00 on the 4-point scoring rubric at the end of the online course. According to the
scoring rubric, three points were assigned when students incorporated the weekly readings into the discussion as related to the topic, provided concrete examples from their own experience, and described specific implications in the postings for which they had received audio feedback. The differences between the instructors' groups could be explained by the shared ethnicity, native language, and cultural background of the participants and the NNEST (Olesova et al. 2011a, 30). It seems that the participants in the NNEST's group could understand and comprehend their instructor better than the participants in the NEST's group could theirs. According to the researcher's observations, the audio feedback provided by the NNEST was longer, with more in-depth details on how the participants could contribute to the online discussion, compared to the audio feedback from the NEST. Nevertheless, the quality of audio feedback from both instructors was comparable because both used feedback within the constructivist philosophy; the two instructors negotiated how to provide the weekly feedback and what it needed to contain (Jonassen 1991, 7). Thus, it is assumed that some other factors could have affected the quality of online postings, specifically when the participants received audio feedback. The experimental study required submission of one initial posting and at least two responses to others; however, additional responses did not affect the overall weekly score, so the participants were not encouraged to complete additional postings and did not receive a higher grade for them. Next, the low quality of the online postings at the low level of language proficiency could relate to unknown vocabulary (e.g., the idiom "the line in the sand"), the unfamiliar realities described in the texts, particularly for those who had never been abroad (the American system of education and the job employment process), insufficient working experience in the business area to reflect on the weekly questions using their own examples, and the type of questions asked (e.g., "would you forgo financial aid (if you're on the fence) if you think it could boost your child's admissions chances?"). Finally, the reason why the low level of language proficiency scored lower when receiving audio feedback could relate to the scoring rubric requirements. The rubric required postings to be well distributed throughout the week, but some participants at the low level of language proficiency submitted only one online posting, which decreased their overall weekly score. In summary, audio feedback and text-based feedback can promote higher-order thinking among EFL students when they participate in asynchronous online discussions. The findings revealed that the NNEST's and NEST's groups differed when the participants received
embedded audio feedback. There was a significant interaction effect between the level of language proficiency and audio feedback on the quality of online postings. This finding indicates that the level of language proficiency plays an important role in increasing the quality of online postings when providing audio feedback to EFL students.
Research Questions Three and Four

The last two research questions were, "Is there a significant difference in scores on perceptions of the type of feedback delivery method by instructor's language background and/or student's level of language proficiency?" and "Is there any interaction effect between instructor's language background and student's level of language proficiency on scores on perceptions of the type of feedback delivery method?" In order to answer these questions, participants' responses on the audio feedback survey were collected and analyzed. The study originally intended to collect survey data before and after the quasi-experiment to examine differences in the participants' perceptions of the feedback delivery methods (audio versus text). However, because of the low return rate on the audio feedback survey at the beginning of the course (n=19), this section discusses the major findings only from the audio feedback survey (Ice 2008) collected at the end of the online course (n=55). The participants' responses were scored using the seven-item, 5-point Likert scale (1=Strongly Disagree to 5=Strongly Agree) survey developed by Ice (2008) to examine students' responses to audio and text-based feedback. The survey asked about perceptions of audio and text-based feedback in relation to clarity of instructional intent, clarity of instructional voice, motivation, retention, personalization, feeling of involvement, and feeling of instructional care. First, the responses were analyzed descriptively using frequencies. Then, an independent t-test was used to examine the differences between the groups (NNEST versus NEST) at the high and low levels of language proficiency. Finally, a two-way between-groups ANOVA was run to examine the main and interaction effects of the independent variables, instructors' language background (NNEST versus NEST) and participants' level of language proficiency (high versus low), on perceptions of audio feedback and text-based feedback. The findings from the survey overall corroborated previous studies: the EFL students in this study also preferred receiving asynchronous embedded audio feedback over text-based comments (M=3.63, SD=0.94), but their perceptions may vary by level of language proficiency (high and low) and instructors' language background (NNEST and NEST) (Huang 2000, 228; Ice et al. 2007, 3; Ice 2008; Ice et al. 2008, under "Analysis and Conclusions"). The findings by instructors' language background and participants' level of language proficiency also demonstrated that overall the participants in the NEST's group (M=3.57, SD=0.94) and the NNEST's group (M=3.70, SD=0.81), at the high level of language proficiency (M=3.63, SD=0.88) and the low level of language proficiency (M=3.63, SD=0.90), preferred receiving audio feedback over text-based comments. Furthermore, the descriptive analysis of participants' responses showed that the participants in the NNEST's group rated their preference for audio feedback over text-based comments higher than the participants in the NEST's group. Interestingly, the survey responses provided an unexpected finding regarding the participants at the low level of language proficiency. Despite course dropout and non-participation, the participants at the low level of language proficiency rated their preference for receiving audio feedback over text-based comments similarly to the participants at the high level of language proficiency. In addition, the questions on whether audio feedback motivated them and whether they retained audio comments better than text-based feedback received the lowest scores consistently across the NNEST's and NEST's groups at both levels of language proficiency. Although no statistical differences were revealed between the NNEST's and NEST's groups or the levels of language proficiency in perceptions of the type of feedback delivery method among EFL students, it appears, as is argued in the publication Effective Practice in a Digital Age (2009), that embedded audio feedback was perceived positively by EFL students in both groups (NNEST and NEST) and at both levels of language proficiency (high and low). The remainder of this chapter discusses the results by the instructors' language background and participants' level of language proficiency for each group of survey items, i.e., clarity, involvement, motivation, retention, personalization, and instructor's care.
Clarity

The majority in both groups (NNEST and NEST) at the high and low levels of language proficiency rated both survey items on clarity positively. This finding supports previous studies on audio feedback suggesting that EFL students prefer audio comments because they are clearer and more understandable than written comments (Boswood and Dwyer 1995; Carson and McTasney 1978; Clark 1981; Cuthrell, Fogarty, and Anderson 2009; Farnsworth 1974; Ice 2008; Merry and Orsmond 2007; Nortcliffe and Middleton 2008; Orsmond, Merry, and Reiling 2005; Roberts 2008; Rotheram 2007; Sommers 2002). These results support recent literature suggesting that students preferred asynchronous audio feedback to traditional text-based comments because of its nuance and clarity of meaning, which is very important for communication in an asynchronous online environment (Ice et al. 2007, 3; Rodway-Dyer, Dunne, and Newcombe 2009, 68; Swan 2003, 25). These findings are also consistent with recent literature on audio feedback indicating that students preferred audio feedback for better understanding of how to improve their work and for providing greater detail than written comments (Huang 2000, 228; Ice 2008, under "slide 16"; Morra and Asís 2009, 77; Oomen-Early et al. 2008; Orsmond, Merry, and Reiling 2005, 370). Therefore, audio feedback can be an effective technique in asynchronous online environments when provided by either an NNEST or a NEST and for both levels of language proficiency. In this sense, audio feedback can overcome the lack of clarity in text-based communication among EFL students, especially when they communicate with an instructor who is a native speaker of English (Ice et al. 2007, 19). However, findings from both clarity items (item #1 and item #2) also showed that the students in the NEST's group rated both items lower than the NNEST's group. The lower ratings in the NEST's group corroborate findings from previous studies (Zhang and Kenny 2010, 17) indicating that EFL students might experience problems understanding an instructor who is a native speaker of English. Interestingly, the participants at the low level of language proficiency rated the item on the clarity of the instructor's intent (item #2) higher than the participants at the high level of language proficiency. One possible reason is that audio feedback can help EFL students overcome the drawbacks of text-based feedback by providing clarity of meaning, especially at the low level of language proficiency. These findings can be compared to Price and Holman's (1996) investigation of minority Hispanic students in the U.S., which found that Spanish-speaking bilingual students responded more enthusiastically to taped feedback than Anglo students did.
Involvement, Personalization and Instructor's Care

The results of this study showed that the majority of the EFL students preferred audio feedback because it is personal and makes them feel more involved in the online course. The findings support the literature indicating that audio feedback can increase students' feelings of involvement in online courses and provide more personalized communication with their instructors. Students prefer audio feedback because it is more personal than text-based comments, and audio feedback can increase the feeling of the instructor's concern for the students (Anson 1997; France and Wheeler 2007; Harris 1970; Hsu, Wang, and Comac 2008, 192; Ice et al. 2007; Ice et al. 2008; McGrew 1969; Moore 1977; Oomen-Early et al. 2008; Orsmond, Merry, and Reiling 2005; Roberts 2008; Sipple 2007; Sommers 1989). Interestingly, about 70% of the participants at the low level of language proficiency rated the item on instructor's care (item #7) much higher than the participants at the high level of language proficiency. It seems, then, that audio feedback could be effective for EFL students at the low level of language proficiency in creating a feeling of the instructor's care.
Motivation and Retention

Overall, the EFL students rated the motivation and retention items positively; the findings may support previous studies revealing that audio feedback helped to enhance students' motivation and retention (Cryer and Kaikumba 1987; Huang 2000; Ice 2008; Oomen-Early et al. 2008; Orsmond, Merry, and Reiling 2005; Pearce and Ackley 1995; Sipple 2007; Yarbro and Angevine 1982). However, these items scored consistently lower than the other survey items across both instructors and both language proficiency levels. Surprisingly, almost half of the participants at the low level of language proficiency had a slightly more positive perception than the participants at the high level that they retained audio comments (item #5) better than text-based comments. In addition, the EFL students in the NEST's group perceived greater motivation and retention than those in the NNEST's group. These findings are consistent with previous studies revealing that EFL students perceived audio feedback provided by a native speaker of English as reinforcing their assignment with an authentic listening exercise, which created an extra motivating factor (Boswood and Dwyer 1995, 54; Johanson 1999, 33; Hsu, Wang, and Comac 2008, 192). It could be assumed that the participants in the NEST's group used the audio feedback from their instructor, a native speaker of English, as an additional instructional tool to develop their listening skills. To summarize, even though the groups (NNEST and NEST) and the levels of language proficiency were not significantly different in their perceptions of the types of feedback, the findings suggest that the level of language proficiency and instructors' language background can have an impact on the perceived effectiveness and perceptions of the technique during the online course.
Implications and Recommendations

The results of the study suggest several pedagogical implications. Asynchronous embedded audio feedback and text-based feedback provided to EFL students in an online environment demonstrated their effectiveness in promoting EFL students' higher-order thinking. The results found evidence that EFL students receiving audio feedback and text-based feedback are able to achieve higher levels of critical thinking, i.e., incorporating weekly readings into their online postings, using personal experience, and suggesting possible implications of problems. Furthermore, the EFL students had positive feelings of engagement in the course and of the instructor's care when receiving audio feedback compared to text-based feedback. This implies that audio feedback allows instructors to help EFL students construct their own solutions to problems by interacting with others and learning from them, because audio feedback occurs in the form of discussion among learners and through a comparison of internally structured knowledge (Mory 2004, 772). In addition, audio feedback can provide clearer and more personal feedback than text-based feedback. The finding on the perceived feeling of involvement thus implies that using audio feedback with EFL students can reinforce the sense of "being there" and remove transactional distance when teaching and learning occur in separate locations (Moore 2007, 89). Even though a low level of language proficiency hindered participation in the asynchronous online discussion, this study revealed unexpected results for the EFL students at the low level of language proficiency. The technique was initially effective for them: at the first posttest the students at the low level of language proficiency (M = 2.40) outperformed the students at the high level of language proficiency (M = 2.34). However, their quality decreased by the second posttest, meaning that in the future, students at a low level of language proficiency might need more guidance and help from their instructors to achieve a higher level of critical thinking. This implies that students at a low level of language proficiency may need more individualized feedback, more thorough instruction, and more time to adapt to the technique. At the same time, the EFL students at the low level of language proficiency perceived audio feedback more positively for its clarity, better retention, personalization, and the instructor's care compared to text-based feedback. This implies that the technique can assist EFL students at low levels of language proficiency in their participation in an asynchronous online discussion. Thus, the following pedagogical implications can be used by instructors providing audio feedback in an asynchronous online environment for EFL students. First, audio feedback and text-based feedback provided by an NNEST or a NEST can help EFL students develop higher-order thinking when they participate in an online discussion; instructors are encouraged to provide both types of feedback. It would be reasonable to start with text-based feedback and give EFL students more time to adapt to audio feedback; step-by-step implementation of audio feedback could help EFL students at the low level of language proficiency in their learning. Next, when receiving audio feedback, EFL students can benefit from an NNEST with the same native language, ethnicity, and cultural background because of the instructor's familiar accent and structure of English (e.g., the English used by the NNEST in this study was based on the English taught in Russia). Then, when providing audio feedback to EFL students, NESTs can help increase the students' motivation, retention, and perceived feelings of the instructor's care. Both NNESTs and NESTs need to be careful when providing audio feedback in the form of discussion to EFL students: the speed of the feedback should be normal, the wording should be clear, and unknown vocabulary should be avoided. This can help EFL students internalize feedback better in order to transfer learning to a higher level of thinking. Finally, the length of audio feedback is important: it should not be long, the audio file size should be small, and a mono recording should be used where possible. This will allow students to download the file easily when the Internet connection is slow. Instructors who intend to provide audio feedback for EFL students are recommended to keep the audio feedback short, with direct comments on the major points of the student's online posting (e.g., how the student can improve the posting by providing more examples).
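As an illustration of the file-size recommendation, the following is a minimal sketch using the pydub library, a common Python audio wrapper; the file names and bitrate are assumptions for illustration, not the study's actual setup:

```python
from pydub import AudioSegment  # requires ffmpeg to be installed

# Load a recorded feedback file (the file name is hypothetical).
feedback = AudioSegment.from_file("feedback_week3.wav")

# Downmix to mono and export at a modest bitrate so the file stays
# small enough to download over a slow Internet connection.
feedback.set_channels(1).export("feedback_week3.mp3",
                                format="mp3", bitrate="48k")
```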
Future Research

The results of this study revealed that the level of language proficiency played an important role in helping the EFL students achieve higher-order thinking when they received audio feedback. Therefore, more research needs to be conducted to investigate the relationship between EFL students' level of language proficiency and the effect of audio feedback, because no previous research has addressed the relationship between language proficiency level and the impact of audio comments. Also, the findings that the EFL students at the low level of language proficiency rated the motivation and involvement items lower but the retention item much higher than the students at the high level of language proficiency need further investigation, to examine how a student's level of language proficiency shapes perceived motivation, retention, and involvement when receiving audio feedback in an asynchronous online environment. Further research is needed to investigate other types of feedback (e.g., combined written and audio feedback) to determine the value of text-based feedback and audio feedback for EFL students (Ice et al. 2010, 115). Other formats for providing audio feedback should also be explored. For example, Camtasia, a screen-capture program, could be used to record audio feedback, because it allows video recording of typing and cursor movement while the feedback is being recorded. In addition, more research is needed to examine the effectiveness of the technique across other disciplines and other categories of EFL students (e.g., graduate students) in order to apply the findings to larger populations of EFL students. Finally, further research is needed using instructors with diverse backgrounds, i.e., different native languages and ethnicities, to examine whether the results vary across NNESTs.
Limitations

Some limitations affected the outcomes of this study. First, the sample was not random. The participants were drawn from only one program, and the lack of variability among participants can make it difficult to apply the findings to other disciplines and institutions. Also, the small sample size limited the examination of the interaction effects among the independent variables. Additionally, some participants needed extra time to adapt to the online environment and the threaded discussions in Blackboard, especially those with little previous online learning experience. During out-of-lab time, some participants did not have enough time to read and reflect on the experimental online postings because of limited computer access and the high cost of an Internet connection at a cyber-cafe. Finally, the low interaction in the online discussion possibly decreased the participants' motivation and feeling of engagement in carrying on discussions over the six weeks.
Conclusion

Despite several limitations, this study's findings have important implications for moving the investigation of audio feedback effectiveness for EFL students forward and for identifying best practices in asynchronous online environments. This study suggests the importance of understanding how EFL students' level of language proficiency and the instructors' language background can impact the quality of postings in online courses. This is especially true when communication is conducted in English, specifically when students receive audio comments recorded in English. Nevertheless, this study suggests that embedded audio feedback provided to EFL students can be viewed as an effective technique to enhance higher-order thinking and to increase the perceived effectiveness of the technique in an asynchronous online environment.
REFERENCES
Anderson, Terry, Liam Rourke, D. Randy Garrison, and Walter Archer. "Assessing Teacher Presence in a Computer Conferencing Context." Journal of Asynchronous Learning Networks 5, no. 2 (2001): 1-17.
Anson, Chris M. "In our Own Voices: Using Recorded Commentary to Respond to Writing." In Writing to Learn: Strategies for Assigning and Responding to Writing across the Disciplines, edited by Mary Deane Sorcinelli and Peter Elbow, 105-13. San Francisco: Jossey-Bass Publishers, 1997.
Arbaugh, J. B., Martha Cleveland-Innes, Sebastian R. Diaz, D. Randy Garrison, Philip Ice, Jennifer C. Richardson, and Karen P. Swan. "Developing a Community of Inquiry Instrument: Testing a Measure of the Community of Inquiry Framework Using a Multi-Institutional Sample." The Internet and Higher Education 11 (2008): 133-36.
Arbaugh, Ben, and Steven Hornik. "Do Chickering and Gamson's Seven Principles also Apply to Online MBAs?" The Journal of Educators Online 3 (2006): 1-18.
Árva, V., and P. Medgyes. "Native and Non-Native Teachers in the Classroom." System 28 (2000): 355-72.
Ary, Donald, Lucy Cheser Jacobs, Chris Sorensen, and Asghar Razavieh. Introduction to Research in Education. California: Thomson Wadsworth, 2006.
Ashwell, Tim. "Patterns of Teacher Response to Student Writing in a Multiple-Draft Composition Classroom: Is Content Feedback Followed by Form Feedback the Best Method?" Journal of Second Language Writing 9 (2000): 227-57.
Berner, Audrey, William Boswell, and Nancie Kahan. "Using the Tape Recorder to Respond to Student Writing." In Effective Teaching and Learning of Writing, edited by Gert Rijlaarsdam, Huub van den Bergh, and Michel Couzijn, 339-57. Amsterdam: Amsterdam University Press, 1996.
Biesenbach-Lucas, Sigrun. "Asynchronous Discussion Groups in Teacher Training Classes: Perceptions of Native and Non-Native Students." Journal of Asynchronous Learning Networks 7, no. 3 (2003): 24-46.
Birch, Dawn, and Michael Volkov. "Assessment of Online Reflections: Engaging English Second Language (ESL) Students." Australian Journal of Educational Technology 23, no. 3 (2007): 291-306.
Bitchener, John. "Measuring the Effectiveness of Written Corrective Feedback: A Response to 'Overgeneralization From a Narrow Focus: A Response to Bitchener (2008).'" Journal of Second Language Writing 18 (2009): 276-79.
Bitchener, John, and Ute Knoch. "The Relative Effectiveness of Different Types of Direct Written Feedback." System 37, no. 2 (2009a): 322-29.
Bitchener, John, and Ute Knoch. "The Value of a Focused Approach to Written Corrective Feedback." ELT Journal 63, no. 3 (2009b): 204-11.
Bitchener, John, and Ute Knoch. "The Value of Written Corrective Feedback for Migrant and International Students." Language Teaching Research 12 (2008): 409-31.
Bitchener, John, Stuart Young, and Denise Cameron. "The Effect of Different Types of Corrective Feedback on ESL Student Writing." Journal of Second Language Writing 14 (2005): 191-205.
Blau, Susan, and John Hall. "Guilt-Free Tutoring: Rethinking How We Tutor Non-Native English Speaking Students." Writing Center Journal 23, no. 1 (2002): 23-44.
Black, Alison. "The Use of Asynchronous Discussion: Creating a Text of Talk." Contemporary Issues in Technology and Teacher Education 5, no. 1 (2005): 5-24.
Bonnel, Wanda. "Improving Feedback to Students in Online Courses." Nursing Education Perspectives 29, no. 5 (2008): 290-94.
Boswood, Tim, and Robert Dwyer. "From Marking to Feedback: Audio-Taped Response to Student Writing." TESOL Journal 5, no. 2 (1995): 49-56.
Braine, George. Nonnative Educators in English Language Teaching. Mahwah, NJ: Lawrence Erlbaum, 1999.
—. "The Nonnative English Speaking Professionals' Movement and its Research Foundations." In Learning and Teaching from Experience: Perspectives on Nonnative English-Speaking Professionals, edited by Lía D. Kamhi-Stein, 9-24. Michigan: The University of Michigan Press, 2004.
Butler, Yuki G. "How are Nonnative-English-Speaking Teachers Perceived by Young Learners?" TESOL Quarterly 41 (2007): 731-55.
Cargile, Aaron Castelan. "Attitudes towards Chinese-Accented Speech: An Investigation in Two Contexts." Journal of Language and Social Psychology 16 (1997): 434-43.
Carless, David. "Differing Perceptions in the Feedback Process." Studies in Higher Education 31 (2006): 219-33.
Carson, David L., and John B. McTasney. "Grading Technical Reports with the Cassette Tape Recorder: The Results of a Test Program at the United States Air Force Academy." In Directions in Technical Writing and Communication, edited by Jay Reid Gould, 107-120. Farmingdale: Baywood Publishing Co, 1978.
Chandler, Jean. "The Efficacy of Various Kinds of Error Feedback for Improvement in the Accuracy and Fluency of L2 Student Writing." Journal of Second Language Writing 12 (2003): 267-96.
Chickering, Arthur W., and Zelda F. Gamson. "Seven Principles for Good Practice in Undergraduate Education." The Wingspread Journal 9, no. 2, special insert. Reprinted from AAHE Bulletin 39, no. 7 (1987): 3-7.
Chiu, Chi-Yen, and Sandra J. Savignon. "Writing to Mean: Computer-Mediated Feedback in Online Tutoring of Multidraft Compositions." CALICO Journal 24, no. 1 (2006): 97-114.
Chomsky, Noam. Aspects of the Theory of Syntax. Cambridge, MA: MIT Press, 1965.
Cifuentes, Lauren, and Yu-Chih Doris Shih. "Teaching and Learning Online: A Collaboration between United States and Taiwanese Students." Journal of Research on Technology in Education 33, no. 4 (2001): 456-74.
Clark, Irene Lurkis. "Audiotapes and the Basic Writer: A Selected Survey of Useful Materials." Teaching English in the Two-Year College 12 (1985): 120-29.
Clark, Thomas David. "Cassette Tapes: An Answer to the Grading Dilemma." The American Business Communication Association Bulletin 44, no. 2 (1981): 40-41.
Cohen, Andrew D., and Margaret Robbins. "Toward Assessing Interlanguage Performance: The Relationship between Selected Errors, Learners' Characteristics, and Learners' Explanations." Language Learning 26 (1976): 45-66.
Coleman, Virginia Brown. "A Comparison between the Relative Effectiveness of Marginal-Interlinear-Terminal Commentary and of Audio-Taped Commentary in Responding to English Composition." PhD diss., University of Pittsburgh, 1972.
Connor, Ulla, and Karen Asenavage. "Peer Response Groups in ESL Writing Classes: How Much Impact on Revision?" Journal of Second Language Writing 3, no. 3 (1994): 257-76.
Cook, Vivian. "Going Beyond the Native Speaker in Language Teaching." TESOL Quarterly 33, no. 2 (1999): 185-209.
Crone-Todd, Darlene E., and Joseph J. Pear. "Application of Bloom's Taxonomy to PSI." The Behavior Analyst Today 2, no. 3 (2001): 204-10.
Crone-Todd, Darlene E., Joseph J. Pear, and Cynthia N. Read. "Operational Definitions of Higher-Order Thinking Objectives at the Post-Secondary Level." Academic Exchange Quarterly 4, no. 3 (2000): 99-106.
Cryer, Patricia, and Nemeta Kaikumba. "Audio-Cassette Tape as a Means of Giving Feedback on Written Work." Assessment and Evaluation in Higher Education 12 (1987): 148-53.
Crystal, David. English as a Global Language. Cambridge: Cambridge University Press, 2003.
Cuthrell, Kristen, Elizabeth A. Fogarty, and Patricia J. Anderson. "'Is this Thing on?' University Student Preferences Regarding Audio Feedback." In Proceedings of Society for Information Technology & Teacher Education International Conference 2009, edited by Ian Gibson, Roberta Weber, Karen McFerrin, Roger Carlsen, and Dee Anna Willis, 32-35. Chesapeake, VA: AACE, 2009.
de Oliveira, Luciana, and Sally Richardson. "Collaboration between Native and Nonnative English-Speaking Educators." In Learning and Teaching from Experience, edited by Lía D. Kamhi-Stein, 294-306. Michigan: The University of Michigan Press, 2004.
—. "Strategies for Nonnative-English-Speaking Teachers' Continued Development as Professionals." TESOL Journal 2, no. 2 (2011): 229-38. doi: 10.5054/tj.2011.251476.
Driscoll, Marcy P. Psychology of Learning for Instruction. Toronto: Allyn and Bacon, 1999.
Ellis, Rod. "A Typology of Written Corrective Feedback Types." ELT Journal 63 (2009): 97-107.
Ellis, Rod, Younghee Sheen, Mihoko Murakami, and Hide Takashima. "The Effects of Focused and Unfocused Written Corrective Feedback in an English as a Foreign Language Context." System 36 (2008): 353-71.
Ertmer, Peggy A., Jennifer C. Richardson, Brian Belland, Denise Camin, Patrick Connolly, Glen Coulthard, Kimfong Lei, and Christopher Mong. "Using Peer Feedback to Enhance the Quality of Student Online Postings: An Exploratory Study." Journal of Computer-Mediated Communication 12 (2007): 412-33.
Ertmer, Peggy A., and Donald A. Stepich. “Examining the Relationship between Higher-Order Learning and Students’ Perceived Sense of Community in an Online Learning Environment.” Paper presented at the 10th Australian World Wide Web Conference, Gold Coast, Australia, July 3-7, 2004.
Farnsworth, Maryruth Bracy. “The Cassette Tape Recorder: A Bonus or a Bother in ESL Composition Correction.” TESOL Quarterly 8, no. 3 (1974): 285-91.
Fathman, Ann K., and Elizabeth Whalley. “Teacher Response to Student Writing: Focus on Form versus Content.” In Second Language Writing: Research Insights for the Classroom, edited by Barbara Kroll, 178-90. Cambridge: Cambridge University Press, 1990.
Ferris, Dana R. Treatment of Error in Second Language Student Writing. Ann Arbor, MI: University of Michigan Press, 2002.
—. Response to Student Writing. Mahwah, NJ: Lawrence Erlbaum, 2003.
—. “Does Error Feedback Help Student Writers? New Evidence on the Short- and Long-Term Effects of Written Error Correction.” In Feedback in Second Language Writing: Contexts and Issues, edited by Ken Hyland and Fiona Hyland, 81-104. New York, NY: Cambridge University Press, 2006.
Ferris, Dana R., and John Hedgcock. Teaching ESL Composition: Purpose, Process and Practice. Mahwah, NJ: Lawrence Erlbaum Associates, 1998.
Ferris, Dana R., and Barrie Roberts. “Error Feedback in L2 Writing Classes: How Explicit Does It Need to Be?” Journal of Second Language Writing 10 (2001): 161-84.
France, Derek, and Anne Wheeler. “Reflections on Using Podcasting for Student Feedback.” Planet 18 (2007). http://www.gees.ac.uk/planet/p18/df2.pdf
Gascoigne, Carolyn. “Examining the Effect of Feedback in Beginning L2 Composition.” Foreign Language Annals 37, no. 1 (2004): 71-76.
Gibbs, Graham, and Claire Simpson. “Conditions Under Which Assessment Supports Students’ Learning.” Learning and Teaching in Higher Education 1 (2004-05): 3-31.
Guénette, Danielle. “Is Feedback Pedagogically Correct? Research Design Issues in Studies of Feedback on Writing.” Journal of Second Language Writing 16 (2007): 40-53.
Gunawardena, Charlotte N., and Deborah LaPointe. “Cultural Dynamics and Online Learning.” In Handbook of Distance Education, edited by Michael Grahame Moore, 593-607. Mahwah, NJ: Lawrence Erlbaum, 2007.
Gunawardena, Charlotte Nirmalani, and Marina Stock McIsaac. “Distance Education.” In Handbook of Research on Educational Communications and Technology, edited by David H. Jonassen, 355-95. Mahwah, NJ: Lawrence Erlbaum, 2004.
Gunawardena, Charlotte N., Ana C. Nolla, Penne L. Wilson, José R. Lopez-Islas, Noemi Ramírez-Angel, and Rosa M. Megchun-Alpízar. “A Cross-Cultural Study of Group Process and Development in Online Conferences.” Distance Education 22, no. 1 (2001): 85-121.
Harris, John S. “The Use of the Tape Recorder in Grading.” Brigham Young University Mediated Learning Systems MLS Newsletter 2, no. 3 (1970): 1-4.
Hattie, John, and Helen Timperley. “The Power of Feedback.” Review of Educational Research 77, no. 1 (2007): 81-112.
Hays, Janice. “Play it Again, Sandra: The Use of Tape Cassettes to Evaluate Student Compositions.” Paper presented at the annual meeting of the Conference on College Composition and Communication, Denver, Colorado, March 30-April 1, 1978.
Higgins, Richard. “‘Be More Critical’: Rethinking Assessment Feedback.” Paper presented at the British Educational Research Association Conference, Cardiff University, September 7-10, 2000.
Hill, Denise. “The Use of Podcasts in the Delivery of Feedback to Dissertation Students.” Accessed October 1, 2012. http://www.heacademy.ac.uk/assets/hlst/documents/case_studies/case123_apr08_podcasts_feedback_dissertation_students.pdf.
Hsu, Hui-Yin, Shiang-Kwei Wang, and Linda Comac. “Using Audioblogs to Assist English-Language Learning: An Investigation into Student Perception.” Computer Assisted Language Learning 21, no. 2 (2008): 181-98.
Huang, Su-yueh. “A Quantitative Analysis of Audiotaped and Written Feedback Produced for Students’ Writing and Students’ Perceptions of the Two Feedback Methods.” Tunghai Journal 41 (2000): 199-232.
Hunt, Alan J. “Taped Comments and Student Writing.” Teaching English in the Two-Year College 16, no. 4 (1989): 269-73.
Hunt, Russell A. “Technological Gift-Horse: Some Reflections on the Teeth of Cassette-Marking.” College English 36 (1975): 581-85.
Hurst, C. J. “Cassette Grading Improves Student Report Writing.” Engineering Education 65 (1975): 429-30.
Hyland, Ken. “Providing Productive Feedback.” ELT Journal 44, no. 4 (1990): 279-85.
Hyland, Ken, and Fiona Hyland, eds. Feedback in Second Language Writing: Contexts and Issues. New York, NY: Cambridge University Press, 2006.
Ice, Phil. “The Impact of Asynchronous Audio Feedback on Teaching, Social and Cognitive Presence.” Paper presented at the First International Conference of the Canadian Network for Innovation in Education, Banff, Alberta, Canada, April 27-30, 2008.
Ice, Phil, and Jennifer Richardson. “Optimizing Feedback in Online Courses: An Overview of Strategies and Research.” Paper presented at the 5th International Scientific Conference on E-Learning and Software for Education (eLSE), Bucharest, Romania, April 9-10, 2009.
Ice, Philip, Reagan Curtis, Perry Phillips, and John Wells. “Using Asynchronous Audio Feedback to Enhance Teaching Presence and Students’ Sense of Community.” Journal of Asynchronous Learning Networks 11, no. 2 (2007): 3-25.
Ice, Phil, Karen Swan, Sebastian Diaz, Lori Kupczynski, and Allison Swan-Dagen. “An Analysis of Students’ Perceptions of the Value and Efficacy of Instructors’ Auditory and Text-Based Feedback Modalities Across Multiple Conceptual Levels.” Journal of Educational Computing Research 43, no. 1 (2010): 113-34.
Ice, Phil, Karen Swan, Lori Kupczynski, and Jennifer C. Richardson. “The Impact of Asynchronous Audio Feedback on Teaching and Social Presence: A Survey of Current Research.” In Proceedings of World Conference on Educational Multimedia, Hypermedia and Telecommunications 2008, edited by J. Luca and E. Weippl, 5646-49. Chesapeake, VA: AACE, 2008.
Jelfs, Anne, and Denise Whitelock. “The Notion of Presence in Virtual Environments: What Makes the Environment ‘Real’.” British Journal of Educational Technology 31, no. 2 (2000): 145-53.
Johanson, Robert. “Rethinking the Red Ink: Audio-Feedback in the ESL Writing Classroom.” Texas Papers in Foreign Language Education 4, no. 1 (1999): 31-38.
Johnson, R. Burke, and Larry B. Christensen. Educational Research: Quantitative, Qualitative, and Mixed Approaches. Thousand Oaks, CA: Sage, 2008.
Jonassen, David H. “Objectivism versus Constructivism: Do We Need a New Philosophical Paradigm?” Educational Technology Research and Development 39, no. 3 (1991): 5-14.
—. “Thinking Technology: Toward a Constructivist Design Model.” Educational Technology 34, no. 4 (1994): 34-37.
Kahrs, Karol Anne. “Cassette Tapes: A Medium for Personal Feedback and Learning.” The Physical Educator 31 (1974): 159-61.
Keller, Elizabeth. “Audio-Taped Critiques of Written Work.” The Second Draft: Bulletin of the Legal Writing Institute 14, no. 1 (1999): 13-14.
Kelly, Patrick, and Steve Ryan. “Using Tutor Tapes to Support the Distance Learner.” International Council for Distance Education Bulletin 3 (1983): 1-18.
Kirschner, Paul A., Henk van den Brink, and Marthie Meester. “Audiotape Feedback for Essays in Distance Education.” Innovative Higher Education 15, no. 2 (1991): 185-95.
Klammer, Enno. “Cassettes in the Classroom.” College English 35 (1973): 179-89.
Klose, Robert. “When the Red Pen Fails, Try Sending the Message on Tape.” Christian Science Monitor 91, no. 144 (1999): 14.
Lalande, John F. II. “Reducing Composition Errors: An Experiment.” Modern Language Journal 66 (1982): 140-49.
Lea, Mary R., and Brian V. Street. “Student Writing in Higher Education: An Academic Literacies Approach.” Studies in Higher Education 23, no. 2 (1998): 157-72.
Li, Hong, and Qingying Lin. “The Role of Revision and Teacher Feedback in a Chinese College Context.” Asian EFL Journal 9, no. 4 (2007): 230-39.
Ling, Cheung Yin, and George Braine. “The Attitudes of University Students Towards Non-Native Speakers English Teachers in Hong Kong.” RELC Journal 38, no. 3 (2007): 257-77.
Liu, Jun. “Nonnative English-Speaking Professionals in TESOL.” TESOL Quarterly 33, no. 1 (1999): 85-102.
—. “Chinese Graduate Teaching Assistants Teaching Freshman Composition to Native English-Speaking Students.” In Perceptions, Challenges and Contributions to the Profession, edited by Enric Llurda, 155-77. Springer, 2005.
—. “Complexities and Challenges in Training Nonnative English-Speaking Teachers: State of the Art.” CamTESOL Conference on English Language Teaching: Selected Papers 5 (2009): 1-8.
Logan, Henrietta L., Nelson S. Logan, James L. Fuller, and Gerald E. Denehy. “The Role of Audiotape Cassettes in Providing Student Feedback.” Educational Technology 16, no. 12 (1976): 38-39.
Lumsden, Robert. “Evanston, Illinois, Township High School Adds to its Program: The Use of Dictation Machines in Grading English Themes.” The Bulletin of the National Association of Secondary School Principals (1962): 223-26.
Lynch, Tony, and Joan Maclean. “Effects of Feedback on Performance: A Study of Advanced Learners on an ESP Speaking Course.” Edinburgh Working Papers in Applied Linguistics 12 (2003): 19-44.
Mahboob, Ahmar. “Status of Nonnative English-Speaking Teachers in the United States.” PhD diss., Indiana University, 2003.
McGarrell, Hedy, and Jeff Verbeem. “Motivating Revision of Drafts Through Formative Feedback.” ELT Journal 61, no. 3 (2007): 228-36.
McGrew, Jean B. “An Experiment to Assess the Effectiveness of the Dictation Machine as an Aid to Teachers in the Evaluation and Improvement of Student Compositions. Final Report.” 1969. ERIC Document Reproduction Service No. ED 034776.
Medgyes, Péter. “Native or Non-Native: Who’s Worth More?” ELT Journal 46, no. 4 (1992): 340-49.
Mellen, Cheryl, and Jeff Sommers. “Audiotaped Responses and the Two-Year-Campus Writing Classroom: The Two-Sided Desk, the ‘Guy with the Ax,’ and the Chirping Birds.” Teaching English in the Two-Year College 31, no. 1 (2003): 25-39.
Merry, Stephen, and Paul Orsmond. “Students’ Responses to Academic Feedback Provided via mp3 Audio Files.” In Proceedings of the Science Teaching and Learning Conference 2007, edited by Paul Chin, Katherine Clark, Susan Doyle, Peter Goodhew, Tracey Madden, S. Meskin, Tina Overton, and Jackie Wilson, 100-104. York: The Higher Education Academy, 2007.
Micklewright, Dominic. “Podcasting as an Alternative Mode of Assessment Feedback.” (2008). Accessed October 1, 2012. http://www.heacademy.ac.uk/assets/hlst/documents/case_studies/case129_-podcast_feedback.pdf.
Miller, David C. “The Audio Tape Cassette in Education.” Engineering Education 63, no. 6 (1973): 413-40.
Moore, Gary E. “Providing Instructional Feedback to Students in Education Classes.” 1977. ERIC Document Reproduction Service No. ED 173309.
Moore, Michael Grahame. “The Theory of Transactional Distance.” In Handbook of Distance Education, edited by Michael Grahame Moore, 89-105. Mahwah, NJ: Lawrence Erlbaum, 2007.
Morra, Anna María, and María Inés Asís. “The Effect of Audio and Written Teacher Responses on EFL Student Revision.” Journal of College Reading and Learning 39, no. 2 (2009): 68-81.
Mory, Edna Holland. “A New Perspective on Instructional Feedback: From Objectivism to Constructivism.” Paper presented at the annual meeting of the Association for Educational Communications and Technology, Anaheim, California, February 8-12, 1995.
—. “Feedback Research Revisited.” In Handbook of Research on Educational Communications and Technology, edited by David H. Jonassen, 745-83. Mahwah, NJ: Lawrence Erlbaum, 2004.
Moussu, Lucie, and Enric Llurda. “Non-Native English-Speaking English Language Teachers: History and Research.” Language Teaching 41 (2008): 315-48.
Moxley, Joseph M. “Responding to Student Writing: Goals, Methods, Alternatives.” Freshman English News (1989): 3-11.
Nakamaru, Sarah. “A Lot of Talk about Writing: Oral Feedback on International and US-Educated Multilingual Writers’ Texts.” PhD diss., New York University, 2008.
Nicol, David. “Increasing Success in First Year Courses: Assessment Re-Design, Self-Regulation and Learning Technologies.” In Proceedings of the 23rd Annual Ascilite Conference: Who’s Learning? Whose Technology?, edited by Lina Markauskaite, Peter Goodyear, and Peter Reimann, 589-98. The University of Sydney, 2006.
Nicol, David J., and Debra Macfarlane-Dick. “Formative Assessment and Self-Regulated Learning: A Model and Seven Principles of Good Feedback Practice.” Studies in Higher Education 31, no. 2 (2006): 199-216.
Nortcliffe, Anne, and Andrew Middleton. “A Three Year Case Study of Using Audio to Blend the Engineer’s Learning Environment.” Engineering Education: Journal of the Higher Education Academy Engineering Subject Centre 3, no. 2 (2008): 45-57.
Olesova, Larisa A., Jennifer C. Richardson, Donald Weasenforth, and Christine Meloni. “Using Asynchronous Instructional Audio Feedback in Online Environments: A Mixed Methods Study.” Journal of Online Learning and Teaching 7, no. 1 (2011a): 30-42.
Olesova, Larisa, Dazhi Yang, and Jennifer C. Richardson. “Cross-Cultural Differences in Undergraduate Students’ Perceptions of Online Barriers.” Journal of Asynchronous Learning Networks 15, no. 3 (2011b): 68-80.
Olsen, Gary A. “Beyond Evaluation: The Recorded Response to Essays.” Teaching English in the Two-Year College 8, no. 2 (1982): 121-23.
Oomen-Early, Jody, Mary Bold, Kristin L. Wiginton, Tara L. Gallien, and Nancy Anderson. “Using Asynchronous Audio Communication (AAC) in the Online Classroom: A Comparative Study.” Journal of Online Learning and Teaching 4, no. 3 (2008): 267-76.
Orsmond, Paul, Stephen Merry, and Kevin Reiling. “Biology Students’ Utilization of Tutors’ Formative Feedback: A Qualitative Interview Study.” Assessment and Evaluation in Higher Education 30, no. 4 (2005): 369-86.
Pasternak, Mindy, and Kathleen M. Bailey. “Preparing Nonnative and Native English-Speaking Teachers: Issues of Professionalism and Proficiency.” In Learning and Teaching from Experience: Perspectives on Nonnative English-Speaking Professionals, edited by Lía D. Kamhi-Stein, 155-75. Ann Arbor: University of Michigan Press, 2004.
Patrie, James. “The Use of the Tape Recorder in an ESL Composition Programme.” TESL Canada Journal 6, no. 2 (1989): 87-89.
Pear, Joseph J., and Darlene E. Crone-Todd. “A Social Constructivist Approach to Computer-Mediated Instruction.” Computers and Education 38 (2002): 221-31.
Pearce, C. Glenn, and R. Jon Ackley. “Audiotaped Feedback in Business Writing: An Exploratory Study.” Business Communication Quarterly 58, no. 3 (1995): 31-34.
Petite, Joseph. “Tape Recorders and Tutoring.” Teaching English in the Two-Year College 9 (1983): 123-25.
Phillipson, Robert. Linguistic Imperialism. Oxford: Oxford University Press, 1992.
Poulos, Ann, and Mary Jane Mahony. “Effectiveness of Feedback: The Students’ Perspective.” Assessment and Evaluation in Higher Education 33, no. 2 (2008): 143-54.
Price, Carol, and Linda Holman. “Coaching Writing in Multicultural Classrooms with Oral Commentary.” 1996. ERIC Document Reproduction Service No. ED 402578.
Quinton, Sarah, and Teresa Smallbone. “Feeding Forward: Using Feedback to Promote Student Reflection and Learning – a Teaching Model.” Innovations in Education and Teaching International 47, no. 1 (2010): 125-35.
Rahimi, Mohammad. “The Role of Teacher’s Corrective Feedback in Improving Iranian EFL Learners’ Writing Accuracy over Time: Is Learner’s Mother Tongue Relevant?” Reading and Writing 22 (2009): 219-43.
Robb, Thomas, Steven Ross, and Ian Shortreed. “Salience of Feedback on Error and its Effect on EFL Writing Quality.” TESOL Quarterly 20, no. 1 (1986): 83-95.
Roberts, S. J. “Podcasting Feedback to Students: Students’ Perceptions of Effectiveness.” (2008). Accessed October 1, 2012.
http://www.heacademy.ac.uk/assets/hlst/documents/case_studies/case125_podcasting_feedback.pdf.
Rodway-Dyer, Sue, Elizabeth Dunne, and Matthew Newcombe. “Audio and Screen Visual Feedback to Support Student Learning.” In “In Dreams Begins Responsibility” – Choice, Evidence and Change: The 16th Association for Learning Technology Conference (ALT-C 2009), held 8-10 September 2009, edited by H. Damis and L. Creanor, 61-69. University of Manchester, England, UK.
Romiszowski, Alexander, and Robin Mason. “Computer-Mediated Communication.” In Handbook of Research on Educational Communications and Technology, edited by David H. Jonassen, 397-431. Mahwah, NJ: Lawrence Erlbaum, 2004.
Rossman, Mark H. “Successful Online Teaching Using an Asynchronous Learner Discussion Forum.” Journal of Asynchronous Learning Networks 3, no. 2 (1999): 91-97.
Rotheram, Bob. “Using an MP3 Recorder to Give Feedback on Student Assignments.” Educational Developments 8, no. 2 (2007): 7-10.
Rubens, Philip M. “Oral Grading Techniques: An Interactive System for the Technical Writing Classroom.” Technical Writing Teacher 10 (1982): 41-44.
Saito, Hiroko. “Teachers’ Practices and Students’ Preferences for Feedback on Second Language Writing: A Case Study of Adult ESL Learners.” TESL Canada Journal/Revue TESL du Canada 11, no. 2 (1994): 46-68.
Sauro, Shannon. “Computer-Mediated Corrective Feedback and the Development of L2 Grammar.” Language Learning and Technology 13, no. 1 (2009): 96-120.
Schachter, Jacquelyn. “Nutritional Needs of Language Learners.” In On TESOL ’82: Pacific Perspectives on Language Learning and Teaching, edited by Mark A. Clarke and Jean Handscombe, 175-89. Washington, DC: TESOL, 1983.
Schwartz, Fred, and Ken White. “Making Sense of it All: Giving and Getting Online Course Feedback.” In The Online Teaching Guide: A Handbook of Attitudes, Strategies, and Techniques for the Virtual Classroom, edited by Ken W. White and Bob H. Weight, 167-82. Boston: Allyn and Bacon, 2000.
Semke, Harriet D. “Effects of the Red Pen.” Foreign Language Annals 17 (1984): 195-202.
Sheen, Younghee. “The Effect of Focused Written Corrective Feedback and Language Aptitude on ESL Learners’ Acquisition of Articles.” TESOL Quarterly 41, no. 2 (2007): 255-83.
Sheen, Younghee, David Wright, and Anna Moldawa. “Differential Effects of Focused and Unfocused Written Correction on the Accurate Use of Grammatical Forms by Adult ESL Learners.” System 37, no. 4 (2009): 556-69.
Shih, Yu-Chih Doris, and Lauren Cifuentes. “Taiwanese Intercultural Phenomena and Issues in a United-States-Taiwan Telecommunications Partnership.” Educational Technology, Research and Development 51, no. 3 (2003): 82-102.
Sipple, Susan. “Ideas in Practice: Developmental Writers’ Attitudes Toward Audio and Written Feedback.” Journal of Developmental Education 30, no. 3 (2007): 22-31.
Sommers, Jeffrey. “The Effects of Tape-Recorded Commentary on Student Revision: A Case Study.” Journal of Teaching Writing 8 (1989): 49-75.
—. “Spoken Response: Space, Time, and Movies of the Mind.” In Writing with Elbow, edited by Pat Belanoff, Marcia Dickson, Sheryl I. Fontaine, and Charles Moran, 172-86. Logan, UT: Utah State University Press, 2002.
Stern, Lesa A., and Amanda Solomon. “Effective Faculty Feedback: The Road Less Traveled.” Assessing Writing 11, no. 1 (2006): 22-41.
Stratton, Charles R. “The Electric Report Card: A Follow-Up on Cassette Grading.” Journal of Technical Writing and Communication 5, no. 1 (1975): 17-22.
Straub, Richard, and Ronald F. Lunsford. Twelve Readers Reading: Responding to College Student Writing. Cresskill, NJ: Hampton Press, 1995.
Swan, Karen. “Learning Effectiveness: What the Research Tells Us.” In Elements of Quality Online Education: Practice and Direction, edited by John Bourne and Janet C. Moore, 13-45. Needham, MA: Sloan Consortium, 2003.
Syncox, David. “The Effects of Audio-Taped Feedback on ESL Graduate Student Writing.” Master’s thesis, McGill University, 2003. ProQuest (AAT EC53307).
Takemoto, Patricia A. “Exploring the Educational Potential of Audio.” New Directions for Adult and Continuing Education 34 (1987): 19-28.
Tanner, Bernard. “Teacher to Disc to Student.” The English Journal 53, no. 5 (1964): 362-63.
Thonus, Terese. “What are the Differences? Tutor Interactions with First- and Second-Language Writers.” Journal of Second Language Writing 13 (2004): 227-42.
Truscott, John. “The Case Against Grammar Correction in L2 Writing Classes.” Language Learning 46, no. 2 (1996): 327-69.
—. “What’s Wrong with Oral Grammar Correction.” Canadian Modern Language Review 55, no. 4 (1999): 437-56.
—. “The Effect of Error Correction on Learners’ Ability to Write Accurately.” Journal of Second Language Writing 16 (2007): 255-72.
Truscott, John, and Angela Yi-ping Hsu. “Error Correction, Revision, and Learning.” Journal of Second Language Writing 17 (2008): 292-305.
Tsui, Amy B. M., and Maria Ng. “Do Secondary L2 Writers Benefit from Peer Comments?” Journal of Second Language Writing 9, no. 2 (2000): 147-70.
Tsutsui, Michio. “Multimedia as a Means to Enhance Feedback.” Computer Assisted Language Learning 17, no. 3-4 (2004): 377-402.
Tuzi, Frank. “The Impact of E-Feedback on the Revisions of L2 Writers in an Academic Writing Course.” Computers and Composition 21 (2004): 217-35.
Vasilyeva, Ekaterina, Seppo Puuronen, Mykola Pechenizkiy, and Pekka Räsänen. “Feedback Adaptation in Web-Based Learning Systems.” International Journal of Continuing Engineering Education and Life-Long Learning 17, no. 4/5 (2007): 337-57.
Vogler, Stephen H. “Grading Themes: A New Approach, a New Dimension.” English Journal (1971): 70-74.
Wang, Haidong. “Teaching Asian Students Online: What Matters and Why?” PAACE Journal of Lifelong Learning 15 (2006): 69-84.
Warschauer, Mark. “Computer-Mediated Collaborative Learning: Theory and Practice.” The Modern Language Journal 81, no. 4 (1997): 470-81.
Weasenforth, Donald, Sigrun Biesenbach-Lucas, and Christine Meloni. “Realizing Constructivist Objectives through Collaborative Technologies: Threaded Discussions.” Language Learning and Technology 6, no. 3 (2002): 58-86.
Yarbro, Richard, and Betty Angevine. “A Comparison of Traditional and Cassette Tape English Composition Grading Methods.” Research in the Teaching of English 16 (1982): 394-96.
Yoshida, Reiko. “Learners’ Perception of Corrective Feedback in Pair Work.” Foreign Language Annals 41, no. 3 (2008): 525-41.
Zak, Frances. “Between the Red Pencil and the Smiley Face: More Ways to Respond to Student Writing.” Paper presented at the Conference on College Composition and Communication, St. Louis, Missouri, March 18, 1988.
—. “Exclusively Positive Responses to Student Writing.” Journal of Basic Writing 9, no. 2 (1990): 40-53.
Zamel, Vivian. “Responding to Student Writing.” TESOL Quarterly 19, no. 1 (1985): 79-101.
Zhang, Shuqiang. “Reexamining the Affective Advantage of Peer Feedback in the ESL Writing Class.” Journal of Second Language Writing 4, no. 3 (1995): 209-22.
Zhang, Zuochen, and Richard Kenny. “Learning in an Online Distance Education Course: Experiences of Three International Students.” The International Review of Research in Open and Distance Learning 11, no. 1 (2010): 17-36.
Zhao, Naxin, and Douglas McDougall. “Cultural Influences on Chinese Students’ Asynchronous Online Learning in a Canadian University.” The Journal of Distance Education 22, no. 2 (2008): 59-80.
APPENDIX A
A DEMOGRAPHIC SURVEY AND THE AUDIO FEEDBACK SURVEY TO EXAMINE STUDENTS’ RESPONSES TO AUDIO AND TEXT-BASED FEEDBACK (ICE 2008)
The goal of this study is to determine what type of feedback is the most effective in an online course. The information generated by the study will be used for research purposes. As such, the findings of the study will be published and/or presented at conferences. Before you begin the survey, please be aware of the following:
Your participation is entirely voluntary. You may choose to discontinue the survey at any time and/or choose not to answer certain questions.
Your responses will remain anonymous, and the course instructor cannot determine which survey you completed. Complete confidentiality will be maintained. At no time will your identity be revealed, either by the procedures of the study or during reporting of the results.
No negative consequence will result if you choose not to participate.
Please respond to the questions below; this will be the most helpful in trying to find out how to improve things for students and faculty members in the future. Thank you for your participation in this research.
First Name:
Last Name:
Age: ___  Gender: ____
How many years have you studied English as a Foreign Language, including school?
How many online courses have you previously taken in which the instructor provided audio feedback?
0   1   2   3 or more
Do you speak Sakha (or another native language other than Russian)?
Do you speak Russian?
Besides English, what other foreign language(s) have you studied?
Have you studied abroad?
Were you in Yakutsk during the experimental study?
Your ethnic group:
For the following survey, please choose the answer that best fits your opinion about the types of feedback. Thank you.
(1 = strongly disagree, 2 = disagree, 3 = neutral, 4 = agree, 5 = strongly agree)
When using audio feedback, inflection in the instructor’s voice made his/her intent clear.
1 = strongly disagree, 2 = disagree, 3 = neutral, 4 = agree, 5 = strongly agree
The instructor’s intent was clearer when using audio than text.
1 = strongly disagree, 2 = disagree, 3 = neutral, 4 = agree, 5 = strongly agree
Audio comments made me feel more involved in the course than text-based comments.
1 = strongly disagree, 2 = disagree, 3 = neutral, 4 = agree, 5 = strongly agree
Audio comments motivated me more than text-based comments.
1 = strongly disagree, 2 = disagree, 3 = neutral, 4 = agree, 5 = strongly agree
I retained audio comments better than text-based comments.
1 = strongly disagree, 2 = disagree, 3 = neutral, 4 = agree, 5 = strongly agree
Audio comments are more personal than text-based comments.
1 = strongly disagree, 2 = disagree, 3 = neutral, 4 = agree, 5 = strongly agree
Receiving audio comments made me feel as if the instructor cared more about me and my work than when I received text-based comments.
1 = strongly disagree, 2 = disagree, 3 = neutral, 4 = agree, 5 = strongly agree
APPENDIX B
INFORMED CONSENT FORM
Research Project Number 0910008541
RESEARCH PARTICIPANT CONSENT FORM
Effectiveness of Embedded Audio Feedback in English as a Foreign Language Classes
Jennifer Richardson, Associate Professor, Purdue University, Learning Design and Technology Program
Purpose of Research
This research study will examine what type of feedback is the most effective in the online course “An Introduction to Business Studies,” to gain insights into how English as a Foreign Language (EFL) students perceive audio feedback and text-based feedback. A pre- and post-course survey will be used to collect information for this research study, along with grades on course participation.
Specific Procedures
If you agree to enter this study, you will participate in online discussions within your group from the International Relations Program. You will reflect on the weekly instructional questions individually and respond to your group mates’ weekly postings. You will receive two different types of feedback (audio or text) from your online instructors in Russian and English at the end of each online discussion. Your weekly online postings and participation will be graded according to the course rubric every week. At the beginning and at the end of the online course, you will be asked to answer the online survey questions.
Duration of Participation
You will have six online discussions over eight weeks. Each online discussion will be held among group members during one week, and it will be finalized with an individual weekly reflection on the weekly reading’s question(s) and responses to the group members’ postings.
Risks
The risks involved in participating in this research study are minimal and no greater than what you could expect to encounter in everyday life. Also, breach of confidentiality is a risk common to this type of research. Safeguards to minimize these risks are discussed in the “Confidentiality” section of this form.
Benefits
There may be no direct benefit to you in the study, but there may be broad benefits in terms of successful instructional strategies (audio feedback) that engage you in the online course. If this research is successful in demonstrating the effectiveness of the audio-feedback strategy and tool, it may help online instructors effectively engage you and other students in online courses. In addition, the study results may also provide knowledge about the design of online course tools.
Voluntary Nature of Participation
You do not have to participate in this research project. If you agree to participate, you can withdraw your participation at any time without penalty. Your participation in this study will not affect your grade in the course.
Confidentiality
Your real name will not be used at any point of information collection or in the written report; as an alternative, you and any other person involved in your case will be given pseudonyms that will be used in all verbal and written records and reports. Information obtained about you for this study will be kept private to the extent allowed by law. However, the following groups will be able to view your report and have access to private information that identifies your name: key researchers and personnel. The project’s research records may be reviewed by departments at Purdue University responsible for regulatory and research oversight.
Compensation
If you complete the pre- and post-course survey, you will receive extra credit, which will not exceed 3% of the overall course grade, from the course instructors, who will receive your name after all the other course assignments have been graded but before final grades are submitted.
Contact Information
If you have any questions about this research project, you can contact:
1. Jennifer Richardson, (765) 494-5671
2. Larissa Olesova, (765) 496-3020
If you have concerns about the treatment of research participants, you can contact the Institutional Review Board at Purdue University, Ernest C. Young Hall, 10th Floor, Room 1032, 155 S. Grant Street, West Lafayette, IN 47907-2114. The phone number for the Board’s secretary is (765) 494-5942. The email address is [email protected].
Documentation of Informed Consent
I have had the opportunity to read this consent form and have the research study explained. I have had the opportunity to ask questions about the research project, and my questions have been answered. I am prepared to participate in the research project described above. I will receive a copy of this consent form after I sign it.
______________________
Participant’s Signature                Date
______________________
Participant’s Name
_______________________
Researcher’s Signature                Date
APPENDIX C
THE ARTICLES AND THE QUESTIONS USED DURING THE EXPERIMENTAL STUDY
Assignment: Please read the text and send your initial posting by Wednesday (Date) (midnight Yakutsk time). Your initial posting (up to 12-15 simple sentences) should reflect only on the questions below. Please do not include unnecessary sentences. Please keep your posting brief but to the point. Then, you need to respond to one or two students by Friday (Date) (midnight Yakutsk time). Your response (up to 5-7 simple sentences) to one or two students should also be brief and to the point. On Friday, you will receive text-based comments or audio comments from your instructor. Please read or listen to them carefully. Please open and save the PDF file with audio feedback on your flash drive. You may need to listen to the comments while you are working on the assignment. You don’t need to respond during the week, but your task is to follow the instructor’s recommendations and suggestions to improve your participation in the following week. You will receive 1-4 points for your participation each week (see participation rubric).
*When you send your initial posting, please give it the title “Initial Posting from Your Name,” and when you respond to one or two students, please give your response the title “To Student’s Name.”
Text One (March 7-11, 2011): What’s your line in the sand? (http://blogs.wsj.com/juggle/2011/02/17/why-i-turned-down-a-dream-job/)
Late last year I received a job offer that I really wanted. It was perfect for me in many ways: It was a step up in my career, offered the right kind of challenges, and it would tap into my strengths, yet give me a chance to grow, too. And it was more financially rewarding, to boot. But after mulling the position for almost two weeks–a generous amount of time to decide–I turned it down. Accepting would have meant crossing a line in the sand I didn’t even know I had until I edged right up to it. And it’s the first time I’ve said no to something I really wanted to do because I just couldn’t figure out how to make it work for my family at this moment in time. Like many working parents, I’ve worried about making all the
pieces fit, but have often figured out how. I always operated under the rule that anything could work if you study the pieces enough and consider all available options. I willed myself to believe this to deal with the puzzle that is two kids, a spouse, a dog, a career, babysitters and the rest of it. I’ve done it time and time again. But every rule seems to have an exception and I found mine. Because of the job’s hours and 70-minute commute each way, accepting it would have meant not seeing my children most weeknights for six or seven months. Because my husband would have had to fill in some of the slack, it could mean a potential setback for his career too. (We considered using extra babysitters, but that was a financial stretch and we preferred a parent there in the evenings.) After a few months, the schedule would ease and we could have lessened the commute with a move once the school year ended. At first, I thought I could do anything for six months or so because there would eventually be an end to it. But then I actually missed three bedtimes in a row for various reasons. I saw the impact on my children and it wasn’t pretty. I also felt my longing for them almost as acutely. My own ability to focus at work was hampered by that void left by not having even a small amount of nightly time with my kids. I know plenty of parents who work long hours and have little weekday time with their children as a way of life. But I quickly realized it wasn’t a lifestyle I could adopt for half a year. Babysitters were not me and even my husband couldn’t be my stand-in. I really wanted that job. But I couldn’t make it for six months without seeing my kids during the week. My line in the sand: Somewhere around six weeks. Beyond that, I’d be a wreck and so would my children (and possibly my husband and sitters). I won’t deny that there are days when I go over it all again in my head, wondering if there’s something I missed that could have made it work. But I come back to the same answer: Great career opportunities will come around again, but all the bedtimes between now and my daughter’s second birthday in June, between now and the end of my son’s first year of kindergarten–those don’t come around again once they’ve passed.
After reading the text, please answer the following questions: What’s your line in the sand? Have you ever turned down a position or a promotion because it would mean you would cross it? Please reflect by using your own examples and experience to support your arguments/statements.
Text Two (March 14-18, 2011): Williams College reversed its need-blind policy for foreign students (http://blogs.wsj.com/juggle/2011/02/22/to-get-into-college-it-helps-to-be-rich/)
Paying for college is one of the most popular topics here at the Juggle. But it may be time to forget the conventional wisdom that everyone should apply for financial aid. Instead, if you can afford to skip the aid applications, it may actually boost your child’s chances of getting accepted. In last Saturday’s Weekend Investor, the WSJ’s Jane J. Kim reports that more colleges, including Middlebury, Wake Forest, Williams and Tufts, are either taking applicants’ financial status into account or have been offering slots to wealthier students – especially international or wait-listed applicants – who can afford to pay in full. Some public state universities, meanwhile, are admitting more out-of-state students who pay higher tuition. The bottom line: As schools face greater financial pressures, borderline applicants who can afford to pay more may stand a greater chance of getting in, colleges and admissions experts say. Schools stress that they aren’t lowering their admissions criteria. Still, some colleges say they begin their admissions process as “need blind”—admitting students regardless of their ability to pay—but may start to consider an applicant’s financial status later in the admissions process, especially if the financial-aid budget starts to run thin. (Wait-listees or international candidates are often the biggest beneficiaries.) But it isn’t all good news for wealthier applicants. Some top schools, such as Stanford, Yale and Dartmouth, have adjusted their financial aid formulas in ways that may raise costs for families with higher incomes. Yale, for instance, is trimming the aid given to families earning more than $130,000, while Dartmouth is replacing some grants with loans for families making over $75,000.
After reading the text, please answer the following questions: Is considering an applicant’s financial status a smart admissions policy or a way for the rich to buy their way into college? Is considering an applicant’s financial aid status fair? Would you forgo financial aid (if you’re on the fence) if you think it could boost your child’s admissions chances? Please reflect by using your own examples and experience to support your arguments/statements.
Text Three (March 21-25, 2011): A fishnets creation by French designer Jean-Paul Gaultier (http://blogs.wsj.com/juggle/2011/02/03/how-fashion-forward-is-your-office/)
How fashion forward is your office? That question is at the heart of Christina Binkley’s On Style column in this week’s Personal Journal. Binkley writes how this year, brightly-colored or patterned stockings – think purple, lace, punk-shredded or leopard patterned – are all the rage among fashionistas. But is this haute hosiery appropriate for the office?
Binkley interviews career and image consultants who advise most workers to err on the conservative side, sticking with nude, beige or black-opaque hosiery (or even going bare in warmer weather) if there’s any doubt at all about what’s appropriate. “There are penalties in everyday work environments. If someone wears the wrong tie, you may not think they’re worthy of working with you on a project,” says image consultant Sarah Whittaker. “You may think it’s just a pair of red tights. But actually, it can make a big difference.” Indeed, one chief investment officer, who says she hires plenty of free spirits, draws the line when it comes to office fashion: “If I walk into a brokerage and they look like Goth girl with multi-striped leggings, I’m not going to feel good leaving my money there,” she says. On the other hand, those working in creative industries – a growing segment of our economy – have long leashes when it comes to style, Binkley writes.
After reading the text, please answer the following questions: How accepting is your office of more fashion-forward clothes? Is your workplace’s sartorial sense conservative or creative? And, for fun, what’s your take on office legwear? Please reflect by using your own examples and experience to support your arguments/statements.
Text Four (March 28-April 1, 2011): Starting salaries of new physicians reveal a growing gender gap (http://blogs.wsj.com/juggle/2011/02/03/the-17000-doctor-pay-gap/)
Newly trained women doctors are being paid significantly lower salaries – about $17,000 less – than their male counterparts, found a new study published in the February issue of Health Affairs. The pay disparity exists even after the researchers accounted for factors such as medical specialty, hours worked and practice type. Women had lower starting salaries than men in nearly all specialties, the researchers found. The gap has been growing steadily in recent decades, to $16,819 in 2008, from just $3,600 in 1999. The pay disparity exists even as women now comprise nearly half of all U.S. medical students. In 1999, new women doctors earned $151,600, on average, compared to $173,400 for men – a 12.5% salary difference. In 2008, that salary difference widened by nearly 17%, with women starting out at $174,000, compared to $209,300 for men. (These are average salary figures, across all specialties.) Anthony Lo Sasso, the lead researcher on the study, and a professor at the School of Public Health at the University of Illinois at Chicago, said the pay gap may exist because women doctors are seeking greater flexibility and family-friendly benefits, such as not being on call after certain hours. Women may be negotiating
these work conditions at the same time that they are negotiating their starting salaries. The researchers haven’t ruled out other possible factors, such as an increase in gender discrimination or women being less effective than men at negotiating pay. Lo Sasso added that doctors need to further understand and address this gender gap, and reconsider pay and working arrangements for providers, particularly in primary care. “It is not surprising to say that women physicians make less than male physicians because women traditionally choose lower-paying jobs in primary care fields or they choose to work fewer hours,” said Lo Sasso. “What is surprising is that even when we account for specialty and hours and other factors, we see this growing unexplained gap in starting salary. The same gap exists for women in primary care as it does in specialty fields.” Historically, women have disproportionately flocked to primary-care fields such as internal medicine, family practice or pediatrics. But in recent years, the percentage of women entering primary care fell from nearly 50% in 1999 to just over 30% in 2008. Despite entering higher-paying specialties, the widening pay gap persisted, the researchers found. For instance, female heart surgeons earned $27,103 less, on average, than men, while females specializing in pulmonary disease earned an average $44,320 less than men. The authors studied survey data from doctors exiting training programs in New York state, home to more medical residents than any other state in the country. The survey sampled 4,918 men and 3,315 women.
After reading the text, please answer the following questions: What is your take on this gender gap? If you are a physician, how else do you see this pay disparity in the field? Do you think it should hurt your salary if you also negotiate for more family-friendly working arrangements? Propose an alternative.
Text Five (April 4-8, 2011): Nina Zagat, shown here with husband Tim, finds out about fellow diners’ food and location preferences before choosing a restaurant (http://blogs.wsj.com/juggle/2011/01/27/how-to-navigate-a-business-meal/)
For many of us, wining and dining clients comes with the job, whether over a power lunch or a formal dinner. Such meals always made me feel like a nervous teenager going out on a first date: Will I eat too messily or spill my drink? Will conversation lag? Will this lead to a long-term relationship? In today’s WSJ, restaurant expert Nina Zagat, co-founder of the eponymous guides, provides her take on navigating business meals.
Her words of wisdom focus on business dinners, but can just as easily be applied to other meals, too. Some of her chief bits of advice:
–Before choosing a restaurant, find out (if you can) about your diners’ food and location preferences. Choose a restaurant quiet enough to carry on a conversation without straining.
–Start the meal with chit-chat, instead of moving to business topics right away. But don’t wait until the end of the meal, either, to get down to business, as it’s better to end on a relaxed note.
–When ordering, try not to draw a lot of attention to yourself, so be discreet about dietary restrictions or allergies. Also, avoid foods that are complicated to eat, like lobster or spaghetti. If you’re not hungry, it’s OK to order half-portions or offer to share appetizers or desserts with your companions.
–If you’re the first to finish, don’t let a waiter take the plate until your dining companion is done. And don’t order tea or dessert unless your companion does, too.
–Don’t place your cell phone on the table or let it ring. And if you leave the room momentarily, discreetly leave your napkin on the chair, rather than displaying it for your guest to see.
After reading the text, please answer the following questions: Please share your business meal tips. Any horror stories to share? How often do you have to wine and dine clients or colleagues for your job? Curious: Do you ever have alcoholic beverages with clients, or is that rare? Please reflect by using your own examples and experience to support your arguments/statements.
Text Six (April 11-15, 2011): You think negotiating a raise is tough? Try convincing an Afghan elder to identify Taliban fighters in his own community. Negotiating is not confined to the office and the car dealership. In fact, some of the best dealbrokers have worn fatigues. Jeff Weiss is familiar with negotiating both on and off the battlefield. As a partner at Boston consulting firm Vantage Partners, Weiss helps corporations and executives handle disputes and hammer out better agreements. He also spends a good chunk of every year doing the same thing for cadets at West Point. Weiss has spent the better part of a decade studying battlefield negotiations and figuring out what works and what doesn’t in a hostile foreign country. The goal is for a soldier to forge alliances in unknown territory where every move is being carefully watched, time is of the essence and a faction is very much interested in the soldier’s failure. Hopefully, starting a new job is not as dangerous, but many of the same dynamics are in play in the workplace. The key to thriving in a new environment, according to Weiss, is controlling the nagging sense that you are making a major misstep.
Danger, and the fear that it incites, triggers a cavalcade of reactions that could start someone off on the wrong foot, most notably a tendency to rush, make threats and too easily concede vital points to mitigate tension. In other words, it helps to stay calm, yet confident. “Many of us walk around with a default setting and a belief that to be a good negotiator you should use threats, anchoring, bluffing, banging the table and a general show of power,” Weiss said. “Frankly, what I have seen in good negotiators — whether they are a 30-year-old captain in the Army or a 40-year-old salesman — are folks that say ‘There’s a time and a place to do that, and it’s not often.’” Here are some of the other pieces of advice that Weiss has gleaned from men and women in uniform:
1. Get the Big Picture. Get a lay of the land at the outset, particularly the opinions and viewpoints of other parties. In other words, don’t dive in and try striking deals right away. Be humble and curious.
2. Uncover and Elaborate. Learn the motivations and concerns behind your counterparts’ opinions. Propose multiple solutions and invite the other parties to improve on them.
3. Elicit Genuine Buy-in. Avoid threats. Win others to your side with reasoned arguments, not power plays or brute force.
4. Build Trust First. Directly linked to No. 3, this tactic is all about building a foundation of success. Don’t try to ‘buy’ support. Rather, make incremental commitments of good faith.
5. Focus on Process. Forget about results, or lack thereof. Put your energy into having a healthy and robust discussion free from knee-jerk reactions.
After reading the text, please answer the following questions: Do any of you have something else to add to that list? How often do you find yourself negotiating in the workplace? Do you think you’re a strong negotiator? What are your bargaining weaknesses? Please reflect by using your own examples and experience to support your arguments/statements.
APPENDIX D
BAR GRAPHS OF THE RESULTS ON AUDIO FEEDBACK SURVEY BY THE INSTRUCTORS’ LANGUAGE BACKGROUND AND THE PARTICIPANTS’ LEVELS OF LANGUAGE PROFICIENCY
Item One “When using audio feedback, inflection in the instructor’s voice made his/her intent clear”
Item Two “The instructor’s intent was clearer when using audio than text”
Item Three “Audio comments made me feel more involved in the course than text-based comments”
Item Four “Audio comments motivated me more than text-based comments”
Item Five “I retained audio comments better than text-based comments”
Item Six “Audio comments are more personal than text-based comments”
Item Seven “Receiving audio comments made me feel as if the instructor cared more about me and my work than when I received text-based comments”