Translator and Interpreter Education Research: Areas, Methods and Trends [1st ed.] 9789811585494, 9789811585500

This book provides a detailed introduction and guide to researching translator and interpreter education.


English Pages XIII, 163 [171] Year 2020





Table of contents :
Front Matter ....Pages i-xiii
Translator and Interpreter Education Research: Definition, Areas, and Historical Developments (Muhammad M. M. Abdel Latif)....Pages 1-12
Translator/Interpreter Training Experimentation Research (Muhammad M. M. Abdel Latif)....Pages 13-37
Translation/Interpreting Learning and Teaching Practices Research (Muhammad M. M. Abdel Latif)....Pages 39-59
Translation and Interpreting Assessment Research (Muhammad M. M. Abdel Latif)....Pages 61-84
Translation/Interpreting Process Research (Muhammad M. M. Abdel Latif)....Pages 85-110
Translation/Interpreting Product Research (Muhammad M. M. Abdel Latif)....Pages 111-123
Researching Professional Translator/Interpreter Experiences and Roles (Muhammad M. M. Abdel Latif)....Pages 125-149
Advancing Translator and Interpreter Education Research (Muhammad M. M. Abdel Latif)....Pages 151-154
Correction to: Translation/Interpreting Process Research (Muhammad M. M. Abdel Latif)....Pages C1-C1
Back Matter ....Pages 155-163


New Frontiers in Translation Studies

Muhammad M. M. Abdel Latif

Translator and Interpreter Education Research Areas, Methods and Trends

New Frontiers in Translation Studies Series Editor Defeng Li, Center for Studies of Translation, Interpreting and Cognition, University of Macau, Macao SAR, China

Translation Studies as a discipline has witnessed the fastest growth in the last 40 years. With translation becoming increasingly more important in today's glocalized world, some have even observed a general translational turn in the humanities in recent years. The New Frontiers in Translation Studies series aims to capture the newest developments in translation studies, with a focus on:
• Translation Studies research methodology, an area of growing interest amongst translation students and teachers;
• Data-based empirical translation studies, a strong point of growth for the discipline because of the scientific nature of the quantitative and/or qualitative methods adopted in the investigations; and
• Asian translation thoughts and theories, to complement the current Eurocentric translation studies.
Submission and Peer Review: The editor welcomes book proposals from experienced scholars as well as young aspiring researchers. Please send a short description of 500 words to the editor Prof. Defeng Li at Springernfi[email protected] and Springer Senior Publishing Editor Rebecca Zhu: [email protected]. All proposals will undergo peer review to permit an initial evaluation. If accepted, the final manuscript will be peer reviewed internally by the series editor as well as externally (single blind) by Springer ahead of acceptance and publication.

More information about this series at http://www.springer.com/series/11894


Muhammad M. M. Abdel Latif Faculty of Graduate Studies of Education Cairo University Giza, Egypt

ISSN 2197-8689    ISSN 2197-8697 (electronic)
New Frontiers in Translation Studies
ISBN 978-981-15-8549-4    ISBN 978-981-15-8550-0 (eBook)
https://doi.org/10.1007/978-981-15-8550-0
© Springer Nature Singapore Pte Ltd. 2020
This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd.
The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721, Singapore

To Dalia (my wife), and to Ayten and Adam (my children)

Introduction

Though I completed my Ph.D. in the area of language education, my interest in translator and interpreter education research has grown extensively over the past few years. About seven years ago, I was appointed to teach a one-term research methods course to a group of six Ph.D. students at an Arab Gulf university. A few weeks before teaching this course, I had to review the related literature and write its instructional materials. My language education research background led me to structure the larger part of the course topics around translator and interpreter education research issues. What drew my attention at that time was the lack of a solid, comprehensive typology of translator and interpreter education research areas (see Chap. 1 of this book). In response to this gap, I developed a draft framework of these areas and included it in the course materials. While teaching updated editions of the same course in later academic years, I revised this framework, and at a later stage I published it in The Interpreter and Translator Trainer (Abdel Latif, 2018; see Chap. 1). Due to word count restrictions, I could not include all the related details in that paper, which provides only a brief overview of my six-area research typology. That is why I decided to write this book, which draws on seven years of continuous reading of everything related to translator and interpreter education research. The book provides a detailed description of the following six main areas of translator and interpreter education research: translator/interpreter training experimentation; translation/interpreting learning and teaching practices; translation/interpreting assessment; translation/interpreting processes; translation/interpreting products; and professional translator/interpreter experiences and roles.
The importance of this book lies primarily in its scope and its approach to covering translator and interpreter education research. The scope of the book is much wider than that of previous related works: it is the first book to provide a comprehensive overview of translator and interpreter education research, and it includes important guidelines for researching its six areas. As readers will see in the chapters of the book, the approach taken in covering these research areas is also different. The focus of the book is not on reviewing relevant research findings but on explaining the issues and topics researched

in each area, and on showing how they have been researched. Specifically, I try to explicate the methodological approaches and data sources used in the highlighted exemplary studies belonging to each area. Thus, the book can be regarded as a 'what and how to research' guide for those interested in researching the translator and interpreter education field. Meanwhile, the book is also an important resource for translator and interpreter trainers and for administrators of translator and interpreter education programmes, as it draws their attention to the issues and dimensions that need to be covered in such programmes and to how they are researched or evaluated. The book includes eight chapters. In the introductory chapter, I define translator and interpreter education research, identify its areas, and trace its early historical developments. In Chaps. 2–7, I discuss and review the above-mentioned six areas of translator and interpreter education research. Each chapter starts with an introduction to the methodological approaches used in researching the area; these approaches are then further delineated as I highlight the studies representing each research trend within the same area. In writing these six chapters (Chaps. 2–7), I mainly depended on reviewing research published in the well-known international translation and interpreting journals and in relevant books published internationally. In Chap. 8 (the concluding chapter), I summarize the main research trends and issues researched so far in the six areas, and provide some suggestions for advancing future translator and interpreter education research. The book ends with a glossary of the key terms commonly used in translator and interpreter education research. I hope the contents of the book will meet interested readers' expectations and needs.

The Author
Muhammad M. M. Abdel Latif

Contents

1 Translator and Interpreter Education Research: Definition, Areas, and Historical Developments  1
  1.1 Defining Translator and Interpreter Education Research  1
  1.2 Areas of Translator and Interpreter Education Research  3
  1.3 Early Studies and Major Historical Developments  6
  1.4 This Book  9
  References  10

2 Translator/Interpreter Training Experimentation Research  13
  2.1 Introduction: Methodological Approaches  13
  2.2 Main Types of Training Approaches  15
    2.2.1 Technology-Based Training  16
    2.2.2 Process-Based Training  17
    2.2.3 Corpus-Based Training  20
    2.2.4 Profession-Oriented Training  22
    2.2.5 Project-Based Learning Training  25
    2.2.6 Research-Oriented Training  26
    2.2.7 Other Miscellaneous Training Types  28
  2.3 Prescriptive Training  29
  2.4 Conclusion  31
  References  33

3 Translation/Interpreting Learning and Teaching Practices Research  39
  3.1 Introduction: Methodological Approaches  39
  3.2 Translator and Interpreter Education Policy Reports  40
  3.3 Translation and Interpreting Programme Evaluation  42
  3.4 Needs Analysis  45
  3.5 Learner Performance Variables  49
  3.6 Classroom Practices  51
  3.7 Trainer Education  53
  3.8 Conclusion  55
  References  56

4 Translation and Interpreting Assessment Research  61
  4.1 Introduction: Methodological Approaches  61
  4.2 Surveying Translation and Interpreting Assessment Practices  62
  4.3 Validating Translation/Interpreting Tests  64
  4.4 Identifying the Difficulty Level of the Source Text  66
  4.5 Developing Performance Assessment Rubrics  68
  4.6 Examining Rating Practices and Testing Conditions  70
  4.7 Developing Translation/Interpreting Motivational Scales  73
  4.8 Investigating User Evaluation/Reception  74
  4.9 Conclusion  78
  References  79

5 Translation/Interpreting Cognitive Process Research  85
  5.1 Introduction: Methodological Approaches  85
  5.2 Translation Process Studies  90
    5.2.1 Researching the Translation Process from a Macro Approach  90
    5.2.2 Researching Translator Use of Language and Information Resources  91
    5.2.3 Researching Translation Revision  93
    5.2.4 Researching Translation Process Problems  95
  5.3 Interpreting Process Research  97
    5.3.1 Profiling Interpreter Strategies  97
    5.3.2 Researching a Particular Interpreting Strategy/Interpreting Processing Dimension  101
  5.4 Conclusion  105
  References  106

6 Translation/Interpreting Product Research  111
  6.1 Introduction: Methodological Approaches  111
  6.2 Researching Translation and Interpreting Quality  112
    6.2.1 Interpreting Quality Studies  113
    6.2.2 Translation Quality Studies  116
  6.3 Researching Linguistic and Pragmatic Features in Translated and Interpreted Texts  117
  6.4 Researching Prosodic Features in Interpreter Performance  119
  6.5 Conclusion  120
  References  121

7 Researching Professional Translator/Interpreter Experiences and Roles  125
  7.1 Introduction: Methodological Approaches  125
  7.2 Translator/Interpreter Use of Technology  126
  7.3 Correlates of Translator/Interpreter Competence  130
  7.4 Profiling Translator Practices and Roles  131
  7.5 Profiling Interpreter Practices and Roles  133
    7.5.1 Conference Interpreting  135
    7.5.2 Healthcare Interpreting  136
    7.5.3 Court Interpreting  138
    7.5.4 Police Investigative Interview Interpreting  139
    7.5.5 Interpreting in War-Related Conditions  141
    7.5.6 Telephone Interpreting  142
    7.5.7 Sign Language Interpreting  143
  7.6 Conclusion  145
  References  146

8 Advancing Translator and Interpreter Education Research  151
  8.1 The Current Status of Translator and Interpreter Education Research  151
  8.2 The Need for Addressing Research Gaps  153
  8.3 The Need for Using More Rigorous Research Designs  153
  8.4 The Need for Establishing Specialized Research Journals and Centers  154

Glossary of Translator and Interpreter Education Research  155
Author Index  159
Subject Index  161

List of Tables

Table 2.1  Overview of the main training types and subtypes experimented/proposed in previous translation and interpreting research  31
Table 3.1  Overview of the areas and issues in translation/interpreting learning and teaching practices studies  55
Table 4.1  Overview of the research areas and issues in translation/interpreting assessment studies  79
Table 5.1  Overview of the research areas and issues in translation and interpreting process studies  105
Table 6.1  Overview of the research areas and issues in translation and interpreting product studies  121
Table 7.1  Overview of the research areas and issues in professional translator/interpreter experiences and roles studies  145
Table 8.1  Overview of translator and interpreter education research areas and subareas  152

Chapter 1

Translator and Interpreter Education Research: Definition, Areas, and Historical Developments

Abstract In this introductory chapter, the author defines translator and interpreter education research, identifies its areas, and traces its early historical developments. In defining translator and interpreter education research, the author discusses the alternative terms (translator/interpreter training, translation/interpreting pedagogy, and translation/interpreting teaching) and rationalizes the terminological choice used in the book. Based on a review of previous typologies of translator and interpreter education research and an explanation of their shortcomings, the author identifies the following six main areas of this research: translator/interpreter training experimentation, translation/interpreting learning and teaching practices, translation/interpreting assessment, translation/interpreting processes, translation/interpreting products, and professional translator/interpreter experiences and roles. The author then traces the early historical developments in translator and interpreter education research, which date back to the 1950s.

Keywords Translation research · Interpreting research · Translator training · Interpreter training · Translator and interpreter education · Translation pedagogy · Interpreting pedagogy

1.1 Defining Translator and Interpreter Education Research

Translator and interpreter education research has increased tremendously in the past few decades. This research has developed particularly in the past two decades, and it is now regarded as a prominent field of translation studies. Arguably, such active research on translator and interpreter education has resulted from three main factors. The first is the remarkable increase in institutional translation and interpreting teaching programmes worldwide (Caminade and Pym 2001; Kelly and Martin 2009); according to Baker (2009), the academicization of translator and interpreter education has boomed beyond expectations since the early 1990s. The second is translation and interpreting trainers' increasing interest in doing research. The third is the research developments made in other related fields such as applied linguistics, language

education, and cognitive psychology. The evolving status of translator and interpreter education research is reflected in the launch of two specialized international journals (The Interpreter and Translator Trainer and The International Journal of Interpreter Education) in 2007 and 2009, respectively. It is also mirrored in the large number of relevant research reports published annually in other international translation and interpreting journals, and in the regular publication of monographs and edited books addressing the field and its areas (e.g., Gile 2009; Krawutschke 2008; Li et al. 2019; Tennent 2005; Tsagari and van Deemter 2013). All these developments could pave the way for establishing academic departments and research centers concerned solely with translator and interpreter education research. Two main terms have been commonly used to refer to the translation and interpreting pedagogy field, namely 'translator/interpreter training' and 'translator/interpreter education'. The differences between the two terms have been discussed in the literature. Drawing on Kiraly's (2000) view of 'translation competence' and 'translator competence', Bernardini (2004) distinguishes between 'translator training', which focuses mainly on developing the linguistic skills needed for translation competence, and 'translator education', which is concerned with enhancing translators' linguistic, interpersonal, and attitudinal competences. Likewise, Sawyer (2004) conceptualizes 'interpreter education' as a broader term than 'interpreter training'. According to Pöchhacker (2010), it is generally accepted that 'translator/interpreter education' should combine both teaching and research. As Kelly and Martin (2009) summarize it, 'training' is the preferred term among those with a vocational orientation towards translation/interpreting skill development, whereas 'translator/interpreter education' is used to indicate skill development within a wider context.
On the other hand, Tan (2008) uses 'translation teaching' as a term encompassing both training and education. Similarly, Kelly and Martin (2009) point out that the term 'pedagogy' is occasionally used to cover the two dimensions (i.e., translator/interpreter training and education). As noted, however, there is greater consensus on the nature of the terms 'training' and 'education'. These views also indicate that 'translator/interpreter training' is a part of translator/interpreter education. Given the consensus on the broader nature of the term 'translator/interpreter education' and the way the author defines the field (see the next section), the term 'translator/interpreter education research' will be used in this book to refer to the studies directly or indirectly related to it. An alternative term used in a previous work by the author (Abdel Latif 2018) is 'pedagogy-oriented translation and interpreting research'. Translator/interpreter education research can be defined as any research relevant to the process of understanding translation/interpreting trainees' difficulties, needs, and experiences, and to structuring the components of their education programmes. The next section provides a further clarification of this term by defining its areas.


1.2 Areas of Translator and Interpreter Education Research

Developing a solid typology of translator/interpreter education research is an important step in advancing it. Such a typology should accurately categorize the areas and subareas of translation and interpreting pedagogy research, and reflect the developments made in it so far. In the general typologies mapping translation studies, the areas of translator/interpreter education research are scattered and disconnected. In his early, classical typology, Holmes (1972) differentiated between pure and applied translation studies. Translator training research is listed in the second category (i.e., applied translation studies) along with translation aids, policy, and criticism, whereas the former includes theoretical (i.e., product-oriented, function-oriented, and process-oriented) and descriptive translation studies. Criticizing Holmes's typology for its explicit focus on texts rather than the people producing them, Chesterman (2009) divided translation and interpreting studies into three branches: (a) the cultural branch, addressing translators' and interpreters' values, ethics, and ideologies; (b) the cognitive branch, dealing with translators' and interpreters' cognitive processes and psychological characteristics; and (c) the sociological branch, concerned with translators' and interpreters' social interactions and work processes and tasks. According to Chesterman, the three branches encompass theoretical and descriptive studies, and pure and applied ones. Chesterman coauthored an earlier work with Williams (Williams and Chesterman 2002) in which they provided a map of 11 translation research areas addressed until that time. These areas include: text analysis, quality assessment, genre translation, multimedia translation, translation and technology, translation history, translation ethics, terminology and glossaries, translation cognition, translator training, and the translation profession.
In her map of translation research, Vandepitte (2008) distinguished three typologies of translation studies based on their purposes, research methods, and the subjects covered. In her subject-based typology, she listed two main categories: (a) process-oriented studies concerned with translation competence and competence development, and professional translation teaching; and (b) comparative and non-comparative discourse-oriented studies addressing issues such as quality assessment, and anthologies and politics of translation. Several maps have also been developed for identifying interpreting research areas. Gile (2000) identified the following seven themes in conference interpreting research: training; consecutive interpreting; issues related to quality, language, the profession, and cognition; and other dimensions (i.e., neurophysiological issues and media interpreting). On the other hand, Pöchhacker (2004) categorized interpreting studies into the following four main areas: (a) process (e.g., simultaneity, comprehension, memory, and strategies); (b) product and performance (e.g., discourse, source-target correspondence, and quality); (c) practice and profession (e.g., history, settings, competence, and technology); and (d) pedagogy (curriculum, selection, teaching, assessment, and meta-level training). Adopting Williams and Chesterman's (2002) above-mentioned map, Vargas-Urpi (2012) provided a similar map of the main areas in community interpreting research. Her map includes the following areas:

interpreting text analysis, quality assessment, interpreting in different contexts, technology, history, ethics, terminology and glossaries, interpreting working conditions, competences, training, and professionalization. Based on their review of 235 papers published in nine major journals in the 2000s, Yan et al. (2013) also reported a map of the following three research areas in interpreting studies: (a) interpreting practice from an ontological perspective (process and product) and from interdisciplinary perspectives (i.e., the cognitive, linguistic, sociological, communicative, ideological, and historical perspectives); (b) interpreter training (i.e., philosophies, methods, models, needs, and learner performance and factors) and assessment (i.e., classroom assessment, and professional accreditations and certifications); and (c) review of interpreting research. Recently, Liu and Zhang (2019) classified healthcare interpreting studies into three categories: (a) socio-political background (language policy and access, best practices and guidelines, and history); (b) product-oriented and function-oriented practices; and (c) education and training (for interpreting students and medical students). As can be noted, the above typologies of translation and interpreting research have depended on thematic or methodological approaches in categorizing its areas. The place of translator/interpreter education studies in these maps is not clear enough. On the one hand, they have used the term 'translator/interpreter training' in its above-explained narrow sense, which is concerned with vocational orientation or translation/interpreting skill development. On the other hand, they have not indicated clearly whether the other research areas are related or unrelated to translator/interpreter education. The two maps provided by Pöchhacker (2004) and Yan et al. (2013) are exceptions to some extent because they combine training with assessment in one category.
The same trend can also be noted in the thematic analysis reported by Yan et al. (2015) of translator and interpreter training studies published in 10 international journals from 2000 to 2012. In their analysis, Yan et al. categorized translator and interpreter training studies into the following three main areas: (a) teaching (philosophies, methods, models, competence development, needs analysis, and technology use); (b) learning (learner performance and factors); and (c) assessment (classroom assessment, and professional accreditation and certification). As noted, in their thematic analysis of the areas of translator and interpreter training research, Yan et al. (2015) adopted the interpreter training and assessment research categories of Yan et al.'s (2013) map. In his delineation of what the 'translator training' category in Holmes's (1972) map includes, Munday (2008) also listed teaching methods, testing techniques, and curriculum design. Thus, there might be some consensus that translator/interpreter training or education encompasses these areas only. Though all these maps have contributed to widening our understanding of the various dimensions translation and interpreting studies cover, there are shortcomings in their conceptualizations of the research areas that can assist in translator/interpreter education. Clearly, translator/interpreter education research is not limited to training, learning, and assessment studies only, as suggested by Munday (2008) and Yan et al. (2013, 2015). A more comprehensive typology of translator/interpreter education research should also encompass the studies on three

other categories, namely, translation/interpreting processes and products (i.e., texts), and professional experiences. According to Pym (2011), training can benefit from translation process research, which provides a developmental view of translators' cognitive competence and identifies the skills they need to be trained in. Though Yan et al. (2015) included textual features and errors in the learner performance subcategory of their translation/interpreting training research map, researching these product features is not unanimously viewed as related to translator/interpreter education. In fact, the texts produced by both translation/interpreting students and professionals are of utmost importance to their training because they can show us the linguistic and discoursal features that need to be used or avoided when performing translation/interpreting tasks. Both translation/interpreting trainees and trainers also need to be informed about professionals' experiences. Ho (2004) argues that translator education programmes should be closer to the translation profession and to professional translators, to help students gain insights from them and learn about their practical experiences. Therefore, a part of translator/interpreter training should be geared toward fostering trainees' professional competences and awareness. The easiest way to accomplish this is to consider what research says about professional experiences and practices in the two fields. As will be explained in the following chapter, profession-oriented training has already started to receive growing attention in translation and interpreting studies. In light of the above, the studies addressing these three research areas (i.e., processes, products, and professional experiences) are basically types of translator/interpreter education research.
As the readers go through the following chapters of the book, they will gradually realize the strong relevance of all six areas to structuring translator/interpreter education programmes in an optimal way. In line with the above, the book adopts the pedagogy-oriented translation and interpreting research framework developed by Abdel Latif (2018). This framework or map is composed of the following six main areas of translator/interpreter education research:

– Translator/interpreter training experimentation research: the studies experimenting with or prescribing particular pedagogical techniques or curricula.
– Translation/interpreting learning and teaching practices research: the studies evaluating existing training or education policies or programmes, and translation and interpreting learning and teaching practices. The features evaluated by these studies are not limited to curriculum or training programme delivery but also include other issues such as learner practices and teacher experiences.
– Translation/interpreting assessment research: the studies addressing translation and interpreting assessment issues, including: test/scale development and validation, quality assessment and user expectations and evaluation, inter-rater reliability, source text difficulty, and trainer assessment literacy.
– Translation/interpreting process research: the studies dealing with the mental macro- and micro-processes and cognitive problems involved in translation and interpreting.
– Translation/interpreting product research: the studies profiling and analyzing the linguistic and discoursal features and/or errors in the texts rendered from one language into another.
– Professional translator/interpreter experiences and roles research: the studies addressing issues related to professional translator and interpreter experiences, practices, and work perceptions. These issues include translator and interpreter job roles and tasks, work habits, and what facilitates or hinders their work.

Two of the above six areas (training effectiveness/experimentation and translation/interpreting learning and teaching practices) are directly related to translator/interpreter education research, while the other four are indirectly related to it. All six areas can inform translation/interpreting pedagogy because researching them can help us know and understand one or more of the following issues:

– The innovative and reliable techniques in translator/interpreter training and their effectiveness;
– Stakeholders’ perceptions of the training provided and its weaknesses and strengths;
– Good practices and measures for assessing trainees’ performance, and the criteria meeting translation/interpreting user expectations;
– Trainees’ performance problems at the process and product levels; and
– Translation/interpreting trainees’ potential workplace needs.

Some readers may raise a question about the place of technology in these six translator/interpreter education research areas. The answer is that technology use is covered in each of the six areas. For example, some training experimentation studies (the first research area) focus on improving translation performance through technological tools.
Likewise, learning and teaching practices research and professional experience studies (the second and sixth research areas) deal with technology integration in translator and interpreter training or technology use by translators/interpreters in their workplaces. A few assessment studies (the third research area) have also started to integrate technology into translation and interpreting testing. Finally, process and product studies (the fourth and fifth research areas) make use of technological data sources and tools in researching translators’/interpreters’ processes and texts (for example, eye-tracking, keystroke logging, and corpora). In short, technology use research is embedded in each of these six areas.

1.3 Early Studies and Major Historical Developments

Given that the above-mentioned six research areas are pedagogy-oriented and related directly or indirectly to translator/interpreter education, it follows that tracing the history of translator/interpreter education research should cover the studies belonging to all these areas rather than the ‘translator/interpreter training experimentation’ ones only. Some forms of translator/interpreter education research appeared in the 1950s and developed gradually over the 1960s, 1970s, and 1980s. During the 1950s, 1960s, and 1970s, we can note a few empirical translation product and interpreting process studies. The earliest published training-oriented translation study is perhaps the one reported by Ervin and Bower (1952), who examined the problems and challenges arising in translating the items of internationally used questionnaires. In the same decade, Zuther (1959) conducted another training-oriented translation study, in which he looked at the problematic features in translating modern American dramas into German. Brislin (1969, 1970) reported another early training-oriented product work, in which he used five equivalence criteria to evaluate the translated and back-translated texts produced by 94 bilinguals representing ten languages. An early interpreting product study was also reported by Barik (1971) on the types of omissions, additions, and errors made by simultaneous interpreters. Another translator/interpreter education area that received early research attention is the interpreting process. The earliest academic research work dealing with this area is the MA study completed by Paneth (1957), who described the processes used by conference interpreters. As indicated in the title of her work, ‘An investigation into conference interpreting (with special reference to the training of interpreters)’, Paneth also provided implications for interpreter training. In her review of Paneth’s work, Hanna (1958) wrote:

[Paneth’s] study of consecutive and simultaneous interpretation is based on her own work as an interpreter, as well as upon her observation of the work of other interpreters in conferences held both in England and abroad. She has studied rather closely existing techniques both from the point of view of efficiency and style, as well as from that of the interpreter. Hers is therefore a study, not only of interpretation, but of the interpreter as well. (p. 35)

Influenced by advances in cognitive psychology and oral communication studies, a further development in translator/interpreter education research occurred in the late 1960s, which saw other interpreting process research works reported by Kade (1968), Barik (1969), and Pinter (1969). The 1970s also witnessed the publication of a number of empirical interpreting process research reports (e.g., Barik 1972, 1973; Gerver 1971, 1976; Moser 1978). Following these early developments, published interpreting process studies appeared regularly in the 1980s, and the area later gained the interest of a large number of researchers worldwide (for a review, see Gile 2000). Empirical translation process research, by contrast, emerged in the mid-1980s; early studies of this type include those reported by Krings (1986) and Gerloff (1988). The past two decades saw the publication of an increasing number of studies on the cognitive processes of translating and interpreting, along with some relevant volumes (e.g., Hansen-Schirra et al. 2017; Li et al. 2019; Schwieter and Ferreira 2017). All these works have important implications for translation and interpreting teaching. As for the translator/interpreter training experimentation research area, the early writings addressing it were mainly prescriptive, i.e. they simply told translator/interpreter trainers how to train their students, or told translators/interpreters how to develop their own competences. We can note this early prescriptive trend, for example, in the three papers published in Meta by Horn (1966), Schmit (1966), and Citroen (1966), titled ‘A college curriculum for the training of translators and interpreters in the USA’, ‘The self-taught translator: from rank amateur to respected professional’, and ‘Targets in translator training’, respectively. An early translator training experimentation study was published in the Modern Language Journal by Curtin et al. (1972), who experimented with teaching the translation of Russian by computer. It seems that translator/interpreter training effectiveness research written in English started to appear regularly only in the 1990s, at both the graduate study and international publication levels. The 1970s also saw the publication of what may be regarded as early translator/interpreter education programme evaluation reports; those written by Popovič (1975), Ferenczy (1977), and Wilss (1977) are examples of these few early attempts. In fact, these were not standardized reports providing an analysis of a set of data; rather, they offered reflections on and evaluations of translator/interpreter education practices in a particular context. For example, in his paper published in Meta, Wilss (1977) described the process of developing and delivering a translator/interpreter education programme in West Germany, together with its challenges. The early 1980s also witnessed the publication of an early translator/interpreter education policy report by Coveney (1982), in which he described and evaluated the status of translator and interpreter education and research practices at UK universities.
In many of the translation/interpreting learning and teaching evaluation studies reported in the 1980s, we can note a move toward a more standardized version of this research area, in which a data set was collected and analyzed. This can be noted particularly in the very few MA and Ph.D. studies addressing the area during the early 1980s (e.g., Díaz 1983; Foltz 1984; Robertson-Bates 1980). Research on translation/interpreting learning and teaching practices increased gradually during the 1980s and has developed further over the past three decades. The establishment of a number of translation and interpreting journals has contributed significantly to fostering research on the above-mentioned translator/interpreter education areas. Before the late 1980s, Babel and Meta (launched in 1955 and 1966, respectively) were the only available international journals with a main interest in publishing translator/interpreter education research in English. The late 1980s saw the establishment of two other international translation and interpreting journals: The Interpreters’ Newsletter (1988) and Target (1989). Three more were launched in the 1990s: Perspectives (1993), The Translator (1995), and Interpreting (1996), while the following seven journals appeared in the 2000s and 2010s: Across Languages and Cultures (2000), The Journal of Specialised Translation (2004), Translation and Interpreting Studies (2006), The Interpreter and Translator Trainer (2007), The International Journal of Interpreter Education (2009), Translation Spaces (2012), and Translation, Cognition & Behavior (2018). These 14 journals have published a large number of studies belonging to different translator/interpreter education research areas. Needless to say, the technological and communication revolution has greatly impacted this evolving translator/interpreter education research movement.

1.4 This Book

This book provides an overview of translator and interpreter education research areas. Specifically, it discusses the research methods and data sources used in each area, and overviews the research trends and topics dominant in it. Thus, the book can be regarded as a ‘what and how to research’ guide for those interested in researching the translator and interpreter education field. In other words, the book focuses on describing the areas and subareas of translator and interpreter education research, providing examples of the studies representing each, and explaining the methodological approaches used in them. The book is also an important resource for translator and interpreter trainers and for administrators of translator and interpreter education programmes, as it draws their attention to the issues and dimensions that need to be covered in such training and programmes, and to how they are researched or evaluated. Of the previously published books addressing translator and interpreter education, those focusing solely on relevant research issues are very few (e.g., Angelelli and Jacobson 2009; Li et al. 2019). The scope of this book is much wider, given that it covers research on six main areas pertinent to translator and interpreter education. As will be shown in the following chapters, the approach the book takes in covering these research areas is also different. Since the book covers these six areas in both the translator and the interpreter education research fields, we will be able to identify the progress made in translation and interpreting research within each area, and to see the similarities and differences in the research topics and issues addressed in each field. Following this introductory chapter, the six translator and interpreter education research areas explained above will be covered in Chaps. 2–7.
In writing these six chapters, the author has drawn mainly on the research published in the well-known international translation and interpreting journals and on the relevant books published internationally. Chapter 2 deals with training experimentation research; it discusses the types of training experimented with or proposed in this research, and cites examples showing the data sources and research methods used in each. Chapter 3 covers the research issues investigated in the studies concerned with translation and interpreting learning and teaching practices. Chapter 4 addresses the translation and interpreting assessment research area and its subareas. Chapter 5 is devoted to the methods and issues in the research on the cognitive processes involved in translation and interpreting. Chapter 6 deals with the issues related to researching translated and interpreted texts, and discusses the multiple textual features researched so far in translation and interpreting product studies. Professional experience research is covered in Chap. 7, which focuses on profiling research on professional translator and interpreter experiences, roles, and perceptions. Each of these six chapters ends with some conclusions about the status of the research subareas within the area, and the thematic and methodological issues noted in each. The concluding chapter (Chap. 8) provides some suggestions for advancing future translator and interpreter education research and for addressing the thematic, contextual, and methodological gaps in it. The book ends with a glossary of the key terms commonly used in the areas of translator and interpreter education research.

References

Abdel Latif, M.M.M. 2018. Towards a typology of pedagogy-oriented translation and interpreting research. The Interpreter and Translator Trainer 12 (3): 322–345. https://doi.org/10.1080/1750399X.2018.1502008.
Angelelli, C.V., and H.E. Jacobson. 2009. Testing and assessment in translation and interpreting studies: A call for dialogue between research and practice. Amsterdam: John Benjamins Publishing Company.
Baker, M. 2009. Introduction to the first edition. In Routledge encyclopedia of translation studies, ed. M. Baker and G. Saldanha, xiv–xix. London: Routledge.
Barik, H.C. 1969. A study of simultaneous interpretation. Unpublished doctoral dissertation, University of North Carolina.
Barik, H.C. 1971. A description of various types of omissions, additions and errors of translation encountered in simultaneous interpretation. Meta 16: 199.
Barik, H.C. 1972. Interpreters talk a lot, among other things. Babel 18 (1): 3–10.
Barik, H.C. 1973. Simultaneous interpretation: Temporal and quantitative data. Language and Speech 16 (3): 237–270.
Bernardini, S. 2004. The theory behind the practice: Translator training or translator education? In Translation in undergraduate degree programmes, ed. K. Malmkjær. Amsterdam and Philadelphia: Benjamins.
Brislin, R.W. 1969. Back-translation for cross-cultural research. Ph.D. dissertation, The Pennsylvania State University, USA.
Brislin, R.W. 1970. Back-translation for cross-cultural research. Journal of Cross-Cultural Psychology 1 (3): 185–216. https://doi.org/10.1177/135910457000100301.
Caminade, M., and A. Pym. 2001. Translator-training institutions. In Routledge encyclopedia of translation studies, ed. M. Baker, 280–285. London: Routledge.
Chesterman, A. 2009. The name and nature of translator studies. Hermes: Journal of Language and Communication Studies 42 (2): 13–22.
Citroen, I.J. 1966. Targets in translator training. Meta 11 (4): 139–144. https://doi.org/10.7202/002246ar.
Coveney, J. 1982. The training of translators and interpreters in the United Kingdom. Multilingua: Journal of Cross-Cultural and Interlanguage Communication 1 (1): 42–45.
Curtin, C., D. Clayton, and C. Finch. 1972. Teaching the translation of Russian by computer. Modern Language Journal 56: 354–360.
Díaz, C. 1983. Teaching translation in the Venezuelan context: General observations, theoretical considerations and exercises. MA thesis, University of Ottawa, Canada.
Ervin, S., and R.T. Bower. 1952. Translation problems in international surveys. Public Opinion Quarterly 16: 595–604. https://doi.org/10.1086/266421.
Ferenczy, G. 1977. Postgraduate short-term training of interpreters and translators at the faculty of liberal arts, Budapest University. Babel 23 (4): 181–182. https://doi.org/10.1075/babel.23.4.08fer.
Foltz, D. 1984. Translation training in Spanish in colleges and universities in Pennsylvania: A needs assessment. Ph.D. dissertation, University of Pittsburgh, USA.
Gerloff, P. 1988. From French to English: A look at the translation process in students, bilinguals, and professional translators. Unpublished dissertation, Harvard University, Cambridge, MA.
Gerver, D. 1971. Aspects of simultaneous interpretation and human information processing. D.Phil. thesis, Oxford University.
Gerver, D. 1976. Empirical studies of simultaneous interpretation: A review and a model. In Translation, ed. R. Brislin, 165–207. New York: Gardner Press.
Gile, D. 2000. The history of research into conference interpreting: A scientometric approach. Target 12 (2): 297–321. https://doi.org/10.1075/target.12.2.07gil.
Gile, D. 2009. Basic concepts and models for interpreter and translator training. Amsterdam: John Benjamins Publishing Company.
Hanna, B. 1958. Review of [Paneth, Eva, An investigation into conference interpreting (with special reference to the training of interpreters). Thesis for the degree of M.A. in Education, London University, April 1957. 160 p.]. Journal des traducteurs / Translators’ Journal 3 (1): 35–38. https://doi.org/10.7202/1061456ar.
Hansen-Schirra, S., O. Czulo, and S. Hofmann. 2017. Empirical modelling of translation and interpreting. Berlin: Language Science Press. https://doi.org/10.5281/zenodo.1090974.
Ho, G. 2004. Globalization and translation: Towards a paradigm shift in translation studies. Ph.D. dissertation, University of Auckland, New Zealand.
Holmes, J.S. 1972. The name and nature of translation studies. Amsterdam: Translation Studies Section, Department of General Studies. Reprinted in The translation studies reader, ed. L. Venuti, 180–192. New York: Routledge.
Horn, S.F. 1966. A college curriculum for the training of translators and interpreters in the USA. Meta 11 (4): 147–154. https://doi.org/10.7202/003130ar.
Kade, O. 1968. Zufall und Gesetzmässigkeit in der Übersetzung. Leipzig.
Kelly, D., and A. Martin. 2009. Training and education. In Routledge encyclopedia of translation studies, ed. M. Baker and G. Saldanha, 294–300. London: Routledge.
Kiraly, D. 2000. A social constructivist approach to translator education: Empowerment from theory to practice. Manchester: St. Jerome.
Krawutschke, P.W. 2008. Translator and interpreter training and foreign language pedagogy. Amsterdam: John Benjamins Publishing Company.
Krings, H.P. 1986. Was in den Köpfen von Übersetzern vorgeht: Eine empirische Untersuchung der Struktur des Übersetzungsprozesses an fortgeschrittenen Französischlernern. Tübingen: Narr.
Li, D., V.C. Lei, and Y. He. 2019. Researching cognitive processes of translation. Singapore: Springer.
Liu, Y., and W. Zhang. 2019. Unity in diversity: Mapping healthcare interpreting studies (2007–2017). Medical Education Online 24: 1–12. https://doi.org/10.1080/10872981.2019.1579559.
Moser, B. 1978. Simultaneous interpretation: A hypothetical model and its practical application. In Language interpretation and communication, ed. D. Gerver and H.W. Sinaiko, 353–368. New York: Plenum Press.
Munday, J. 2008. Introducing translation studies. London/New York: Routledge.
Paneth, E. 1957. An investigation into conference interpreting (with special reference to the training of interpreters). MA dissertation, University of London.
Pinter, I. 1969. Der Einfluß der Übung und Konzentration auf simultanes Sprechen und Hören. Unpublished doctoral dissertation, University of Vienna, Austria.
Pöchhacker, F. 2004. Introducing interpreting studies. London and New York: Routledge.
Pöchhacker, F. 2010. The role of research in interpreter education. Translation & Interpreting 2 (1): 1–10.
Popovič, A. 1975. Activities of Slovak translators and Slovak translation studies. Neohelicon 3 (3/4): 107–112.
Pym, A. 2011. Translator training. In The Oxford handbook of translation studies, ed. K. Malmkjær and K. Windle. Oxford: Oxford University Press. https://doi.org/10.1093/oxfordhb/9780199239306.013.0032.
Robertson-Bates, B. 1980. Translation in modern language courses and translation for professional training in Canada. MA thesis, University of Ottawa, Canada.
Sawyer, D.B. 2004. Fundamental aspects of interpreter education: Curriculum and assessment. Amsterdam/Philadelphia: John Benjamins.
Schmit, C. 1966. The self-taught translator: From rank amateur to respected professional. Meta 11 (4): 123–126. https://doi.org/10.7202/004045ar.
Schwieter, J.W., and A. Ferreira. 2017. The handbook of translation and cognition. Hoboken: Wiley Blackwell.
Tan, Z. 2008. Towards a whole-person translator education approach in translation teaching on university degree programmes. Meta 53 (3): 589–608. https://doi.org/10.7202/019241ar.
Tennent, M. 2005. Training for the new millennium: Pedagogies for translation and interpreting. Amsterdam: John Benjamins Publishing Company.
Tsagari, D., and R. van Deemter. 2013. Assessment issues in language translation and interpreting. Frankfurt: Peter Lang.
Vandepitte, S. 2008. Remapping translation studies: Towards a translation studies ontology. Meta 53 (3): 569–588. https://doi.org/10.7202/019240ar.
Vargas-Urpi, M. 2012. State of the art in community interpreting research: Mapping the main research topics. Babel 58 (1): 50–72. https://doi.org/10.1075/babel.58.1.04var.
Williams, J., and A. Chesterman. 2002. The map: A beginner’s guide to doing research in translation studies. Manchester: St. Jerome Publishing.
Wilss, W. 1977. Curricular planning. Meta 22 (2): 117–124. https://doi.org/10.7202/004611ar.
Yan, J., J. Pan, H. Wu, and Y. Wang. 2013. Mapping interpreting studies: The state of the field based on a database of nine major translation and interpreting journals (2000–2010). Perspectives: Studies in Translatology 21 (2): 446–473. https://doi.org/10.1080/0907676x.2012.746379.
Yan, J., J. Pan, and H. Wang. 2015. Studies on translator and interpreter training: A data-driven review of journal articles 2000–12. The Interpreter and Translator Trainer 9 (3): 263–286. https://doi.org/10.1080/1750399X.2015.1100397.
Zuther, G.H.W. 1959. Problems in translation: Modern American dramas in German. Ph.D. dissertation, Indiana University, USA.

Chapter 2

Translator/Interpreter Training Experimentation Research

Abstract This chapter discusses translator/interpreter training experimentation research. This research type is concerned with trying out a specific instructional technique and examining trainees’ responses to it. The chapter starts by discussing the methodological orientations in translator/interpreter training experimentation research. Following this, the author highlights six main types of training experimented with extensively in previous translation/interpreting research: technology-based training, process-based training, corpus-based training, profession-oriented training, project-based learning training, and research-oriented training. Additionally, the author refers to some miscellaneous training types and to prescriptive translator/interpreter training works. Exemplary studies representing all these training types are highlighted, with a particular focus on their research methodological aspects. The chapter ends by summarizing the research trends in this area and providing suggestions for future relevant research.

Keywords Translation research · Translator training · Interpreter training · Translator training effectiveness research · Prescriptive translator training · Project-based learning

© Springer Nature Singapore Pte Ltd. 2020
M. M. M. Abdel Latif, Translator and Interpreter Education Research, New Frontiers in Translation Studies, https://doi.org/10.1007/978-981-15-8550-0_2

2.1 Introduction: Methodological Approaches

Translator and interpreter training experimentation studies try out or suggest a specific instructional technique or a type of training syllabus, and investigate or show how it influences trainees’ performance or the way they respond to it. Thus, these studies aim to improve translator and interpreter performance directly by experimenting with or proposing a given instructional treatment. Other names used for this type of study in the general educational field include ‘effectiveness research’ (Felix 2005, 2008) and ‘interventional research’. This chapter highlights the research methods and data sources used in these studies, and the types of training approaches experimented with so far. As the readers will note in the following section, various research methods and data sources have been used in previous translator and interpreter training
experimentation studies. Methodologically speaking, all these studies can be categorized into three groups: (a) post-training assessment studies, relying on assessing trainees’ performance after the training; (b) action research studies, in which the teacher systematically observes and assesses the impact of the training at several stages of the study; and (c) pre–post training assessment studies, testing trainees’ performance before and after the instructional experiment. Post-training assessment and action research studies have been much more dominant than pre–post training assessment ones in previous translator and interpreter training experimentation research. As will be seen, some post-training assessment studies depended on quantitative data sources only (e.g., López-Garcia and Rodríguez-Inés 2019), while others relied on a mixed-methods design involving both quantitative and qualitative data sources (e.g., Galán-Mañas and Albir 2010). Action research is viewed as an ideal methodology for translator and interpreter training experimentation research (Hatim 2013). It is an empirical process through which the teacher systematically observes and assesses the outcome of introducing a novel instructional treatment to students (Burns and Westmacott 2018). With respect to translation and interpreting studies, Hatim (2013) defines action research as follows:

Action research (i.e. practitioner reflective research) … operates in the context of practical problems which affect all those involved. In the present context, problem solving will be aimed at as a means of upgrading the general quality of performance among novice and professional translators, and among learners and teachers of translation. This is basically achieved by linking knowledge and expertise available ‘out there’ with the practical experience which researchers bring to the task. (p. 201)

Action research involves four iterative stages: (a) planning a course of action; (b) implementing the plan; (c) observing the outcome; and (d) reflecting on the research conducted (Kemmis and McTaggart 1988). Hatim (2013) provided a more detailed framework of action research, viewing it as encompassing the following steps: identifying a problem, investigating it, understanding the problem by evaluating and consolidating the data, identifying the potential causes and their solutions, predicting the outcomes of a planned action, selecting a plan and putting it into action, and evaluating the action. Among previous translator and interpreter training experimentation studies drawing on action research, some depended solely on qualitative data (e.g., Haro-Soler and Kiraly 2019), while others used a mixed-methods design (e.g., Pan 2016). In addition, some action research studies depended on data triangulation (e.g., Sachtleben and Heather 2011), which refers to using more than one data source to study the same variable(s). Different types of pre–post training assessment treatments have been used in previous translator and interpreter training experimentation studies: (a) one-experimental-group treatments (e.g., Ko 2008; Maddux 2018); (b) experimental-versus-control/comparison-group treatments (e.g., Moreno et al. 2011); (c) two-experimental-versus-one-control/comparison-group treatments (e.g., Yenkimaleki and Van Heuven 2019); and (d) three-experimental-group treatments (e.g., Angelone 2013).


In addition to the post-training assessment, the formative assessment used in action research studies, and the pre-/post-testing implemented in quasi-experimental research, some other assessments have been used in a few studies, including: (a) pre-diagnostic assessment; (b) training or instructional material piloting assessment; and (c) delayed post-assessment. Pre-diagnostic assessment was used, for example, in Galán-Mañas and Albir’s (2010) study to identify trainees’ learning needs and prior knowledge. According to Galán-Mañas and Hurtado Albir (2015), pre-diagnostic assessment enables trainers to identify realistic learning objectives appropriate to trainees’ characteristics and their knowledge and skill levels. Some studies (e.g., Hunt-Gómez and Moreno 2015; López-Garcia and Rodríguez-Inés 2019) assessed the suitability of the training materials used by piloting them and having participants in the piloting experiment respond to an assessment tool (normally a questionnaire) to identify the materials’ weaknesses and strengths. This piloting assessment is usually followed by modifying the training materials in light of the responses obtained from participants. The study reported by Schrijver et al. (2016) provides an example of using a delayed post-assessment: they first conducted an immediate post-test, and their participants then completed a delayed post-test three weeks after the training. It is worth mentioning that delayed post-assessment is not very common in translator and interpreter training experimentation studies. Assessment in translator and interpreter training experimentation studies can also be categorized in terms of who performs it and the data sources used.
With regard to who performs the assessment, Galán-Mañas and Hurtado Albir (2015) state that it can be categorized into three types: self-assessment, peer assessment, and hetero-assessment, the last normally performed by someone with greater knowledge or skill than the trainees (e.g., the trainer or teacher). As for the data sources used in such studies, Galán-Mañas and Hurtado Albir (2015) listed performance tasks, questionnaires, reflective diaries, written reports, process recordings, and rubrics. We can also note the use of other data sources, including interviews, teacher logs/journals, and teacher observation. In the following section and its subsections, examples of training experimentation studies representing these different methodological approaches, assessment procedures, and data sources will be presented. Readers will note how these studies were conducted, which training type was provided, how long the training lasted, and what data sources were used to evaluate the training. This reviewing approach aims to help readers develop a deeper understanding of how to conduct translator and interpreter training experimentation studies.

2.2 Main Types of Training Approaches

The translator and interpreter training experimentation studies published in the past two decades have experimented with various instructional techniques to improve trainees' translation or interpreting performance. Regardless of the research methods used,
these studies can be grouped into the following six categories depending on the training approaches they have followed: technology-based training, process-based training, corpus-based training, profession-oriented training, project-based learning training, and research-oriented training. In addition to these six categories, there are some other miscellaneous training types. These categories are explained and discussed in the following subsections.

2.2.1 Technology-Based Training

Technology-based training refers here to the type of instruction that aims to help trainees make use of a technological environment or digital tool to improve their translation or interpreting competences or complete related tasks efficiently. Overall, the studies using this training type seek to demonstrate the value of technological tools and environments in fostering translators' and interpreters' performance. In translator training, Galán-Mañas and Hurtado Albir (2010) tested the effectiveness of using a blended learning mode in teaching two translation units. Prior to structuring the two units, they conducted a diagnostic assessment to identify the trainee students' knowledge and skill needs. Their research was carried out in two phases: (a) a pilot phase in which they tested the design of the different tasks, activities, and materials of the two units; and (b) an experimental phase in which the data was collected and the effectiveness of the blended learning environment was tested. The effectiveness of the two units was measured after the training using a student competence self-assessment questionnaire, a teaching assessment questionnaire, the teachers' reflective diary, and the students' marks on the translation learning tasks. The study reported by López-Garcia and Rodríguez-Inés (2019) is an example of technology-oriented training in the audiovisual translation field. The two researchers described a teaching unit in which they used a corpus-based script analysis method to enable trainees to cope with the challenges of audiovisual translation. This teaching unit was piloted twice (online and on-site), and its pilot experimentation was evaluated by getting the students to respond to three questionnaires assessing their evaluation of the unit and its content. In light of the students' evaluations of the pilot versions of the unit, some changes were made to its structure and tasks.
After implementing the unit in five 2-h sessions, the two researchers assessed its efficacy using a questionnaire tapping the students' perceived ability to apply the skills and knowledge acquired. Notably, more published interpreting studies have made use of technology-based training. For example, Hansen and Shlesinger (2007) used digital tools and equipment to enhance the teaching of consecutive interpreting in Copenhagen. Ko and Chen (2011) reported a study comparing the teaching of sight translation (the interpreting of a written text) via a synchronous cyber classroom with traditional face-to-face teaching. In another study, Ko (2008) experimented with teaching liaison interpreting (i.e., dialogue and consecutive interpreting and sight translation) via a distance mode as compared to face-to-face teaching. The training provided in
this study lasted for 13 weekly 3-h sessions (i.e., 39 h). The data of the study was collected through a pre-training test, final exam scores, an interpreting test, student and teacher weekly diaries, and an end-of-programme questionnaire and interview. Roush (2010) also reported a pilot study testing prototype annotation software features used for American Sign Language (ASL) and English interpreting. In this study, the students used the annotation software during a 3-day seminar and then provided their evaluative comments on it. Moreno et al. (2011), in turn, used web-based training to improve the interpreting skills of dual-role interpreters, i.e., bilingual administrative or clinical staff members who have to perform interpreting roles. They used a pre–post-training design with two groups (an experimental or intervention group and a control or comparison one). The pre- and post-assessments focused on measuring interpreting knowledge and self-confidence. The participants in the intervention group were assigned five modules to complete at their own individual pace. Using the interactive learning tools of the web-based environment, the intervention group participants studied a number of issues, including communication skills and cultural competence in interpreting, medical vocabulary, and the interpreter's roles, boundaries, and responsibilities when acting as a member of the patient–provider–interpreter triad. The control group, by contrast, did not receive this web-based instruction and just took the pre- and post-assessments. A final but growing trend in technology-based training relates to the impact of using machine translation applications and computer-assisted translation (CAT) tools on developing students' performance. Rodríguez-Castro (2018) described the implementation and learning outcomes of a graduate CAT-based course aimed at developing the technical expertise of translation students.
The students’ responses to the course were assessed using five CAT tasks and a brief online questionnaire. In June 2018, a conference titled ‘Google Translate & Modern Languages Education’ was held at the University of Nottingham. In this conference, some papers were presented on using Google Translate in developing students’ translation performance (e.g., Lee and Sun 2018). As will be noted in the following chapters, however, most empirical studies addressing the use of machine translation focused mainly on its actual uses by translation students or professionals.

2.2.2 Process-Based Training

Process-based training studies focus primarily on developing translator/interpreter performance by raising trainees' awareness of effective translation/interpreting processes or improving some aspects of such processes. In some of these studies, process-oriented training aimed at enhancing product features (e.g., reducing translation errors), while in others it focused on process features. As will be noted in the paragraphs below, process-oriented training is more common in translation studies than in interpreting ones. A main trend in translation process-oriented training studies is making use of students' reflections to raise their awareness and foster their evaluation of
such processes. Some studies trained students in reflecting on their translation processes as a whole. In a German–Spanish general translation course, Fernández and Zabalbeascoa (2012) attempted to help undergraduate students develop their strategic sub-competence through pre-translation, post-translation, and end-module metacognitive questionnaires. The pre-translation questionnaires, given to the students prior to performing each translation task, were structured in a way that helped them plan for the pragmatic features of source and target texts, and anticipate translation problems and their solutions. The post-translation questionnaires, completed after finishing each translation task, aimed to help students evaluate their translation processes by answering questions on the strategies they used in translating the text, the translation problems they encountered, and how they solved them. During the course, the students completed five tasks, each accompanied by these pre- and post-translation questionnaires. The end-module questionnaire focused on getting the students to evaluate their learning. Fernández and Zabalbeascoa found that these metacognitive questionnaires promoted the students' ability to evaluate their translation performance and identify their translation problems. Norberg (2014) tried to foster undergraduate students' understanding of their translation processes and problems by getting them to write self-reflections using retrospective and guided commentaries. At Stockholm University, six students had to submit a number of translation assignments with reflective commentaries over two consecutive terms. Through these commentaries, which were guided by the teacher's instructions, Norberg attempted to provide optimally designed instruction suited to the students' declarative and procedural knowledge levels and to stimulate their self-reflection and self-awareness.
Norberg noted that the students provided more systematic and evaluative descriptions of their translation processes in the second-term commentaries as compared to the first-term ones, which focused on superficial aspects of such processes. Latorraca (2018), in turn, used the think-aloud method as an observational learning tool in translation instruction. Twenty-one students in an Italian translation class attended visual think-aloud translation sessions and then performed think-aloud translation tasks. Latorraca assessed the students' self-evaluation of competences and awareness of translation processes using pre- and post-test questionnaires. Another main trend in translation process-oriented training studies is developing translator performance by focusing on trainees' revisions only. For example, Angelone (2013) compared the effectiveness of three translation process tools in reducing the errors in students' revised translated texts. The three revision tools used by the students in Angelone's study were: (a) problem and decision reporting logs, in which students report the translation problems encountered and the strategies used to solve them; (b) recorded verbalizations; and (c) screen recordings. Angelone found that screen recording was the most effective method in reducing translation errors. In the Belgian context, Robert et al. (2018) examined how training in translation revision and editing influences graduate students' fairness and tolerance when revising others' translated texts. They assessed these differences using two revision tasks and one questionnaire in each of the pre- and post-testing conditions. The experimental group
students attended a 13-week course in translation revision and editing, whereas the control group students completed the pre- and post-test assessments without taking this course. Apart from these two main trends, other experimented instructional techniques are rare in translation process-oriented training studies. One of these rare training techniques is given in the study reported by Schrijver et al. (2016), who examined the effectiveness of writing training on students’ translation processes and products. Specifically, they trained the students in ‘transediting’ which means rewriting or re-ordering source text for communication and message transmission purposes. Over six weeks, nine students in the experimental group received writing training in editing and composing instructive texts which required the use of transediting. As for the control group students, they received placebo training through which they were allowed to interact with the texts but without having experiences leading to fostering their writing skills. Schrijver et al. tested the differences between the performance of the students in the experimental and control groups using a pre-test (one task), an immediate post-test (two tasks), and a delayed post-test (one task) conducted three weeks after the training. All the four translation tasks in these tests required the students in both groups to translate Spanish instruction manuals into Dutch (their mother tongue). On the other hand, not many interpreting studies seem to have incorporated process-training. Due to the nature of the interpreting process, it is perhaps difficult to teach many of its strategies explicitly. Empirical studies indicate the teachability of a few interpreting strategies, including note-taking, source attribution, and nonverbal communication. Chmiel (2012) investigated the effect of training interpreting students in note-taking. 
The students in this study attended a note-taking course. After completing it, they were asked to consecutively interpret a 10-min presentation into their B language, submit the notes they took during the task, and complete a follow-up questionnaire about the mnemonics they used to remember specific excerpts and their use of particular note-taking symbols. The students also completed a course evaluation questionnaire. In another study, Krystallidou (2014) reported incorporating a 2-day training course on interpreting in healthcare settings to raise trainee students' awareness of non-verbal patient-centered communication, specifically the participants' gaze direction and body orientation. In her training, Krystallidou relied on exposing her trainees to authentic videotaped interpreter-mediated interactions in medical settings and getting them to role-play the transcripts of these interactions. More recently, Maddux (2018) trained American Sign Language students in source attribution, i.e., identifying the source or initiator of an utterance. She used a mixed-methods design relying on data drawn from pre- and post-tests, instructor journals, a post-questionnaire, and instructor interviews. In another recent study, Dong et al. (2019) investigated how training can influence students' consecutive interpreting strategy use. They first identified 22 interpreting strategies through reviewing the literature. Then they compared this strategy framework to the strategies used by 66 students attending an interpreting training programme at two training stages (during the 2nd month and at the end of training in the 10th month). They
examined the students' use of interpreting strategies by assessing their performance on a B-to-A consecutive interpreting test at each stage. The test was also followed by a retrospection session and an interview with each student. As noted, the participants in all the above-reviewed process-training studies are translation and interpreting students rather than professionals. This may be because it is difficult to involve professional translators and interpreters in such process awareness-raising interventions. Clearly, translation and interpreting students have a more pressing need to be aware of these processes.

2.2.3 Corpus-Based Training

The main theme in corpus-based or text-based training studies is exposing trainees to a group of texts and getting them to observe and discuss particular features in them as an approach to improving their translation or interpreting. The corpora used in this training type are mostly translated and interpreted texts produced by professional translators or interpreters, or—in very exceptional cases—produced by trainees, i.e., learner corpora. As will be noted in the studies highlighted below, researchers can depend on monolingual corpora or on comparable corpora contrasting linguistic features in two languages. It will also be noted that the following corpus-based training studies are similar to process-oriented ones in involving only student trainees rather than professionals. Laursen and Pellón (2012) showed how they used comparable corpora and concordance software as an effective tool for teaching specialized Spanish–Danish translation. They first enabled the student participants to access three Spanish specialized corpora after familiarizing them with their use. The participants practiced using the three corpora in class for revising their own draft translations and editing others' translations. The two researchers found that the use of corpora enabled the learners to solve the problems in the draft translations. Kim's (2007) study, in turn, is an example of corpus-based research relying on trainees' translated texts. Kim used text analysis based on systemic functional linguistic theory as an instructional tool in translation teaching. The tool was used to explain the types of translation errors noted in students' translations. Kim taught these text analysis-based activities to a number of Korean students and evaluated their training experiences using a questionnaire and learning journals. There is a notable increase in the number of interpreting studies making use of corpus-based training.
Some of these studies, such as those reported by Sachtleben and Heather (2011), Crezee and Grant (2013), and Yenkimaleki and van Heuven (2018, 2019), targeted raising student interpreters' awareness of particular linguistic, discoursal, and prosodic features. In their action research study, Sachtleben and Heather (2011) provided a group of interpreting students at Auckland University of Technology in New Zealand with naturalistic, semi-authentic discourse samples in the classroom to raise their explicit awareness of conversational English pragmatic features. The training lasted for eight weeks, in which the
students listened to the recorded discourse samples, and analyzed and discussed the pragmatic features in them (e.g., word stress and intonation, politeness strategies, and purposes of speech acts). The students were also referred to online glossaries of pragmatic terms and features. This study depended on a number of data sources in assessing the training's effectiveness, including students' bi-weekly reflective blogs, two student surveys, and the teacher's weekly log. Crezee and Grant (2013), in turn, implemented corpus-based training over three 12-week semesters to improve student interpreters' performance, idiom recognition, and cross-cultural and pragmatic awareness. They depended on a recorded collection of television programmes featuring idiomatic expressions used in real-life, contextualized spoken language. Their interpreting students watched these programmes and then practiced paraphrasing the meanings of the idiomatic expressions used in them. The study also examined the students' ability to recognize idiomatic expressions and their approaches to interpreting idioms. As for Yenkimaleki and van Heuven (2018, 2019), they reported two studies on the effect of training undergraduate students in prosody awareness on their interpreting performance. Two groups of students took part in the first study (Yenkimaleki and van Heuven 2018), where one group received prosody awareness training and the other served as a control group. Yenkimaleki and van Heuven (2019) then investigated the effect of explicit versus implicit prosody training on the quality of interpreter trainees' consecutive interpreting performance: explicit prosody instruction was provided to one experimental group, another group received implicit instruction, and a third group served as the control and received routine instruction.
In the two studies, Yenkimaleki and van Heuven used a pre-/post-test design and assessed their participants' interpreting performance by rating features such as accuracy, omission, overall coherence, accentedness, pace, and voice. Corpus-based training has been found particularly beneficial to court and healthcare interpreting students. Hunt-Gómez and Moreno (2015) reported an empirical study on developing reality-based audiovisual materials designed for court interpreter training in Spain. While developing these materials, the two researchers considered interpreting students' skills and needs. They assessed the suitability of the materials using a questionnaire administered to 127 students at three Spanish universities. A new corpus-based trend in interpreter training studies is the use of the Conversation Analytic Role-Play Method (CARM), through which trainees listen to authentic or real-life interpreter-mediated interactions and, at specific points of this listening experience, are asked to reflect on the next reaction prior to listening to the next part (Stokoe 2011; Niemants and Stokoe 2017). Dal Fovo (2018) reported a study in which she got her students to observe real-life interactions in a 20-recording corpus in a healthcare interpreting course. Having verified the validity of this method in her study, Dal Fovo concluded that it is a useful training tool for teaching interpreter-mediated healthcare interactions. Another trend in corpus-based training studies is developing translators' and interpreters' skills by enhancing their domain-specific knowledge. This idea is specifically appropriate to the area of technical translation and interpreting. For
example, Sharkas (2013) examined the hypothesis that scientific translators' performance and product accuracy can be enhanced by subject-knowledge reading, i.e., reading articles relevant to the source texts. Sharkas involved an experimental group and a control group in her study. Two days prior to completing the translation task, the experimental group students were given an article related to the source text and were instructed to read it in preparation for the task, while the control group students had to translate the scientific text while accessing dictionaries—like the experimental group students—but without reading the relevant article beforehand. She assessed the effectiveness of this training idea by using a scale asking the students to rate the difficulty of the task performed in the two conditions, and by examining the addition, omission, and substitution errors of technical terms in their translations. Ilynska et al. (2017) also called for using popular language-for-specific-purposes texts in technical translation classes so as to raise students' background knowledge and awareness of the expressive resources in the target genre. They experimented with their training idea with a group of MA technical translation students in Latvia. Their reported case study provided an instructional model which included text reading, analysis, and translation tasks. A similar study was conducted in interpreter training by Fantinuoli (2018), who tried to improve interpreters' performance by exposing them to domain-specific corpora, i.e., texts representing the language of a particular area, given that such corpora are similar to the reference materials interpreters use to prepare for interpreting tasks.
Thus, Fantinuoli developed two comparable monolingual corpora on one scientific topic (biogas) and assessed their suitability by asking 30 interpreting students to evaluate 10 texts taken randomly from each corpus in terms of their relevance to the topic and their appropriateness as reference and preparatory materials. With the developments brought by technology to corpus linguistics research, it is expected that an increasing number of translation and interpreting studies will make use of corpus-based training. It is noteworthy that implementing corpus-based training seems to be a more effortful process in interpreting studies. In some of the above interpreting studies (e.g., Sachtleben and Heather 2011), implementing this training type required a number of pre-training steps for preparing corpus samples or materials, including recording and transcribing them. As also shown, some interpreting studies (e.g., Hunt-Gómez and Moreno 2015) assessed the suitability of these corpus-based interpreting materials with samples representing the target student populations.

2.2.4 Profession-Oriented Training

Profession-oriented training studies aim at raising trainees' awareness of their future career conditions and requirements. This training type is viewed as a way of empowering students, helping them make a smoother transition from their studies to the workplace (Abdallah 2011), and enabling them to adapt easily to the professional community (Pan
2016). Two main types of profession-oriented training can be noted in translation and interpreting studies: professional awareness training and situated learning training. Not many professional awareness training studies have been conducted, and these few seem to have been limited to the translator training field. For example, Abdallah (2011) reported a study in which translation professionals were invited to share their work and career experiences with MA translation students at a Finnish university by talking about the ethical dilemmas encountered in their work and the ways of solving or coping with them. Abdallah assessed 24 students' evaluations of and reflections on this instructional experience by examining their learning diaries and 154 of their online forum discussions. This data was organized into a number of categories concerned with ethical dimensions, including: pricing one's work, acting as a successful micro-entrepreneur, avoiding moral hazards, surviving in production networks, and professional ethics versus the common good and business ethics. Galán-Mañas (2019), in turn, used professional portfolios in translator training as a formative and summative assessment tool. The portfolio she used was based on a tutorial action plan covering translation and interpreting professionalization and the labor market. The students were required to structure their professional portfolios by including (a) a cover letter, (b) a CV, (c) samples of their translations, interpreting tasks, and text revisions, and (d) the estimated pricing rates of their translation and interpreting services. The students were provided with a rubric for assessing their portfolios, which were also evaluated using set assessment criteria. Other studies have approached the professional dimensions in translator and interpreter training through situated learning.
According to González-Davies and Enríquez-Raído (2016), situated learning is: [A] context-dependent approach to translator and interpreter training under which learners are exposed to real-life and/or highly simulated work environments and tasks, both inside and outside the classroom. Under this approach, it is the tasks and real-life professional demands, as well as other contextual factors such as institutional, social, geographical, or community beliefs and customs, rather than a predetermined closed syllabus, that drive curricular design. Ultimately, situated learning seeks to enhance learners’ capacity to think and act like professionals. (p. 1)

The situated learning training experimented with in previous studies is either experiential or simulated. The two studies reported by Chouc and Conde (2016) and Olalla-Soler (2019) made use of the experiential type. Chouc and Conde (2016) took graduate interpreting students to the Scottish Parliament to observe the real-life work environment of professional interpreters and to have dummy practice in interpreting booths. Prior to performing the interpreting booth activities, 13 students observed a live session in the Scottish Parliament from the viewing gallery. A few weeks later, these students practiced mute-booth simultaneous interpreting in three live Scottish parliamentary sessions (about 2.5–3 h). Chouc and Conde assessed the impact of this situated learning experience using questionnaires and in-depth interviews before and after it. More recently, Olalla-Soler (2019) evaluated the experiences a group of translation and interpreting students had as a result of spending one working day with a professional mentor. During this day, each student performed several tasks assigned by their mentor, including translation,
interpreting, proofreading, and other tasks related to work ethics, terminology management, marketing, and using CAT tools. Olalla-Soler evaluated these professional experiences using two questionnaires, one for the students and another for the mentors; both questionnaires focused on exploring the problems occurring during the mentoring programme. As for simulated situated learning training, it was used in the interpreting studies reported by Li (2015), Pan (2016), and Chouc and Conde (2018), and was also experimented with in a translation training study conducted by Konttinen et al. (2019). One form of this simulated training is the mock conference, used in Li's (2015) study. In the activities described by Li, the students practiced consecutive interpreting in mock conferences where they were organized in groups of four to seven. Li evaluated his 15 student participants' mock conference experiences using an online questionnaire. Instead of getting simultaneous interpreting students to work with professionals or learn about their work experiences, Pan (2016) relied on engaging them in simulated simultaneous interpreting activities. The first half of the course was allocated to building the students' simultaneous interpreting knowledge and skills, and they practiced these simulated activities in the second half. The students' learning and experiences were evaluated through the assessment activities used in the course and their responses to questionnaires and focus group interviews. Chouc and Conde (2018) also examined the benefits of engaging students in simulated relay conference interpreting activities. Relay or indirect interpreting is interpreting from one language into another via a third language (Shlesinger 2010). In this study, the students practiced relay conference interpreting on a weekly basis, and their responses to the training were assessed using questionnaires and in-depth interviews.
In the deaf interpreting field, Lai (2018) trained 20 deaf learners to help them acquire professional interpreting skills and knowledge. The training provided in Lai's study can generally be regarded as a simulated type. It focused on helping them prepare for interpreting, manage visual communication, and follow ethical standards in their interpreting practices. The impact of this training was assessed using a number of data sources, including video-recorded activities, simulated interpreting assignments, site visits, and a test in interpreting ethics. Finally, Konttinen et al. (2019) investigated the impact of working in a simulated translation company learning environment on the development of translation students' workflow conceptions. They compared the essays written by the trainees before and after the translation company simulation course. It can be noted that such awareness-raising or short-term profession-oriented training is yet to be explored in the audiovisual translation field. Conversely, this field has been the main focus of some project-based learning studies implementing longer-term training, as will be explained in the next subsection.

2.2 Main Types of Training Approaches


2.2.5 Project-Based Learning Training

Though training based on project-based learning is similar to simulated situated learning training in one way or another, it differs from it in being a more structured approach implemented over a longer time. Project-based learning is simply learning by doing (Markham et al. 2003). Blumenfeld et al. (1991) define project-based learning as a comprehensive teaching approach that depends on engaging students in pursuing solutions to problems by raising questions, discussing ideas, planning and experimenting with solutions, collecting data, drawing conclusions, reflecting upon experiences, and communicating and reporting findings to others. A central theme in the project-based learning implemented in some translation training studies is getting students to work on authentic translation projects with real clients (Kiraly 2005; Li et al. 2015). Maruenda-Bataller and José (2016) suggest implementing project-based learning in translator training by organizing students into groups of five and assigning a role to each group member: for example, a project manager, a terminology specialist, a translator, and an editor. Li et al. (2015) point out that all versions of project-based learning share some common characteristics, including adopting a long-term learner-centered approach, giving students the opportunity to plan their learning, and getting them to work collaboratively in groups, coordinate their efforts, and reflect upon their learning. In the exemplary studies reviewed below, readers can clearly note these common characteristics. Two project-based learning training studies focused on audiovisual translation. Kiraly's (2005) study is perhaps the earliest published project-based learning study in translator training. In this study, 14 students completed a subtitling project in 16 weeks. The project started with two 90-minute workshop sessions in which Kiraly and his students learned the basics of subtitling.
The students worked in pairs on preparing the parts of the subtitles. The students' subtitles were peer-reviewed and were subjected to a full-group review at the end of the project. Though Kiraly did not clearly explain the data source(s) he used in evaluating the students' experiences, the report of the project indicates he relied on participant observation. Huertas Barros and Vine (2019) reported another audiovisual translation study in which 21 undergraduate students worked collaboratively on a transcreation project, i.e., translating advertising materials for an audience in an environment with different cultural and/or linguistic characteristics. The transcreation project was designed in collaboration with a group of trainers and freelance translators, a transcreation company, and a copy editor. It was divided into the following six stages: initial analysis of the transcreation project, selecting and analyzing the source product material, conducting a target market analysis, developing a creative brief, transcreating the promotional material, and giving a group presentation covering all the completed stages of the project. Huertas Barros and Vine assessed their participating students' project-based experiences using a two-part survey collecting qualitative and quantitative data. Project-based learning has also been used in studies dealing with technical and specialized translation. In the study reported by Galán-Mañas (2011), the students
worked on technical translation projects. The documents translated were taken from a foreign multinational robotics company. The project was divided into several stages, and the students were actively involved in planning each stage and setting out its guidelines. The students' responses to the project-based learning experiences were gauged through a self-evaluation form, a co-evaluation of group members, co-evaluation across groups, teacher evaluation, and the project final reports. In a technical Galician–English translation course, González and Veiga Díaz (2015) involved nine students in translating the publishing guidelines of scientific journals. Their aim was to engage the students in a multi-task translation process, which required them to identify the various stages of the project, distribute translation tasks efficiently, and coordinate their work autonomously in a way that ensured the quality of their project product. The nine students completed the project in five weeks. They were organized in groups of three members, and during the project they received peer and teacher feedback. The students' learning experiences were assessed using the students' group and individual reports, and two questionnaires; the first dealt with their documentation strategies and the sources used while performing the tasks, and the second focused on the students' reflections on their learning experiences and the competences acquired from the project. In an English–Persian translation course, Moghaddas and Khoshsaligheh (2019) implemented a project-based translation treatment with 21 university students. The project was implemented in a full two-credit course over 16 sessions. The students were divided into eight groups, who worked on translating three academic texts, a book chapter on computer science, and a tourism brochure.
The two researchers collected their data through focus group interviews, participant observations and audio-recordings, a critical thinking scale, and a questionnaire measuring team skills. It is noteworthy that the four studies reviewed above used translation-oriented projects whose purpose is to enable students to acquire professional translation skills. Li et al. (2015) referred to another type, research-oriented project-based learning, which according to them aims at helping students acquire non-translational skills that indirectly influence their development as future translators. Since Li et al.'s (2015) study mainly used project-based learning to develop students' research skills, it is highlighted in the next subsection, which covers this training type.

2.2.6 Research-Oriented Training

At present, there is a growing global trend toward using research-based teaching in higher education institutions. This teaching type has been viewed as a main indicator of the quality of university education (Mägi and Beerkens 2016) and an effective approach to deepening students' active learning (Ozay 2012). A term that has been commonly used when discussing the integration of research into teaching is the 'research-teaching nexus', which refers to making use of discipline-based research to inform course content and develop students' research knowledge and capacity
(Holbrook and Devonshire 2005; McLean and Barker 2004). With regard to translator and interpreter training, there have been some calls for developing students' research skills in the two fields (see Pym 2013; Vandepitte 2013). Despite these calls and the growing status of translation and interpreting research in general, only a few studies have focused on translator and interpreter research-oriented training. Li et al. (2015) conducted their project-based learning study in an undergraduate translation programme at the University of Macau. Ten groups of students (n = 42) worked on an assignment that required each group to select a topic related to business translation in Macau and to pursue relevant literature and sources. The students completed the project in five stages: preparing for the project, piloting it, carrying out the research process, reporting the findings, and summarizing them. Each group had to submit a 2000-word report on their project in which they were supposed to delineate its purpose and importance, the questions it answered, the data and materials used to answer the questions, the main findings, and how these findings contributed to understanding translation. To evaluate students' learning, Li and his colleagues relied on students' reflective journals, a questionnaire, and two focus group interviews. The two studies reported by Risku (2016) and Haro-Soler and Kiraly (2019) addressed translator training at the graduate level. Risku (2016) conducted an action research study to develop MA students' research skills using a seminar course. In this study, the students were first introduced to scientific research in general and to their research project. In the later classes, they gave literature-based presentations on a topic related to the project. This was followed by writing a short empirical research paper, an activity aimed at helping them acquire relevant knowledge and develop their academic writing and research skills.
Risku followed a qualitative approach to assessing the impact of this training experience, using semi-structured interviews with six of the 20 students who took part in the research project. In their recent study, Haro-Soler and Kiraly (2019) used participatory action research to develop graduate students' research skills and their self-concept as researchers. Their study involved four students in a seminar research project. Throughout the seminar sessions, the students worked on reviewing, revising, and validating a draft questionnaire about the translator's self-concept. Haro-Soler and Kiraly collected their study data using focus-group interviews, which focused on evaluating the students' perceptions of the training provided, their roles and the teachers' roles, and the potential contribution of these experiences to their research and translation knowledge. As implied, research-oriented training has not yet been given due attention in translator and interpreter education research. For students attending undergraduate translation programmes, literature review projects, as shown in Li et al.'s (2015) study, can be appropriate research activities. Graduate translator and interpreter education programmes are a particularly rich environment for this training type. Concurring with Vandepitte's (2013) view that all research stages require specific research competences, there is a need to make graduate translation and interpreting students aware of the competences required in each research stage, including selecting the research topic, developing research questions and hypotheses, reviewing the literature, selecting and developing research instruments, collecting and analyzing data, reporting the study, and publishing research. Therefore, researchers interested in
translator and interpreter research-oriented training can investigate the process of developing graduate students' awareness of how to complete each of these stages. Since it allows trainers to conduct research in their own classrooms and develop a deeper understanding of the changes brought about by certain teaching practices (Richards 1996), action research is viewed as a particularly appropriate method for research-oriented training.

2.2.7 Other Miscellaneous Training Types

In addition to the above six categories, some studies reported using other miscellaneous translator and interpreter training techniques. Each of these techniques has been used in a limited number of studies, and thus they do not represent a research trend in translator and interpreter training research. Examples of these miscellaneous training techniques include self-assessment, self-directed learning, theatrical training, and translation speed training. In Cho and Roger's (2010) study, theatrical training was used to develop interpreting students' empathic responses, non-verbal communication, and organized emotions. The two researchers evaluated the experiment by assessing the students' interpreting performance and experiences before and after the 7-week training. Self-assessment training involves getting learners to self-evaluate their performance and take a more autonomous role in their learning so as to identify what they need to improve in future similar translation or interpreting tasks. In order for learners' self-evaluation of their translated or interpreted texts to be effective, it has to be guided by an assessment sheet that includes some evaluation criteria (Bartłomiejczyk 2007). Russo's (1995) study is perhaps the earliest one to introduce self-assessment in interpreting training. It is worth mentioning, however, that Russo did not involve trainees in a standardized reflective self-assessment encompassing the use of an assessment sheet to guide their self-evaluation reflections. The same approach was also followed by Bartłomiejczyk (2007), who investigated trainee simultaneous interpreters' self-evaluation of their performance on a number of interpreting tasks but did not find significant effects for the self-assessment training provided. On the other hand, Robinson et al.
(2006) attempted to develop students' scientific and technical translation performance using self-assessment. Seventy-eight students were engaged in a number of self-assessment tasks, each encompassing completing an individual or group translation task, rating their translated text using a rating scale, writing a brief explanation to justify the scores given, and receiving teacher feedback on their self-assessment. Robinson et al. evaluated students' learning using different qualitative and quantitative data sources, including whole-group and team discussions, student–teacher email messages, reflection activities, a course evaluation questionnaire, and the scores given to the translated texts by individual students, student teams, and the teacher.


The other training methods (i.e., self-directed learning and speed training) were used in translation studies only. Zhong (2008) tried to develop students' translation performance by using self-directed learning, a learner-centered technique that emphasizes considering students' learning needs and progress. Over a 14-week translation course, four students performed a translation task on a biweekly basis. Every task was followed by an individual interview between each student and the teacher to discuss their performance and plan what was to be achieved in the following translation task. Throughout these repeated biweekly cycles of translation task performance, performance evaluation and discussion, and re-planning of learning objectives, Zhong collected qualitative observational data about the students' responses to the training. In Bowker's (2016) study, 29 students attending a translation program at the University of Ottawa received speed training in scientific/technical translation through nine exercises. The students' translated texts over the one-semester course were analyzed and charted to track their progress, and their learning experiences were surveyed. Another main issue in this miscellaneous category is developing students' language skills or learning through translation activities or tools, a trend that may be called 'translation for language learning' (Jiménez-Crespo 2017). Jiménez-Crespo (2017) reviewed the use of translation technologies in Spanish language learning and highlighted the role of machine translation in reading and writing tasks, and the use of translation corpora in foreign language education. Two studies addressed the impact of audiovisual translation on developing students' language skills. McLoughlin and Lertola (2014) examined the effect of a 24-week subtitling module on developing undergraduate students' language competence, using an evaluation questionnaire with 40 students.
Talaván and Rodríguez-Arancón (2014) also investigated the impact of a collaborative reverse subtitling project on fostering students' translation and writing skills. They observed their 20 student participants on a weekly basis while they subtitled two short clips. Their data sources also included students' language test scores and questionnaire responses. In addition to these audiovisual translation studies, some others have tried to develop students' language proficiency through traditional translation activities. For example, Ebbert-Hübner and Maas (2017) examined the impact of contrastive analysis and translation instruction on improving students' grammatical accuracy. Despite the notable increase in 'translation for language learning' studies, it is worth mentioning that these studies are closer to the language education area than to the translator education one.

2.3 Prescriptive Training

Apart from the above trends in translator and interpreter training research, there is some published literature concerned mainly with suggesting and exemplifying particular pedagogical models or frameworks rather than piloting or experimenting with them. Such literature can be best described as providing prescriptive training, i.e.,
telling translator/interpreter trainers how to train trainees, or informing translators/interpreters how to develop their competences. In the majority of cases, these models or frameworks are based on relevant research evidence. Reports of this type are similar to the very early papers on translator training (Citroen 1966; Horn 1966; Schmit 1966). Examples of prescriptive translator training models include those provided by Al-Kufaishi (2006), Hurtado Albir (2015), and Galán-Mañas (2013). Al-Kufaishi (2006) provided a semantically based and pragmatically oriented model of translating expository texts. This model is discourse-centered and considers the macro- and micro-dimensions of discourse analysis. It encompasses an analytical procedure that has three levels (discourse decomposition, communicative context analysis, and cultural restructuring). Galán-Mañas (2013) proposed a methodological model for training legal translators that draws upon contrastive rhetoric, suggesting two contrastive rhetoric-based teaching units organized within a task-based framework. Hurtado Albir (2015), in turn, described a competence-based translator training approach based on the strategic, contrastive, extralinguistic, occupational, instrumental, and translation problem-solving competences; the paper also described the units based on this training approach, their sequence, and assessment procedures. The teaching techniques and models proposed for interpreter training include the ones described by Choi (2006), Baxter (2014), and Tebble (2014). Choi (2006) proposed teaching interpreting students through a metacognitive evaluation model, which includes the following five stages: self-evaluation and feedback, problem-finding and student profiling, prioritization, practice, and evaluation and monitoring.
Baxter (2014) described a simplified two-step model developed for active listening and production in the interpreting process. The model is based on the idea of managing cognitive effort by mobilizing knowledge in the processing stage, and raising awareness to generate controlled and simplified output in the production stage. Tebble (2014) also described a genre-based approach to teaching dialogue interpreting. The approach depends on engaging trainees in analyzing transcripts of their interpreting role playing. Tebble argued that through this approach students are helped to reflect on their performance and to understand professional accountability. As can be noted, these training models or frameworks overlap with the categories of translator and interpreter training discussed in the previous section; for example, corpus-based (Galán-Mañas 2013; Al-Kufaishi 2006) and process-based training (Baxter 2014). Needless to say, translator and interpreter training literature contains a large number of prescriptive pedagogical models.

2.4 Conclusion


In this chapter, the author has provided a detailed description of the main types and subtypes of translator and interpreter training experimented with and proposed in previous research. The methodological approaches, assessment procedures, and data sources used in this research have also been highlighted. Table 2.1 provides a summary of these experimented and proposed training types. As noted, the vast majority of these studies are concerned with training translation and interpreting students rather than professionals. Also noted is the lack of interpreting studies in the process-based, project-based learning, and research-oriented training types.

Table 2.1 Overview of the main training types and subtypes experimented/proposed in previous translation and interpreting research

Technology-based training
• Blended learning-based training
• Web-based training
• Software-based training
• Machine translation-based training

Process-based training
• Whole translation process training
• Translation revision training
• Transediting training
• Note-taking interpreting training
• Non-verbal communication interpreting training
• Source attribution interpreting training

Corpus-based training
• Comparable corpora translation training
• Pragmatic awareness interpreting training
• Prosody interpreting awareness training
• Domain-specific knowledge translation training

Profession-oriented training
• Professional awareness training
• Experiential situated learning training
• Simulated situated learning training

Project-based learning training
• Audiovisual translation project-based learning
• Technical/specialized translation project-based learning

Research-oriented training
• Collaborative literature review training
• Academic writing and research presentation training
• Research instrument development training

Miscellaneous training types
• Self-assessment training
• Self-directed learning
• Theatrical training
• Translation speed training
• Translation for language learning

Prescriptive training
• Contrastive rhetoric-based training
• Competence-based translation training
• Active listening and interpreting production training
• Genre-based training


Some problematic issues were noted in many of the reports of these training experimentation studies. For example, many research reports have not provided detailed descriptions of their research instruments or assessment tools and their psychometric characteristics. Another main weakness in such reports is the overemphasis on describing training material development rather than on delineating training procedures or data collection and analysis. Additionally, a considerable number of these studies explored training effectiveness by assessing trainees' perceptions rather than the target translation or interpreting performance skills or competences. In fact, this latter dimension is what should matter more in future translator and interpreter training studies. The training research and literature highlighted in the chapter clearly indicate that much is still needed in translator and interpreter training experimentation studies. Clearly, this research type still lacks rigorous research designs. Compared to its applied linguistics/language teaching counterpart (e.g., the studies published in journals such as Language Learning, Studies in Second Language Acquisition, Applied Linguistics, Modern Language Journal, Journal of Second Language Writing, TESOL Quarterly, and System), translator and interpreter training experimentation research is still evolving. Bringing about major developments in this research type will particularly require experimenting with different instructional techniques, using more rigorous designs, and conducting studies that provide further evidence of when a particular method works more or less effectively in translator and interpreter training. It can also be noted that not much training experimentation research has been conducted in the audiovisual translation and dialogue interpreting fields.
Besides, feedback provision experimentation has received no attention yet in translation and interpreting research, in contrast to its writing and applied linguistics counterpart, where feedback provision has gained much ground since the 1990s. Examples of this extensive feedback provision research in writing can be found in the Journal of Second Language Writing and Writing and Pedagogy. In these two journals, we can find many published studies on the use of various teacher feedback techniques in teaching writing, including direct, indirect, coded, and uncoded feedback, along with peer feedback. The above-mentioned applied linguistics and language education journals also publish speaking and writing feedback studies. These published writing and language teaching feedback studies can be regarded as a rich source for advancing feedback provision experimentation research in translator and interpreter training.

References


Abdallah, C. 2011. Towards empowerment: Students' ethical reflections on translating in production networks. The Interpreter and Translator Trainer 5 (1): 129–154. https://doi.org/10.1080/13556509.2011.10798815.
Angelone, E. 2013. The impact of process protocol self-analysis on errors in the translation product. Translation and Interpreting Studies 8 (2): 253–271. https://doi.org/10.1075/tis.8.2.07ang.
Al-Kufaishi, A. 2006. A pedagogic model of translating expository texts. Babel 52 (1): 1–16. https://doi.org/10.1075/babel.52.1.01alk.
Bartłomiejczyk, M. 2007. Interpreting quality as perceived by trainee interpreters: Self-evaluation. The Interpreter and Translator Trainer 1 (2): 247–267. https://doi.org/10.1080/1750399X.2007.10798760.
Baxter, R. 2014. A simplified multi-model approach to preparatory training in simultaneous interpreting. Perspectives: Studies in Translatology 22 (3): 349–372. https://doi.org/10.1080/0907676x.2012.758751.
Blumenfeld, P.C., E. Soloway, R.W. Marx, J.S. Krajcik, M. Guzdial, and A. Palincsar. 1991. Motivating project-based learning: Sustaining the doing, supporting the learning. Educational Psychologist 26 (3–4): 369–398. https://doi.org/10.1080/00461520.1991.9653.
Bowker, L. 2016. The need for speed! Experimenting with "speed training" in the scientific/technical translation classroom. Meta 61: 22–36. https://doi.org/10.7202/1038683ar.
Burns, A., and A. Westmacott. 2018. Teacher to researcher: Reflections on a new action research program for university EFL teachers. Profile: Issues in Teachers' Professional Development 20 (1): 15–23.
Chmiel, A. 2012. How effective is teaching note-taking to trainee interpreters? The Interpreter and Translator Trainer 4 (2): 233–250. https://doi.org/10.1080/13556509.2010.10798805.
Cho, J., and P. Roger. 2010. Improving interpreting performance through theatrical training. The Interpreter and Translator Trainer 4 (2): 151–171. https://doi.org/10.1080/13556509.2010.10798802.
Choi, J. 2006. Metacognitive evaluation method in consecutive interpretation for novice learners. Meta: Translators' Journal 51 (2): 273–283. https://doi.org/10.7202/013256ar.
Chouc, F., and J. Conde. 2018. Relay interpreting as a tool for conference interpreting training. International Journal of Interpreter Education 10 (2): 58–72.
Chouc, F., and J. Conde. 2016. Enhancing the learning experience of interpreting students outside the classroom: A study of the benefits of situated learning at the Scottish Parliament. The Interpreter and Translator Trainer 10 (1): 92–106. https://doi.org/10.1080/1750399x.2016.1154345.
Citroen, I.J. 1966. Targets in translator training. Meta 11 (4): 139–144. https://doi.org/10.7202/002246ar.
Crezee, I., and L. Grant. 2013. Missing the plot? Idiomatic language in interpreter education. International Journal of Interpreter Education 5: 1–21.
Dal Fovo, E. 2018. The use of dialogue interpreting corpora in healthcare interpreter training: Taking stock. The Interpreters' Newsletter 23: 83–113.
Dong, Y., Y. Li, and N. Zhao. 2019. Acquisition of interpreting strategies by student interpreters. The Interpreter and Translator Trainer 13 (4): 408–425. https://doi.org/10.1080/1750399X.2019.1617653.
Ebbert-Hübner, C., and C. Maas. 2017. Can translation improve EFL students' grammatical accuracy? International Journal of English Language & Translation Studies 5 (4): 191–202.
Fantinuoli, C. 2018. The use of comparable corpora in interpreting practice and training. The Interpreters' Newsletter 23: 133–149.
Felix, U. 2008. The unreasonable effectiveness of CALL: What have we learned in two decades of research? ReCALL 20 (2): 141–161. https://doi.org/10.1017/S0958344008000323.
Felix, U. 2005. Analyzing recent CALL effectiveness research: Towards a common agenda. Computer Assisted Language Learning 18: 1–32. https://doi.org/10.1080/09588220500132274.


Fernández, F., and P. Zabalbeascoa. 2012. Developing trainee translators' strategic subcompetence through metacognitive questionnaires. Meta: Translators' Journal 57 (3): 740–762. https://doi.org/10.7202/1017089ar.
Galán-Mañas, A. 2019. Professional portfolio in translator training: Professional competence development and assessment. The Interpreter and Translator Trainer 13 (1): 44–63. https://doi.org/10.1080/1750399X.2018.1541295.
Galán-Mañas, A. 2013. Contrastive rhetoric in teaching how to translate legal texts. Perspectives: Studies in Translatology 21 (3): 311–328. https://doi.org/10.1080/0907676x.2011.641982.
Galán-Mañas, A. 2011. Translating authentic technical documents in specialised translation classes. The Journal of Specialised Translation 16: 109–125.
Galán-Mañas, A., and A. Hurtado Albir. 2015. Competence assessment procedures in translator training. The Interpreter and Translator Trainer 9 (1): 63–82. https://doi.org/10.1080/1750399X.2015.1010358.
Galán-Mañas, A., and A. Hurtado Albir. 2010. Blended learning in translator training: Methodology and results of an empirical validation. The Interpreter and Translator Trainer 4 (2): 197–231. https://doi.org/10.1080/13556509.2010.10798804.
González-Davies, M., and V. Enríquez-Raído. 2016. Situated learning in translator and interpreter training: Bridging research and good practice. The Interpreter and Translator Trainer 10 (1): 1–11. https://doi.org/10.1080/1750399X.2016.1154339.
González, M., and M. Veiga Díaz. 2015. Guided inquiry and project-based learning in the field of specialised translation: A description of two learning experiences. Perspectives: Studies in Translatology 23 (1): 107–123. https://doi.org/10.1080/0907676x.2014.948018.
Hansen, I., and M. Shlesinger. 2007. The silver lining: Technology and self-study in the interpreting classroom. Interpreting 9 (1): 95–118.
Haro-Soler, M.D.M., and D. Kiraly. 2019. Exploring self-efficacy beliefs in symbiotic collaboration with students: An action research project. The Interpreter and Translator Trainer 13 (3): 255–270. https://doi.org/10.1080/1750399X.2019.1656405.
Hatim, B. 2013. Teaching and researching translation. New York: Routledge.
Holbrook, N.J., and E. Devonshire. 2005. Simulating scientific thinking online: An example of research-led teaching. Higher Education Research & Development 24 (3): 201–213.
Horn, S.F. 1966. A college curriculum for the training of translators and interpreters in the USA. Meta 11 (4): 147–154. https://doi.org/10.7202/003130ar.
Hurtado Albir, A. 2015. The acquisition of translation competence: Competences, tasks, and assessment in translator training. Meta: Translators' Journal 60 (2): 256–280. https://doi.org/10.7202/1032857ar.
Huertas Barros, E., and J. Vine. 2019. Training the trainers in embedding assessment literacy into module design: A case study of a collaborative transcreation project. The Interpreter and Translator Trainer 13 (3): 271–291. https://doi.org/10.1080/1750399X.2019.1658958.
Hunt-Gómez, C., and P. Moreno. 2015. Reality-based court interpreting didactic material using new technologies. The Interpreter and Translator Trainer 9 (2): 188–204. https://doi.org/10.1080/1750399X.2015.1051770.
Ilynska, L., T. Smirnova, and M. Platonova. 2017. Application of LSP texts in translator training. Studies in Second Language Learning and Teaching 7 (2): 275–293. https://doi.org/10.14746/ssllt.2017.7.2.6.
Jiménez-Crespo, M.A. 2017. The role of translation technologies in Spanish language learning. Journal of Spanish Language Teaching 4 (2): 181–193. https://doi.org/10.1080/23247797.2017.1408949.
Kemmis, S., and R. McTaggart. 1988. The action research planner, 3rd ed. Geelong: Deakin University.
Kim, M. 2007. Using systemic functional text analysis for translator education: An illustration with a focus on textual meaning. The Interpreter and Translator Trainer 1 (2): 223–246. https://doi.org/10.1080/1750399X.2007.10798759.

References

35

Kiraly, D.C. 2005. Project-Based learning: A case for situated translation. Meta 50 (4): 1098–1111. https://doi.org/10.7202/012063ar. Ko, L. 2008. Teaching interpreting by distance mode: An empirical study. Meta: Translators’ Journal 53 (4): 814–840. https://doi.org/10.7202/019649ar. Ko, L., and N. Chen. 2011. Online-interpreting in synchronous cyber classrooms. Babel 57 (2): 123–143. https://doi.org/10.1075/babel.57.2.01ko. Konttinen, K., O. Veivo, and P. Salo. 2019. Translation students’ conceptions of translation workflow in a simulated translation company environment. The Interpreter and Translator Trainer. https:// doi.org/10.1080/1750399X.2019.1619218. Krystallidou, D. 2014. Gaze and body orientation as an apparatus for patient inclusion into/exclusion from a patient-centred framework of communication. The Interpreter and Translator Trainer 8 (3): 399–417. https://doi.org/10.1080/1750399X.2014.972033. Lai, M. 2018. Training deaf learners to become interpreters: A pilot project. International Journal of Interpreter Education 10 (1): 30–45. Latorraca, R. 2018. Think aloud as a tool for implementing observational learning in the translation class. Perspectives 26 (5): 708–724. https://doi.org/10.1080/0907676X.2017.1407804. Laursen, A., and I. Pellón. 2012. Text Corpora in Translator Training: A Case Study of the Use of Comparable Corpora in Classroom Teaching. The Interpreter and Translator Trainer 6 (1): 45–70. https://doi.org/10.1080/13556509.2012.10798829. Lee, Y., and X. Sun. 2018. Google translate in the translation classroom: A perspective from register theory. Paper presented at the Conference on Google Translate & Modern Languages Education, University of Nottingham, 29 June 2018. Li, D., C. Zhang, and Y. He. 2015. Project-based learning in teaching translation: Students perceptions. The Interpreter and Translator Trainer 9 (1): 1–19. https://doi.org/10.1080/1750399X. 2015.1010357. Li, X. 2015. 
Mock conference as a situated learning activity in interpreter training: A case study of its design and effect as perceived by trainee interpreters. The Interpreter and Translator Trainer 9 (3): 323–341. https://doi.org/10.1080/1750399X.2015.1100399. López-Garcia, V., and P. Rodríguez-Inés. 2019. Learning corpus linguistics tools and techniques to cope with the current challenges of audiovisual translation. The Interpreter and Translator Trainer 13 (3): 307–325. https://doi.org/10.1080/1750399X.2019.1656409. Maddux, L. 2018. Source attribution in ASL-English interpreter education: Testing a method. International Journal of Interpreter Education 10 (2): 27–42. Mägi, E., and M. Beerkens. 2016. Linking research and teaching: Are research-active staff members different teachers? Higher Education 72: 241–258. https://doi.org/10.1007/s10734-015-9951-1. Markham, T., J. Larmer, and J. Ravitz. 2003. Project-Based learning: A guide to standards-focused project based learning for middle and high School Teachers. Novato, CA: Buck Institute for Education (BIE). Maruenda-Bataller, S., and S.-R. José. 2016. Project-Based learning and competence assessment in translation training. In Technology implementation in second language teaching and translation studies, 207–228. Springer: Singapore. McLean, M., and H. Barker. 2004. Students making progress and the ‘research-teaching nexus’ debate. Teaching in Higher Education 9 (4): 407–419. https://doi.org/10.1080/135625104200 0252354. McLoughlin, L., and J. Lertola. 2014. Audiovisual translation in second language acquisition: Integrating subtitling in the foreign-language curriculum. The Interpreter and Translator Trainer 8 (1): 70–83. https://doi.org/10.1080/1750399X.2014.908558. Moghaddas, M., and M. Khoshsaligheh. 2019. Implementing project-based learning in a Persian translation class: A mixed-methods study. The Interpreter and Translator Trainer 13 (2): 190–209. https://doi.org/10.1080/1750399X.2018.1564542. Moreno, M., R. 
Otero-Sabogal, and C. Soto. 2011. Using web-based training to improve skills among bilingual dual-role staff interpreters. International Journal of Interpreter Education 3: 28–48.

36

2 Translator/Interpreter Training Experimentation Research

Niemants N./Stokoe E. 2017. Using the conversation analytic role-play meth-od in healthcare interpreter education. In Teaching dialogue interpreting, ed. L. Cirillo/N. Niemants, Research-based Proposals for Higher Education, 293–322, Amsterdam/Philadelphia: John Benjamins. Norberg, U. 2014. Fostering self-reflection in translation students: The value of guided commentaries. Translation and Interpreting Studies 9 (1): 150–164. https://doi.org/10.1075/tis.9.1. 08nor. Olalla-Soler, C. 2019. Bridging the gap between translation and interpreting students and freelance professionals: The mentoring programme of the Professional Association of Translators and Interpreters of Catalonia. The Interpreter and Translator Trainer 13 (1): 64–85. https://doi.org/ 10.1080/1750399X.2018.1540741. Ozay, S.B. 2012. The dimensions of research in undergraduate learning. Teaching in Higher Education 17 (4): 453–464. https://doi.org/10.1080/13562517.2011.641009. Pan, Y. 2016. Linking classroom exercises to real-life practice: A case of situated simultaneous interpreting learning. The Interpreter and Translator Trainer 10 (1): 107–132. https://doi.org/10. 1080/1750399X.2016.1154346. Pym, A. 2013. Research skills in translation studies: What we need training in. Across Languages and Cultures 14 (1): 1–14. https://doi.org/10.1556/Acr.14.2013.1.1. Richards, J.C. 1996. Reflective teaching in second language classrooms. Cambridge: Cambridge University Press. Risku, H. 2016. Situated learning in translation research training: Academic research as a reflection of practice. The Interpreter and Translator Trainer 10 (1): 12–28. https://doi.org/10.1080/175 0399X.2016.1154340. Robert, I.S., J.J. Jim, A. Ureel, A. Remael, and A.R. Terryn. 2018. Conceptualizing translation revision competence: A pilot study on the ‘fairness and tolerance’ attitudinal component. Perspectives 26 (1): 2–23. https://doi.org/10.1080/0907676X.2017.1330894. Robinson, B., C. Rodríguez, and M. Sánchez. 2006. 
Self-assessment in translator training. Perspectives: Studies in Translatology 14 (2): 115–138. https://doi.org/10.1080/090767606086 69025. Rodríguez-Castro, M. 2018. An integrated curricular design for computer-assisted translation tools: Developing technical expertise. The Interpreter and Translator Trainer 12 (4): 355–374. https:// doi.org/10.1080/1750399X.2018.1502007. Roush, D. 2010. Universal design in technology used in interpreter education. International Journal of Interpreter Education 2. Russo, M. 1995. Self–evaluation: the awareness of one’s own difficulties as a training tool for simultaneous interpretation. Interpreter’s Newsletter 6: 75–84. Sachtleben, A., and D. Heather. 2011. The teaching of pragmatics as interpreter training. International Journal of Interpreter Education 3: 4–15. Schmit, C. 1966. The self-taught translator: from rank amateur to respected professional. Meta 11 (4): 123–126. https://doi.org/10.7202/004045ar. Schrijver, I., L. Van Vaerenbergh, M. Leijten, and L. Van Waes. 2016. The impact of writing training on transediting in translation, analyzed from a product and process perspective. Perspectives 24 (2): 218–234. https://doi.org/10.1080/0907676X.2015.1040034. Sharkas, H. 2013. The effectiveness of targeted subject knowledge in the teaching of scientific translation. The Interpreter and Translator Trainer 7 (1): 51–70. https://doi.org/10.1080/135 56509.2013.10798843. Shleshinger, M. (2010). Relay interpreting. In Handbook of translation studies, ed. Y. Gambier, and L. van Doorslaer, 276–278. Amsterdam, the Netherlands: John Benjamins. Stokoe, E. 2011. Simulated interaction and communication skills training: The ‘Conversation Analytic Role-play Method’. In Applied Conversation Analysis, ed. C. Antaki, 119–139. New York et al.: Palgrave Macmillan. Talaván, N., and P. Rodríguez-Arancón. 2014. The use of reverse subtitling as an online collaborative language learning tool. The Interpreter and Translator Trainer 8 (1): 84–101. 
https://doi.org/10. 1080/1750399X.2014.908559.

References

37

Tebble, H. 2014. A genre-based approach to teaching dialogue interpreting: The medical consultation. The Interpreter and Translator Trainer 8 (3): 418–436. https://doi.org/10.1080/1750399X. 2014.972651. Vandepitte, S. 2013. Research competences in translation studies. Babel 59 (2): 125–148. https:// doi.org/10.1075/babel.59.2. Yenkimaleki, M., and V.J. van Heuven. 2018. The effect of teaching prosody awareness on interpreting performance: An experimental study of consecutive interpreting from English into Farsi. Perspectives 26 (1): 84–99. https://doi.org/10.1080/0907676X.2017.1315824. Yenkimaleki, M., and V.J. Van Heuven. 2019. Prosody instruction for interpreter trainees: Does methodology make a difference? An experimental study. Across Languages and Cultures 20 (2): 115–131. https://doi.org/10.1556/084.2019.20.1.6. Zhong, Y. 2008. Teaching Translators through self-directed learning: Documenting the implementation of and perceptions about self-directed learning in a translation course. The Interpreter and Translator Trainer 2 (2): 203–220. https://doi.org/10.1080/1750399X.2008.10798774.

Chapter 3

Translation/Interpreting Learning and Teaching Practices Research

Abstract This chapter provides an overview of the research evaluating translation/interpreting learning and teaching practices. The chapter starts by highlighting the methodological paradigms and data sources commonly used in this research type. The author reviews and discusses the following six areas of translation/interpreting learning and teaching practices research: country-specific translator and interpreter education policies, training programme evaluation (i.e., evaluating one or more programmes, or a programme component), trainees’ needs analysis, trainees’ performance variables (i.e., personal, motivational and academic correlates), classroom practices (e.g., feedback provision and classroom motivation), and trainer education-related issues. At the end of the chapter, the author presents some suggestions for advancing translation/interpreting learning and teaching practices research.

Keywords Translation research · Programme evaluation · Translation learning · Needs analysis · Translation teaching · Interpreting teaching · Translation classroom practices

3.1 Introduction: Methodological Approaches

Translation/interpreting learning and teaching practices research is characterized by its evaluative nature. The issues evaluated in this research type are not limited to curriculum or training programme delivery; they also include other dimensions such as learning and teaching experiences. Arguably, the largest number of translator and interpreter education studies falls in this research area. In this chapter, the author discusses the areas of translation/interpreting learning and teaching practices research and refers to some studies representing each. The author also highlights examples of the data sources and research designs used in each research area.

© Springer Nature Singapore Pte Ltd. 2020 M. M. M. Abdel Latif, Translator and Interpreter Education Research, New Frontiers in Translation Studies, https://doi.org/10.1007/978-981-15-8550-0_3


As will be noted in the following sections, the translation/interpreting learning and teaching practices research conducted so far has been organized into the following six categories:

– Translator and interpreter education policy reports,
– Translation and interpreting programme evaluation,
– Needs analysis,
– Learner performance variables,
– Classroom practices, and
– Trainer education.

Though each of these research areas depends on using specific research designs, they share some common methodological characteristics. The most dominant characteristic is the use of the mixed-method approach, which combines both quantitative and qualitative data sources. This characteristic is noted in all these areas with the exception of the education policy and learner performance variables ones, in which the observational and quantitative approaches, respectively, are more dominant. Many of these studies have also depended on data triangulation, which entails using quantitative and qualitative data sources. Another common characteristic is involving a much larger number of participants compared to the other research areas. As will be noted, more than 500 participants took part in some studies (e.g., Pan and Yan 2012; Williamson 2016). This can be attributed to the surveying nature of these studies.

Research methods and data sources vary considerably in translation/interpreting learning and teaching practices studies. Several methodological paradigms have been used, including the mixed-method, quantitative, qualitative, observational, and causal-comparative ones. As for data sources, these include questionnaires; structured, semi-structured and focus group interviews; psychological scales; language tests; and analysis of curricula and other related documents (e.g., course specifications, teachers’ profiles, degree guidelines, etc.). As will be noted, some reviewed studies (e.g., Schnell and Rodríguez 2017) have relied on preliminary interview data as a source for developing quantitative data collection tools such as questionnaires. On the other hand, statistical analyses were implemented to examine the psychometric characteristics (e.g., reliability and factor analysis) of the psychological scales used in the studies addressing learner performance variables.
The following sections illustrate how these methodological paradigms and data sources have been used in each research area.

3.2 Translator and Interpreter Education Policy Reports

Translator and interpreter education policy reports are concerned with describing the status quo of training programmes and policies in a particular country. Thus, these reports are characterized by their macro approach to describing translator and
interpreter education in a particular country. Education policy reports mainly provide qualitative descriptions of translator and interpreter education, or some dimensions related to it, in the target context. Some of these reports also include a brief and non-standardized presentation of data obtained from interviewing stakeholders about relevant issues. The works of Lim (2006) and Salmi and Kinnunen (2015) are two examples of translator education policy reports.

Lim (2006) provided an overview of interpreter and translator education in Korean graduate institutions by comparing four graduate schools of interpretation and translation in Korea. In her report, Lim described the courses taught in these programmes and the graduation exams used, and gave an evaluation of the curricula offered in them. At the end of her report, Lim suggested some guidelines for improving the target curricula. On the other hand, Salmi and Kinnunen (2015) reviewed the training of translators for authorization at Finnish universities. In their report, they relied on their experiences in and observations of teaching authorized translation at the university level in Finland. They first described the training for authorized translation at Finnish universities in general. Additionally, they reviewed the expected learning outcomes and assessment practices in these programmes. The two researchers also provided a brief description of how the target training is implemented at two Finnish universities and referred to their evaluative discussions with some trainees. Finally, they evaluated the pros and cons of the reviewed translator training and the authorization system followed.

Some researchers have provided overviews of the interpreter education policies in countries such as Italy, Sweden, and Australia. Tomassini (2012) reviewed the training of healthcare interpreters in Italy and their accreditation and recruitment rules.
She focused specifically on the programmes offered in two regions (Emilia Romagna and Marche); these programmes aimed at providing medical interpreting students with hands-on experience and practical training in response to institutional needs. In her report, Tomassini delineated the specific training provided and its inclusion of on-the-job experience, and discussed the students’ feedback.

On the other hand, Gustafsson et al. (2013) reviewed community interpreting training in Sweden. They traced the history of training community interpreters in the country, highlighted the state-level national consolidated training programme established in the 2000s, and outlined the improvements introduced to it. Gustafsson et al. described the structure of the programme, its admission system and modules, the characteristics of the trainees, the languages it covers, and the number of its graduates and completed rounds. In light of their observations and discussions with a group of trainees, Gustafsson and colleagues evaluated the programme and referred to the future changes that needed to be made to it.

Recently, Stern and Liu (2019) surveyed the legal interpreter training offered by two types of Australian academic and vocational institutions. In profiling formal legal interpreter training in Australia, the two researchers combined Internet-based research with semi-structured telephone interviews conducted with course convenors and managers of the training programmes and courses. They also analyzed available course descriptions, aims, and outcomes. Their report provided descriptions and discussions of the issues related to the curriculum and pedagogy of legal interpreter training in the two types of institutions, including admission
requirements, courses, aims and outcomes, content and settings, interpreting skills and modes, teaching strategies and delivery methods, the languages offered and language-specific tutorials, and assessment methods.

As can be noted, no standardized research method has been followed in the above-reviewed translator and interpreter education policy reports. However, the importance of these reports lies in enabling researchers and education policy-makers to compare training experiences and policies across worldwide contexts. International translation and interpreting journals can advance this policy research by allocating a section in their issues for publishing this type of report. The quality of these reports can also be fostered by including some survey data of a larger scope.

3.3 Translation and Interpreting Programme Evaluation

Programme evaluation in translation and interpreting research is a relatively recent phenomenon compared to its counterpart research in language teaching. Arguably, translation and interpreting programme evaluation research has been influenced by the earlier developments in the language teaching field in particular. Early historical developments of research on language programme evaluation can be found in the works of Lynch (1996), Rea-Dickins and Germaine (1998) and Kiely and Rea-Dickins (2005), and in other early relevant research reports published in the 1980s and 1990s in the international journals of language education.

Educational programme evaluation is generally the process of determining the success of a study programme or curriculum in achieving its pedagogical goals. Brown (1989) defines educational programme evaluation as ‘the systematic collection and analysis of all relevant information necessary to promote the improvement of a curriculum, and assess its effectiveness and efficiency, as well as the participants’ attitudes within the context of the particular institutions involved’ (1989, p. 223). Rea-Dickins and Germaine (1998) also view programme evaluation as ‘multifaceted, with the potential to make judgments and recommend, to evaluate effectiveness and efficiency and to contribute to curriculum improvement and development’ (p. 12). With regards to language programme evaluation, Richards and Schmidt (2010) state that:

In language programme evaluation, evaluation is related to decisions about the quality of the programme itself and decisions about individuals in the programmes. The evaluation of programmes may involve the study of curriculum, objectives, materials, and tests or grading systems. The evaluation of individuals involves decisions about entrance to programmes, placement, progress, and achievement.
In evaluating both programmes and individuals, tests and other measures are frequently used. (p. 206)

Drawing on the above definitions, translation/interpreting education programme evaluation can be defined as a multifaceted process that involves identifying the success of a particular translation/interpreting programme in achieving its desired pedagogical goals, and that aims at improving teaching processes and learning outcomes. This evaluation process can be accomplished through exploring the views and perceptions of stakeholders (i.e., trainees, trainers, programme managers, graduates, professionals, etc.), and examining the curricula and teaching materials, and applicant and trainee assessment methods and outcomes.

The published translation/interpreting programme evaluation studies have followed different approaches. Some of these studies have adopted a macro approach by focusing on a number of translator/interpreter education programmes in one country or context. For example, Mahasneh (2013) evaluated the translation programmes at the Master’s level in Jordan. She specifically focused on the Master’s translation programmes at three Jordanian universities. Her study dealt with the translation curricula at these universities, the translation teaching approaches and models, and the problematic areas in the training programmes. In her data collection, Mahasneh relied on a student questionnaire and instructor interviews. The student questionnaire included questions about the students’ perceptions of the training course materials and resources, teaching methods, assignments, and assessment, and their preferred translation genres. The instructor interviews focused on the effectiveness of the programme implemented. In addition to these two data sources, Mahasneh reviewed and analyzed the following relevant written documents: university official web sites, booklets, advertisement handouts and brochures, instructors’ CVs, specifications and curricula of the courses taught, and degree guidelines. In the Australian context, Mo and Hale (2014) surveyed the effectiveness of a number of translator and interpreter training programmes from the students’ perspective. They tried to examine the curricular differences or similarities among these programmes and how the students perceive their effectiveness.
To assess these issues, Mo and Hale (2014) drew on a mixed-methods research design combining the following qualitative and quantitative data sources: curriculum analysis, an online survey, and follow-up interviews. Solová (2015) also surveyed the training of sworn or certified translators in Poland. She used a questionnaire that was completed electronically by 431 professional sworn translators. Her questionnaire focused on training issues such as how they acquired their competences, the text types they practiced translating in training opportunities, the place of interpreting in their training curricula, their views on the usefulness of the knowledge acquired from training, and the perceived importance of the sworn translator/interpreter courses they studied.

Some programme evaluation studies have focused on only one education programme. For example, Gabr (2000) investigated undergraduate Egyptian students’ perceptions of their translation programme, their understanding of the translation concept and process, and their satisfaction with the course materials, the teaching methods used, and teachers’ roles. Gabr collected data about these dimensions using a questionnaire. Meanwhile, he used interviews with three teachers to explore their academic and professional backgrounds, their views on the steps needed for developing an appropriate translation curriculum, and their perceptions of the efficient translation teacher.

Another group of programme evaluation studies has been concerned with evaluating a particular course. Some researchers evaluated training courses by depending mainly on their own reflections and observations rather than collecting data from stakeholders. For example, Dorado and Orero (2007) reported an attempt to teach
audiovisual translation through an online course. They explained how they collaborated with their colleagues to design and implement the online audiovisual translation course. They also discussed the problems encountered in the different stages of the course. In another course evaluation study, Hale and Ozolins (2014) evaluated a short, non-language-specific interpreting course aimed at qualifying a group of women to become community interpreters. They described the interpreting course content, the teaching methods used in it, the way the course outcome was evaluated, and the evaluation of the course itself. Schjoldager et al. (2008) also reported a study on how they developed and piloted a translation module on precis-writing, revision, and editing. The module was developed by the instructors based on assessing the students’ needs and surveying the views of translation industry representatives. In their evaluation of the course, Schjoldager and colleagues depended on evaluation feedback gained from the kick-off seminar, e-learning session assignments, and exams.

The final group of studies has adopted a micro approach by evaluating a particular dimension of a programme rather than the whole programme. These dimensions include the technological competences covered in training programmes, the ability development resulting from the education programme, the professional experiences students were exposed to, and curriculum evaluation. For example, Rothwell and Svoboda (2019) reported a research project that aimed at surveying the technological competences covered by European postgraduate translator training programmes, and how these competences are integrated into such programmes.
Their survey was designed to collect data about the demographics of the programmes surveyed, the general philosophy of technologies training within them, the translation tools most widely taught, the teaching and assessment strategies used, and the material conditions of tools training delivery.

The study reported by Wu et al. (2019b) is an example of the research investigating the impact of a training programme on students’ perceived ability development. They investigated how students’ perceived translation ability, and translation teaching and research self-efficacy beliefs, developed during their study of an MA translation education programme. Wu and colleagues relied on a qualitative approach to collecting the data by conducting semi-structured and focus group interviews with the students.

Two studies addressed the professional experiences translation and interpreting students are exposed to during their study. Liu (2017) depended on the mixed-method research approach in exploring translation graduates’ internship learning experiences, and their perceptions and expectations. She specifically focused on the potential discrepancy between classroom learning and workplace practices, and how trainee students cope with workplace communication problems. Jaccomard (2018) also evaluated the professional placement experiences as perceived by five Master’s students at the University of Western Australia. She described these experiences by drawing on the students’ reflective reports and the evaluation reports submitted by their hosts.

Translation and interpreting teaching materials evaluation has also received some research attention. For example, Ordóñez-López (2015) investigated how basic legal knowledge is covered in the topics and teaching materials taught in the legal translation undergraduate modules delivered at Spanish universities. This issue was
addressed by examining the contents, bibliographies and materials of the legal translation modules taught in some Spanish universities. In a study covering a wider scope, Tao (2019) surveyed the translation and interpreting textbooks used in mainland China over 70 years. It was found that these textbooks can be grouped into the following four categories: language-based textbooks, skill-based textbooks, translation-competence-based textbooks, and translator-competence-based textbooks. On the other hand, the two studies reported by Li (2019) and Luo and Ma (2019) addressed interpreting teaching materials evaluation. Li (2019) analyzed how 32 textbooks pedagogically cover business interpreting competences; the study developed two frameworks of business interpreting expertise and pedagogical expertise. Li discussed how the inadequate coverage of these business interpreting competences in the textbooks may negatively influence students’ preparation. Luo and Ma (2019) also evaluated four consecutive interpreting note-taking textbooks published in China. They focused on comparing and analyzing their advantages and disadvantages with regards to the theoretical elaboration of note-taking skills and the choice of note-taking practice materials.

3.4 Needs Analysis

A central research issue in programme evaluation is what has been known as ‘needs analysis’ or ‘needs assessment’. Early works on needs analysis date back to the late 1970s and early 1980s, and these mainly belong to the language education literature (Chambers 1980; Munby 1978). While needs analysis received increasing attention from applied linguists and language education researchers in the 1980s and 1990s (e.g., Hutchinson and Waters 1987; McDonough 1984; West 1994), relevant published works in the translator and interpreter education area appeared at a later stage (e.g., Li 2000, 2002).

Needs analysis is generally viewed as the process of identifying learners’ skill needs and knowledge gaps and developing the target language curriculum or programme based on these needs and gaps. A detailed definition of needs analysis is offered by Hyland (2006), who states that:

Needs analysis refers to the techniques for collecting and assessing information relevant to course design: it is the means of establishing the how and what of a course. It is a continuous process, since we modify our teaching as we come to learn more about our students, and in this way it actually shades into evaluation – the means of establishing the effectiveness of a course. Needs is actually an umbrella term that embraces many aspects, incorporating learners’ goals and backgrounds, their language proficiencies, their reasons for taking the course, their teaching and learning preferences, and the situations they will need to communicate in. Needs can involve what learners know, don’t know or want to know, and can be collected and analyzed in a variety of ways. (p. 74)

According to Richards and Schmidt (2010), when conducting needs analysis we make use of both subjective and objective data sources (e.g., questionnaires, interviews, observation, and tests) to identify how and for what purpose language
will be used, and what competence level is required. Brindley (1984) identified two types of language learning needs: (a) objective needs, assessed by factual information about learners and their language competences and difficulties; and (b) subjective needs, derived from learners’ cognitive and affective characteristics, and their learning wants and expectations. Likewise, Berwick (1989) differentiated between the inductive and deductive approaches to collecting data in needs analysis. The inductive data sources include expert reflections, observation, and unstructured interviews, whereas the deductive ones encompass instruments such as structured interviews, questionnaires, and performance measures or tests (Long 2005). According to Richards (2001), needs analysis serves a number of purposes, including exploring target students’ characteristics and prior learning experiences, determining the skills and competences students need to acquire and master, and identifying whether the current course or programme meets the needs of potential students.

With regards to needs analysis in translation studies, Li (2000), who is regarded as a pioneer researcher in this area, defines it as a ‘decision-making process of ordering and prioritization of translation learners’ needs when they are clearly defined, thus influencing program innovation, curriculum design, materials selection, and teaching approaches’ (p. 290). Thus, needs analysis is an integral part of translator education: it assists in creating a learner-centered environment and can be regarded as a guiding force for bringing about changes in the education process, as it enables programme administrators and teachers to develop curricula, materials, and teaching and assessment methods (Li 2000). For more detailed information on how to conduct needs analysis in the field of translation, see Li (2012).
Needs analysis studies conducted in the translator and interpreter education field are, so far, generally few. Some of these studies have followed a macro approach by focusing on a translation programme or course as a whole, while others have adopted a micro one by focusing on the needs of a specific type of translation and/or interpreting students. As will be noted below, the stakeholders taking part in these studies include trainees or current students, graduates, trainers, subject-area instructors, professionals, and employers. A few studies have drawn on document analysis.

Early translation needs analysis studies adopted a learner-centered perspective in assessing trainees’ needs. In some cases, these studies collected data from professionals along with trainees. For example, Li (2002) investigated the learning needs of a group of students attending a translation programme at the Chinese University of Hong Kong. He first conducted semi-structured interviews with a number of students to generate the items of a questionnaire. The questionnaire was then completed by 70 undergraduate students who were taking a number of translation and interpreting courses in the target programme. It covered a number of issues, including: students’ reasons for studying translation, their attitudes toward becoming a translator/interpreter, their expectations of learning in a translation programme, their perceived Chinese versus English translation competence, strategies for enhancing their language proficiency, ratings of the courses taught, and suggested measures to improve the programme. The questionnaire data collection stage was followed by in-depth interviews with 10 students. Another early study was reported by Mutlu (2004), who tried to identify the business English–Turkish translation needs at a Turkish university by surveying the perceptions and views of students, course and subject-area instructors, graduates, and business administration professionals. Mutlu’s study drew on qualitative and quantitative data sources: a needs analysis questionnaire completed by 53 students, and structured interviews with six course instructors, 16 subject-area instructors, 10 graduates, and 10 professionals. In the Iranian context, Khoshsaligheh et al. (2019) used a questionnaire to explore the differences between undergraduates’ and graduates’ views on the translation curriculum taught at Iranian universities.

On the other hand, not many studies seem to have addressed trainees’ interpreting needs. One of the few studies tackling this issue was reported by Li and Lu (2011), who used learner-centered needs analysis as a basis for developing an optional interpreting course. The two researchers developed a structured questionnaire following Hutchinson and Waters’s (2002) learning needs framework. The questionnaire, which was completed by 104 freshmen and 52 junior students, included questions about students’ motivation for taking an interpreting course, their desired goals from studying the course, and their views on course entry requirements and appropriate teaching content and materials. In addition to this questionnaire, follow-up interviews were conducted with eight of the questionnaire respondents. In another study, Williamson (2016) assessed deaf-parented interpreters’ specific areas of skill weakness and their views on the training services provided to them. She used a 121-item needs analysis questionnaire that was completed electronically by 751 eligible respondents.
The questionnaire included a variety of Likert-scaled statements, multiple-choice and attitudinal rating scale items, and open-ended questions, and it aimed at identifying the deaf-parented interpreters’ demographic characteristics and their induction strategies for becoming American Sign Language–English interpreters.

Some translation/interpreting studies have followed a micro approach to assessing translation/interpreting students’ needs from learner- and trainer-centered angles. Students’ information literacy, research, and second language literacy needs are among the issues explored with this micro research approach. In two reports, Pinto and Sales (2007, 2008) attempted to identify the information literacy needs of translation trainees as perceived by students and trainers, respectively. The two researchers provided the results obtained from a student questionnaire (Pinto and Sales 2007) and a trainer questionnaire (Pinto and Sales 2008). In light of the two questionnaires used in their project, they define translator information literacy as the translator’s ability to analyze, synthesize, summarize, and document information, use technological tools and lexical sources, communicate effectively in consulting experts, and make efficient and independent decisions. Pym (2013) tried to identify the research needs of translation doctoral students at a Spanish university by analyzing the evaluative comments given by the examiners in their evaluation reports of the students’ research, and by observing video recordings of the defense sessions. By analyzing the corpus of evaluative comments gathered over a period of 10 years, Pym was able to come up with a list of the missing research skills students need to be trained in. In the Saudi context, Ben Salamh (2012) explored the second language literacy needs of translation students at one university. To address this issue, he depended on analyzing documents (translation job descriptions and job announcements) and on surveying the views of students, graduates (professional translators), and faculty members using a questionnaire and interviews. Ben Salamh presented his research findings in terms of students’ professional, academic, and pedagogical needs.

A dominant trend in recent translation/interpreting research is what may be labelled ‘employability needs analysis’ studies. As the name indicates, these studies aim at assessing how well translator/interpreter education programmes prepare students for the labor market. Some of these studies have combined employers’ views with those of trainees. For example, Nasrollahi Shahri and Barzakhi Farimani (2016) assessed the needs of MA translation students at Iranian universities. They interviewed business translation professionals and graduates, and surveyed MA students’ perceptions using a questionnaire. By doing so, Nasrollahi Shahri and Barzakhi Farimani aimed at identifying the demands of the translation labor market and understanding the shortcomings of translation curricula and instruction at Iranian universities. In another study, Álvarez-Álvarez and Arnáiz-Uzquiza (2017) examined how the practical components of employability are included in the translation and interpreting curricula taught at Spanish universities. They also compared how final-year undergraduates, employers, and graduates perceive employability skills. Álvarez-Álvarez and Arnáiz-Uzquiza used three different questionnaires developed to identify the skills most relevant to the competitive market and the further training university graduates need to be ready for their future workplace tasks.
The student questionnaire included 16 items developed for gathering data about their personal and academic profiles, work placement experiences, and future professional goals. The employer questionnaire contained 18 items about the company profile, work placements, and student profiles. As for the graduate questionnaire, it had 44 items about issues such as their post-graduation working languages, their opinions about and experiences of work placements during their translation and interpreting studies, and their perceptions of the translation and interpreting curricula they studied.

Some other needs analysis studies have adopted a stronger employability approach by collecting data from employers only. In Hong Kong, Li (2007) looked at translation trainees’ learning needs from the perspective of administrators of translation/language services. Thirty-three administrators of translation/language services completed a questionnaire, and four of them were interviewed. The questions in the two data sources focused on issues related to translation practices and major considerations in the recruitment of new translators, the challenges the newly recruited encounter, and their coping strategies. Li discussed these issues in terms of their implications for translation training. Recently, Schnell and Rodríguez (2017) also assessed translation and interpreting students’ needs by exploring employers’ views on graduates’ employability assets. They developed a questionnaire based on preliminary interviews with a group of chief executive officers of translation service provider companies. Their semi-structured questionnaire was completed by 155 translation service provider employers. The questionnaire included six sections that gathered data about the following issues: staff translators’ academic degrees and information about trainee translators, translator recruitment and the in-house training received since joining the company, employers’ perceptions of the importance of certain knowledge and skill types and personal qualities of translation and interpreting graduates, their perceptions of how internships help graduates become prepared for work, their perceptions of which knowledge and skill types should be emphasized in curriculum design and development, and their views on their involvement in curriculum design. Another recent study was reported by Afolabi (2019), who addressed the needs of translation and interpreting students in Nigeria from a market needs analysis angle. He interviewed representatives of 23 potential employers in the Nigerian translation and interpreting market. He also analyzed the profiles of 19 translation and interpreting programmes in France, Canada, Cameroon, Ghana (two programmes in each), and Nigeria (n = 11 programmes). The programme profile aspects he analyzed included: objectives and duration of the programme, admission and graduation requirements, programme curriculum, and teaching staff qualifications.

As can be noted above, most needs analysis studies are concerned with translator education. Though some of the studies reviewed above investigated both translation and interpreting learning needs under the name ‘translation students’, the neglect of students’ interpreting learning needs is clearly noted in the literature. Therefore, due attention should be paid to the needs of trainee interpreters in future research.
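Quantitatively, most of the questionnaire-based needs analysis studies reviewed above reduce to aggregating Likert-type ratings and ranking need items. The following minimal sketch is not taken from any of the studies cited; the item names and ratings are entirely hypothetical and serve only to illustrate the basic computation behind ‘ordering and prioritization’ of needs:

```python
from statistics import mean

# Hypothetical 5-point Likert data: each key is a learning-need item from a
# needs analysis questionnaire; each list holds one rating per respondent.
responses = {
    "CAT tool training": [5, 4, 5, 3, 4],
    "Consecutive interpreting practice": [3, 2, 4, 3, 3],
    "Terminology management": [4, 5, 4, 4, 5],
}

# Rank need items by mean rating (highest first) to prioritize them for
# curriculum design decisions.
ranked = sorted(responses.items(), key=lambda kv: mean(kv[1]), reverse=True)
for item, ratings in ranked:
    print(f"{item}: mean rating {mean(ratings):.2f}")
```

Real studies, of course, add reliability checks and triangulate such rankings with interview data; the sketch only shows the descriptive core.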

3.5 Learner Performance Variables

The studies addressing the variables of translation and interpreting students’ performance are generally concerned with exploring what influences it, i.e., its predictors or correlates. Learner performance variables have gained ground in translation and interpreting studies only recently. What characterizes the studies dealing with this area is the use of correlational and causal-comparative research designs and the involvement of large numbers of participants. These studies generally depend on correlating a set of variables with students’ translation or interpreting performance. Overall, the learner performance variable studies reported so far can be grouped into three categories: personal trait, motivational variable, and academic variable studies.

Some studies have attempted to identify the personality variables associated with translation and interpreting students’ performance. In a mixed-method study, Hubscher-Davidson (2009) examined the correlations between students’ translation quality and a number of their personality traits (introversion vs. extraversion, sensing vs. intuition, thinking vs. feeling, and judging vs. perceiving). She used four scales measuring these personality traits. Her quantitative data were supplemented by data obtained from two questionnaires and observation. Rosiers and Eyckmans (2017) also compared the personality traits of three groups of advanced students willing to pursue their MA study in interpreting, translation, and multilingual communication. Specifically, they examined how these students differ in their ‘willingness to communicate, cultural empathy, social initiative, flexibility, open-mindedness and emotional stability’ (p. 29). The two researchers depended on a number of scales in measuring these personal traits. In another translation study, Araghian and Ghonsooly (2018) examined the relationship between Iranian students’ translation burnout and personality. They correlated the students’ scores on a 33-item translator burnout scale and a 44-item personality inventory. A rare study dealing with interpreting students’ personal traits was reported by Pan and Yan (2012), who combined questionnaire and focus group interview data in their investigation of 771 students’ perceived problems in interpreting learning, and the relationship between these problems and student variables, including major, family background, language self-evaluations, interest, and confidence.

Motivational variables influencing students’ translation and interpreting performance have also received recent research attention. For example, Yan and Wang (2012) examined the relationship between second language writing anxiety and students’ translation ability. They correlated 50 Hong Kong university students’ translation scores with their scores on a second language writing anxiety scale, and with their responses to a questionnaire eliciting information on their age, gender, family language background, English learning experience, Chinese and English reading and writing habits, translation learning experience, and perceived language and translation ability. On the other hand, Haro-Soler (2017) investigated the sources of students’ translation self-efficacy perceptions. These sources included teachers, peers, the teaching materials, and personal traits.
She also looked at the development of students’ self-confidence beliefs during their training. Pietrzak (2018) likewise examined the role and nature of self-regulation in translation performance by correlating the quality of students’ translated texts with their self-regulation skills. Some other studies have dealt with the motivational variables of interpreting students. For example, Rosiers et al. (2011) probed the relationship between students’ sight interpreting performances and their self-perceived communication competence, language ability self-perceptions, language anxiety, and integrative motivation. They gauged these motivational traits using a number of scales, while the students’ interpreting performance was assessed using an L1-to-L2 task rated on two parameters: overall interpreting performance and oral fluency. Meanwhile, Timarová and Salaets (2011) examined the association of students’ self-selection for interpreting, and their interpreting success, with their learning styles, motivation, and cognitive flexibility. Shaw (2011) also conducted a causal-comparative study to examine the cognitive and motivational predictors of spoken language and sign language interpreting students’ aptitude. The predictors investigated in Shaw’s study included visual memory, concentration, and eagerness to acquire new concepts without external rewards. In another study, Wu (2016) tried to profile interpreter trainees’ (de)motivation in the Chinese context. He collected data using two questionnaires with 120 undergraduate and postgraduate interpreting trainees, and the reflective essays completed by 40 of the postgraduates. Wu analyzed the data to identify the relationship of the interpreting trainees’ (de)motivation with their ideal self, instrumentality, avoidance, and perceived competence.

Some attention has also been paid to the academic correlates of translation and interpreting students’ performance. For example, Shaw and Hughes (2006) looked at the characteristics of sign language interpreting students, and at faculty members’ perceptions of the academic habits and skills, information processing, and personality traits deemed important for success in studying interpreting. A questionnaire eliciting data about these issues was completed by 1,357 sign language interpreting students and faculty members in Austrian, Canadian, British, and US universities. Recently, Quezada and Westmacott (2019) studied the relationship between translation students’ reading comprehension performance and their academic achievement. They compared undergraduate students’ scores on a test assessing their Spanish word-, sentence-, and text-level comprehension with their grades in completed academic courses.

As the publication dates of the studies reviewed indicate, translation and interpreting learner performance variables research has started to gain ground only in the past decade. Further developments in this research type can be accomplished by drawing on related writing and language education studies, particularly those dealing with oral communication apprehension and anxiety and with the motivational constructs of writing such as writing apprehension, self-efficacy, self-concept, and achievement goal orientation. In other words, researchers can work on coining or developing constructs relevant to translation and interpreting motivation such as interpreting anxiety, interpreting apprehension, translation anxiety, translation self-efficacy, and translation achievement goal orientation. Due attention also needs to be paid to investigating the linguistic variables potentially associated with translation and interpreting performance, including writing fluency, speaking fluency, and target language grammar knowledge.
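The correlational designs described in this section ultimately reduce to computing an association coefficient between a learner variable and a performance measure. As a minimal illustration only (the scores below are hypothetical and not drawn from any study cited here), a Pearson correlation can be computed with the standard library alone:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient for two equal-length samples."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Hypothetical scores for six students: a writing anxiety scale
# (higher = more anxious) and a translation test.
anxiety_scores = [55, 62, 48, 70, 66, 51]
translation_scores = [78, 70, 84, 60, 65, 80]

r = pearson_r(anxiety_scores, translation_scores)
# Here r is negative, i.e., higher anxiety goes with lower translation scores.
```

In practice, researchers would also report significance and effect size, and causal-comparative designs would add group comparisons; the sketch shows only the core association statistic.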

3.6 Classroom Practices

Classroom practices research refers to studies that deal with what actually takes place in translation and interpreting classes. Very few studies have explored the realities of translation and interpreting teaching in worldwide contexts. In contrast, the applied linguistics and language education literature contains countless studies on foreign language classroom realities. An initial step for advancing counterpart translation and interpreting research could be modelling it on the relevant studies in these two areas (i.e., applied linguistics and language education).

Feedback provision in translation and interpreting classes is one of the classroom issues starting to gain some research attention. For example, Alfayyadh (2016) compared translation classroom feedback in a Saudi university and a US one. He collected qualitative data from eight translation instructors in the two universities and their students (n = 58). The data sources he depended on included classroom observations, interviews, and document reviews. Alfayyadh’s thematic analysis of his qualitative data included the following categories: ‘a pattern of error detection and rater variability, technology-facilitated feedback, market-oriented feedback, conflicting attitudes toward peer feedback, vague understanding of self-feedback, scarcity of feedback on feedback, and varying forms of dialogue’ (p. IV).

A few studies have explored feedback and assessment practices in interpreting classes. Lee (2011) explored the characteristics of students’ self-assessment of their interpreting performances as compared to their teachers’ assessment. Her aim was to identify whether or not interpreting students are able to produce reliable assessments. She depended mainly on analyzing the content of the written comments given by the students and teachers. Domínguez Araújo (2019) explored feedback perceptions and practices in three postgraduate conference interpreter training programmes. She collected her data from trainers and trainees through individual and focus group interviews, questionnaires, and classroom observation. In her data analysis, Domínguez Araújo focused on identifying the potential divergence and convergence between the trainers’ and trainees’ views on the usefulness of feedback, its preferred types, and the difficulties encountered in handling feedback. Meanwhile, Su (2019) investigated how English–Chinese simultaneous interpreting students provide feedback on their peers’ performance in terms of target text accuracy, presentation, and language quality. Eighteen students were asked to evaluate and comment on three texts interpreted by their peers. The peer evaluators independently played the recordings of the three interpreted texts and completed the evaluation forms.
Su analyzed the students’ feedback quantitatively (i.e., the frequency of feedback comments on each dimension) and qualitatively (i.e., the depth of the evaluative comments).

Overall, the small number of feedback practice studies located—as shown in the above two paragraphs—indicates that this area is still neglected in translation and interpreting research. Feedback studies in writing and applied linguistics research, by contrast, have gained much ground in the last three decades; the progress made in counterpart research in the language education field is reflected in the varied methodological designs of the large number of published feedback provision and practice studies, whereas the few feedback studies conducted in the translation and interpreting fields are limited to actual classroom practices.

Other classroom practices and student learning issues explored in translation and interpreting research have received scant attention. In general, we can locate only single examples of studies dealing with each of these issues, including students’ motivation in translation classes, attitudes toward translating particular types of texts, and their use of machine translation tools. In an action research study, Parvaresh et al. (2019) explored student resistance or demotivation in literary translation classes. They collected their data using two questionnaires, follow-up interviews, and classroom observation. In their data analysis, Parvaresh and colleagues focused on identifying the types of student resistance and its sources (including instruction-related, contextual, motivational, and learner-related factors). On the other hand, Pisanski Peterlin (2013) compared the attitudes of trainee translators and scholars toward translating scientific academic texts characterized by English as a lingua franca features (e.g., non-standard word order, unidiomatic collocations) as compared to texts characterized by native-speaker academic discourse standards. She used semi-structured interviews with trainee translators and with scholars having extensive experience with such use in academic contexts, and a questionnaire with trainee translators. Pisanski Peterlin’s study has important implications for scientific translation teaching and for the types of texts that should be used in its classes. Despite the well-noted increase in the use of machine translation tools, not many studies have investigated translation majors’ actual use of these tools. You (2018) explored modern Japanese learners’ use of Google Translate using questionnaires and interviews. The two data sources were used to identify the frequency of students’ use of Google Translate as compared to other lexical resources, its reliability, the design features they like in it, and the new functions they wish to see incorporated in this machine translation tool. Niño Alonso (2018) also explored learners’ use of Google Translate while performing translation tasks. It is noteworthy that there is a dearth of studies on translation learners’ actual use of other CAT tools. That is why there is a need for research documenting learners’ use of the various types of translation memory and terminology management software.
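The quantitative side of feedback analysis of the kind described in this section typically amounts to counting coded comments per category. The sketch below uses entirely hypothetical coded data (it does not reproduce any cited study’s coding scheme or results) to show the basic tabulation step:

```python
from collections import Counter

# Hypothetical coded peer-feedback comments, each tagged with the dimension
# it addresses (e.g., accuracy, presentation, or language quality).
coded_comments = [
    "accuracy", "language", "accuracy", "presentation",
    "accuracy", "language", "presentation", "accuracy",
]

# Frequency of comments per dimension gives the quantitative analysis;
# the qualitative side (the depth of each comment) still requires manual coding.
frequency = Counter(coded_comments)
for dimension, count in frequency.most_common():
    print(dimension, count)
```

Tabulations like this are usually the input to further comparisons, e.g., between trainer and trainee comment profiles.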

3.7 Trainer Education

Like classroom practices research, translator and interpreter trainer education has been the subject of very few published studies. This may be partially explained by the lack of academic programmes for translator and interpreter trainer education. As Gustafsson et al. (2013) observed in the Swedish context: There is no specific training for the teachers in the training programs. Often, the teacher is an interpreter with long experience or one of the regular teachers at the folk high school or study organization. The same person might teach one or several of the modules as well as supervise language and interpreting training. Courses and seminars for language supervisors and teachers are infrequently arranged. (p. 31)

In fact, the Swedish case described above is also generalizable to other international academic contexts of translation and interpreting. The few translator and interpreter trainer education studies available have addressed a number of issues, including trainers’ curriculum delivery practices, their training needs and motivation, the discrepancies between trainers’ pedagogical beliefs and practices, and the differences between the instructional practices of target-language native and non-native trainers. McDermid (2009) interviewed 34 American Sign Language interpreting educators in Canada about their ontological beliefs with regard to curriculum design. The interviews focused on how these interpreting educators develop and deliver their curricula, the challenges they encounter in this process, the teaching and assessment methods they use, and the literature they depend on. McDermid relied on Eisner’s (2002) framework of the explicit, implied, and null curricula in his discussion of the interview data. On the other hand, Orlando (2019) surveyed 21 translation instructors’ teaching qualifications and explored their views on the didactic competences they need to be trained in, including assessment and evaluation, feedback techniques, classroom techniques, preparing a lesson plan, designing learning objectives, curriculum design, using new technologies, and knowledge of market updates. In a unique study, Enríquez Raído (2018) explored translation teacher motivation and emotions through a longitudinal study that involved analyzing the teaching portfolio of the first seven years of her academic career (2006–2012). In analyzing her teaching portfolio, Enríquez Raído depended on reflexive analysis, which means ‘introspective reflection of oneself as an inquirer’ (p. 364). She discussed three main motivational issues in her teaching portfolio: how her positive teaching expectations and values positively influenced her teaching career and perceived motivational teaching beliefs, the correlation of her students’ positive perceptions of effective teaching and learning with the socio-affective dimensions of her teaching, and the students’ stable perceptions of her distinct trait emotions across different translation courses.

Some studies have adopted a comparative approach to investigating trainer education issues. For example, Wu et al. (2019a) compared translation trainers’ pedagogical beliefs and practices in the Chinese context, drawing upon a mixed-method research approach. They collected quantitative and qualitative data using a questionnaire, interviews, and classroom observation. In their data analysis, they highlighted the discrepancies between the trainers’ beliefs and their instructional practices.
Such discrepancies were discussed in light of internal factors (e.g., teachers’ self-efficacy and motivation) and external ones (e.g., students’ abilities, curricula, and examinations). On the other hand, Pokorn (2009) compared the performance of teachers who are native speakers of the target language with that of non-native speakers when teaching course units requiring translation into language B. She depended on classroom observation of two native-speaker teachers and two non-native speakers of the target language. In her analysis of the observational data, Pokorn focused mainly on the differences between the two types of teachers in their use of teacher-centered instruction and of their first language in the classroom, and in their reliance on terminological tools in their teaching.

Like classroom practices research, translator and interpreter trainer education studies need to borrow research ideas from language teacher education research, which has been flourishing increasingly over the past three decades. Such an approach is likely to help fill the many translator and interpreter trainer education research gaps left unaddressed.

3.8 Conclusion

As noted in the above sections, the development of translation/interpreting learning and teaching practices research varies from one area to another. Table 3.1 summarizes the issues researched so far in each of these areas. Reasonable progress has been made in some areas, such as programme evaluation and needs analysis, whereas research on the other four areas is still evolving. Despite the different levels of progress made in the six areas, further major developments are still needed in all of them. It is necessary to explore the status quo of translation and interpreting education policies, needs, and practices in different regions and countries. Future studies need to adopt a micro approach in investigating learning and teaching practices to inform us about what is really taking place in translation and interpreting classes. More attention should also be given to researching different types of translation and interpreting performance predictors or correlates. All the issues related to translation and interpreting trainer education are still under-researched, which is why they deserve more attention in future studies. Methodological innovations in researching all these areas should also not be neglected.

Table 3.1 Overview of the areas and issues in translation/interpreting learning and teaching practices studies

Translator and interpreter education policy
• Reviewing and describing country-specific translator and interpreter education policies

Translation and interpreting programme evaluation
• Evaluating a number of programmes in one context
• Evaluating one education programme
• Evaluating one course
• Evaluating a particular dimension in the programme (e.g., teaching materials)

Translation and interpreting trainees’ needs analysis
• Trainee-centered needs analysis
• Trainee- and trainer-centered needs analysis
• Employability needs analysis

Translation and interpreting trainees’ performance variables
• Personal variables
• Motivational variables
• Academic variables

Translation and interpreting classroom practices
• Feedback practices
• Students’ motivation
• Students’ attitudes toward translating a particular type of texts and their use of machine translation tools

Translation and interpreting trainer education
• Trainers’ training needs
• Trainer motivation
• Trainer pedagogical beliefs and practices
• Performance differences between native and non-native speaker trainers

A key step in accomplishing these developments will lie in drawing upon language education research, in which those interested in researching translation/interpreting learning and teaching practices can find a wider scope of relevant issues to investigate and methodological approaches to use.

References

A. Niño Alonso. 2018. Exploring the use of Google Translate for independent language learning. Paper presented at the conference on Google Translate & Modern Languages Education, University of Nottingham, June 29, 2018.
Afolabi, S. 2019. Translation and interpretation market needs analysis: Towards optimizing professional translator and interpreter training in Nigeria. The Interpreter and Translator Trainer 13 (1): 104–106. https://doi.org/10.1080/1750399x.2019.1572997.
Alfayyadh, H. 2016. The feedback culture in translator education: A comparative exploration of two distinct university translation programs. Ph.D. dissertation, Kent State University, USA.
Álvarez-Álvarez, S., and V. Arnáiz-Uzquiza. 2017. Translation and interpreting graduates under construction: Do Spanish translation and interpreting studies curricula answer the challenges of employability? The Interpreter and Translator Trainer 11 (2–3): 139–159. https://doi.org/10.1080/1750399x.2017.1344812.
Araghian, R., and B. Ghonsooly. 2018. The relationship between burnout and personality. Babel 64 (5–6): 840–864. https://doi.org/10.1075/babel.00075.ara.
Ben Salamh, S.A. 2012. Second language literacy needs analysis of Saudi translation students at the college of languages. Ph.D. dissertation, Indiana University of Pennsylvania, USA.
Berwick, R. 1989. Needs assessment in language programming: From theory to practice. In The second language curriculum, ed. R.K. Johnson, 48–62. Cambridge: Cambridge University Press.
Brindley, G. 1984. Needs analysis and objective setting in the Adult Migrant Education Program. Sydney: NSW Adult Migrant Education Service.
Brown, J.D. 1989. Language program evaluation: A synthesis of existing possibilities. In The second language curriculum, ed. R.K. Johnson, 222–241. Cambridge: Cambridge University Press.
Chambers, F. 1980. A re-evaluation of needs analysis. ESP Journal 1 (1): 25–33.
Domínguez Araújo, L. 2019. Feedback in conference interpreter education: Perspectives of trainers and trainees. Interpreting 21 (1): 135–150. https://doi.org/10.1075/intp.00023.dom.
Dorado, C., and P. Orero. 2007. Teaching audiovisual translation online: A partial achievement. Perspectives: Studies in Translatology 15 (3): 191–202. https://doi.org/10.1080/13670050802153988.
Eisner, E. 2002. The three curricula that all schools teach. In The educational imagination: On the design and evaluation of school programs, ed. E.W. Eisner, 87–107. Upper Saddle River, NJ: Merrill Prentice Hall.
Enríquez Raído, V. 2018. Teacher motivation and emotions vis-à-vis students' positive perceptions of effective teaching and learning: A self-case study of longitudinal data in reflective translation pedagogy. Translation, Cognition & Behavior 1 (2): 361–390. https://doi.org/10.1075/tcb.00016.enr.
Gabr, M. 2000. Reassessing translation programs in Egyptian national universities: Towards a model translation program. MA thesis, Washington International University, USA.
Gustafsson, K., E. Norström, and I. Fioretos. 2013. Community interpreter training in spoken languages in Sweden. International Journal of Interpreter Education 4 (2): 24–38.
Hale, S., and U. Ozolins. 2014. Monolingual short courses for language-specific accreditation: Can they work? A Sydney experience. The Interpreter and Translator Trainer 8 (2): 217–239. https://doi.org/10.1080/1750399x.2014.92937.


Haro-Soler, M.M. 2017. Self-confidence and its role in translator training: The students' perspective. In Innovation and expansion in translation process research, ed. I. Lacruz and R. Jääskeläinen. ATA Series. Amsterdam: John Benjamins.
Hubscher-Davidson, S. 2009. Personal diversity and diverse personalities in translation: A study of individual differences. Perspectives: Studies in Translatology 17 (3): 446–473. https://doi.org/10.1080/09076760903249380.
Hutchinson, T., and A. Waters. 1987. English for specific purposes: A learning-centered approach. Cambridge: Cambridge University Press.
Hutchinson, T., and A. Waters. 2002. English for specific purposes. Shanghai: Shanghai Foreign Language Education Press.
Hyland, K. 2006. English for academic purposes: An advanced resource book. London: Routledge.
Jaccomard, H. 2018. Work placements in Masters of translation: Five case studies from the University of Western Australia. Meta 63 (2): 532–547. https://doi.org/10.7202/1055151ar.
Khoshsaligheh, M., M. Moghaddas, and S. Ameri. 2019. English translator training curriculum revisited: Iranian trainees' perspectives. Teaching English Language 13 (2): 181–212.
Kiely, R., and P. Rea-Dickins. 2005. Program evaluation in language education. Hampshire: Palgrave Macmillan.
Lee, Y.H. 2011. Comparing self-assessment and teacher's assessment in interpreter training. T&I Review 1: 87–111.
Li, D. 2000. Needs assessment in translation teaching: Making translator training more responsive to social needs. Babel 46 (4): 289–299.
Li, D. 2002. Translator training: What translation students have to say. Meta 47 (4): 513–531. https://doi.org/10.7202/008034ar.
Li, D. 2007. Translation curriculum and pedagogy: Views of administrators of translation services. Target 19 (1): 5–33.
Li, D. 2012. Curriculum design, needs assessment and translation pedagogy, with special reference to translation training in Hong Kong. Beijing: Foreign Language Teaching and Research Press.
Li, X. 2019. Analyzing translation and interpreting textbooks: A pilot survey of business interpreting textbooks. Translation and Interpreting Studies 14 (3): 392–415. https://doi.org/10.1075/tis.19041.li.
Li, P., and Z. Lu. 2011. Learners' needs analysis of a new optional college English course—Interpreting for non-English majors. Theory and Practice in Language Studies 1 (9): 1091–1102. https://doi.org/10.4304/tpls.1.9.1091-1102.
Lim, K. 2006. A comparison of curricula of graduate schools of interpretation and translation in Korea. Meta: Translators' Journal 51 (2): 215–228. https://doi.org/10.7202/013252ar.
Liu, C.F.M. 2017. Perception of translation graduates on translation internships, with mixed-methods approach. Babel 63 (4): 580–599.
Long, M. 2005. Second language needs analysis. Cambridge: Cambridge University Press.
Luo, J., and X. Ma. 2019. Reflection on consecutive interpreting note-taking textbooks published in China. International Journal of Applied Linguistics and Translation 5 (1): 9–14. https://doi.org/10.11648/j.ijalt.20190501.12.
Lynch, B. 1996. Language program evaluation: Theory and practice. Cambridge: Cambridge University Press.
Mahasneh, A. 2013. Translation training in the Jordanian context: Curriculum evaluation in translator education. Ph.D. dissertation, Binghamton University, State University of New York, USA.
McDermid, D. 2009. The ontological beliefs and curriculum design of Canadian interpreter and ASL educators. International Journal of Interpreter Education 1: 7–32.
McDonough, J. 1984. ESP in perspective: A practical guide. London: Collins ELT.
Mo, Y., and S. Hale. 2014. Translation and interpreting education and training: Student voices. International Journal of Interpreter Education 6 (1): 19–34.
Munby, J. 1978. Communicative syllabus design. Cambridge: Cambridge University Press.


Mutlu, O. 2004. A needs analysis study for the English-Turkish translation course offered to management students of the Faculty of Economic and Administrative Sciences at Başkent University. M.A. thesis, Middle East Technical University, Turkey.
Nasrollahi Shahri, N., and Z. Barzakhi Farimani. 2016. A students' needs-analysis for translation studies curriculum offered at master's level in Iranian universities. Research in English Language Pedagogy 4 (1): 26–40.
Ordóñez-López, P. 2015. A critical account of the concept of 'basic legal knowledge': Theory and practice. The Interpreter and Translator Trainer 9 (2): 156–172. https://doi.org/10.1080/1750399x.2015.1051768.
Orlando, M. 2019. Training and educating interpreter and translator trainers as practitioners-researchers-teachers. The Interpreter and Translator Trainer 13 (3): 216–232. https://doi.org/10.1080/1750399x.2019.1656407.
Pan, J., and J. Yan. 2012. Learner variables and problems perceived by students: An investigation of a college interpreting programme in China. Perspectives: Studies in Translatology 20 (2): 199–218. https://doi.org/10.1080/0907676x.2011.590594.
Parvaresh, S., H. Pirnajmuddin, and A. Hesabi. 2019. Student resistance in a literary translation classroom: A study within an instructional conversion experience from a transmissionist approach to a transformationist one. The Interpreter and Translator Trainer 13 (2): 132–151. https://doi.org/10.1080/1750399x.2018.1558724.
Pietrzak, P. 2018. The effects of students' self-regulation on translation quality. Babel 64 (5–6): 819–839. https://doi.org/10.1075/babel.00064.pie.
Pinto, M., and D. Sales. 2007. A case study research for user-centred information literacy instruction: Information behaviour of translation trainees. Journal of Information Science 33 (5): 531–549.
Pinto, M., and D. Sales. 2008. Towards user-centred information literacy instruction in translation. The Interpreter and Translator Trainer 2 (1): 47–74. https://doi.org/10.1080/1750399x.2008.10798766.
Pisanski Peterlin, A. 2013. Attitudes towards English as an academic lingua franca in translation. The Interpreter and Translator Trainer 7 (2): 195–216. https://doi.org/10.1080/13556509.2013.10798851.
Pokorn, N. 2009. Natives or non-natives? That is the question… Teachers of translation into language B. The Interpreter and Translator Trainer 3 (2): 189–208. https://doi.org/10.1080/1750399x.2009.10798788.
Pym, A. 2013. Research skills in translation studies: What we need training in. Across Languages and Cultures 14 (1): 1–14. https://doi.org/10.1556/Acr.14.2013.1.1.
Quezada, C., and A. Westmacott. 2019. Reflections of L1 reading comprehension skills in university academic grades for an undergraduate translation programme. The Interpreter and Translator Trainer 13 (4): 426–441. https://doi.org/10.1080/1750399x.2019.1603135.
Rea-Dickens, P., and K.P. Germaine (eds.). 1998. Managing evaluation and innovation in language teaching: Building bridges. London: Longman.
Richards, J. 2001. Curriculum development in language teaching. Cambridge: Cambridge University Press.
Richards, J., and R. Schmidt. 2010. Longman dictionary of language teaching and applied linguistics. Pearson Education Limited.
Rosiers, A., J. Eyckmans, and D. Bauwens. 2011. A story of attitudes and aptitudes? Investigating individual difference variables within the context of interpreting. Interpreting 13 (1): 53–69. https://doi.org/10.1075/intp.13.1.04ros.
Rosiers, A., and J. Eyckmans. 2017. Birds of a feather? A comparison of the personality profiles of aspiring interpreters and other language experts. Across Languages and Cultures 18 (1): 29–51. https://doi.org/10.1556/084.2017.18.1.2.
Rothwell, A., and T. Svoboda. 2019. Tracking translator training in tools and technologies: Findings of the EMT survey 2017. The Journal of Specialised Translation 32: 26–60.
Salmi, L., and T. Kinnunen. 2015. Training translators for accreditation in Finland. The Interpreter and Translator Trainer 9 (2): 229–242. https://doi.org/10.1080/1750399x.2015.1051772.


Schjoldager, A., K. Rasmussen, and C. Thomsen. 2008. Précis-writing, revision and editing: Piloting the European master in translation. Meta: Translators' Journal 53 (4): 798–813. https://doi.org/10.7202/019648ar.
Schnell, B., and N. Rodríguez. 2017. Ivory tower vs. workplace reality: Employability and the T&I curriculum–balancing academic education and vocational requirements: A study from the employers' perspective. The Interpreter and Translator Trainer 11 (2–3): 160–186. https://doi.org/10.1080/1750399x.2017.1344920.
Shaw, S. 2011. Cognitive and motivational contributors to aptitude: A study of spoken and signed language interpreting students. Interpreting 13 (1): 70–84. https://doi.org/10.1075/intp.13.1.05sha.
Shaw, S., and G. Hughes. 2006. Essential characteristics of sign language interpreting students: Perspectives of students and faculty. Interpreting 8 (2): 195–221. https://doi.org/10.1075/intp.8.2.05sha.
Solová, R. 2015. The Polish sworn translator: Current training profile and perspectives. The Interpreter and Translator Trainer 9 (2): 243–259. https://doi.org/10.1080/1750399x.2015.1051773.
Stern, L., and X. Liu. 2019. See you in court: How do Australian institutions train legal interpreters? The Interpreter and Translator Trainer 13 (4): 361–389. https://doi.org/10.1080/1750399x.2019.1611012.
Su, W. 2019. Interpreting quality as evaluated by peer students. The Interpreter and Translator Trainer 13 (2): 177–189. https://doi.org/10.1080/1750399x.2018.1564192.
Tao, Y. 2019. The development of translation and interpreting curriculum in China's mainland: A historical overview. In Translation studies in China, ed. Z. Han and D. Li. New Frontiers in Translation Studies. Singapore: Springer.
Timarová, S., and H. Salaets. 2011. Learning styles, motivation and cognitive flexibility in interpreter training: Self-selection and aptitude. Interpreting 13 (1): 31–52. https://doi.org/10.1075/intp.13.1.03tim.
Tomassini, E. 2012. Healthcare interpreting in Italy: Current needs and proposals to promote collaboration between universities and healthcare services. The Interpreters' Newsletter 17: 39–54.
West, R. 1994. Needs analysis in language teaching. Language Teaching 27 (1): 1–19.
Williamson, A. 2016. Lost in the shuffle: Deaf-parented interpreters and their paths to interpreting careers. International Journal of Interpreter Education 8 (1): 4–22.
Wu, Z. 2016. Towards understanding interpreter trainees' (de)motivation: An exploratory study. Translation & Interpreting 8 (2): 13–25.
Wu, D., L. Jun Zhang, and L. Wei. 2019a. Developing translator competence: Understanding trainers' beliefs and training practices. The Interpreter and Translator Trainer 13 (3): 233–254. https://doi.org/10.1080/1750399x.2019.1656406.
Wu, D., L. Wei, and A. Mo. 2019b. Training translation teachers in an initial teacher education programme: A self-efficacy beliefs perspective. Perspectives 27 (1): 74–90. https://doi.org/10.1080/0907676x.2018.1485715.
Yan, J., and H. Wang. 2012. Second language writing anxiety and translation: Performance in a Hong Kong tertiary translation class. The Interpreter and Translator Trainer 6 (2): 171–194. https://doi.org/10.1080/13556509.2012.10798835.
You, Z. 2018. British university students' attitude and usage of Google Translate (L2 Japanese). Paper presented at the conference on Google Translate & Modern Languages Education, University of Nottingham, June 29, 2018.

Chapter 4

Translation and Interpreting Assessment Research

Abstract This chapter highlights translation and interpreting assessment research. The chapter covers seven key areas of this translator and interpreter education research type. These areas are: surveying translation and interpreting assessment practices, validating translation/interpreting tests, identifying the difficulty level of the source text, developing performance assessment rubrics, examining rating practices and testing conditions, developing translation/interpreting motivational scales, and investigating user evaluation/reception. In the sections covering these areas, the author discusses the main research issues addressed and highlights exemplary studies representing them. Based on the overview given in this chapter, it is generally concluded that more issues have been researched on assessing interpreting than translation. Keywords Translation research · Translation assessment research · Interpreting assessment research · Translation testing · Interpreting testing · Translation scales

© Springer Nature Singapore Pte Ltd. 2020
M. M. M. Abdel Latif, Translator and Interpreter Education Research, New Frontiers in Translation Studies, https://doi.org/10.1007/978-981-15-8550-0_4

4.1 Introduction: Methodological Approaches

Assessment is part and parcel of the translator and interpreter education process. Each stage in this process requires implementing a particular type (or types) of assessment, based on which the educator takes further decisions related to admitting candidates to the programme, developing the programme and curriculum structure, or meeting trainees' needs. Sawyer (2004) explains the importance of assessment in the education process as follows:

High quality education is based upon sound assessment. In effective instructional programmes, assessment provides convincing evidence to the participants that the curriculum goals and objectives are being met. … Assessment and testing therefore have a pervasive role in educational enterprises; assessment has an integrative function. Strengthening the linkages between curriculum and assessment… therefore improves the quality of the educational program…. [T]he importance of valid and reliable forms of assessment transcends the learner, as crucial as validity and reliability are for a student in an educational programme. Reaching far beyond individual decisions concerning programme entry, degree-track selection, and degree conferral, assessment provides invaluable feedback on learning and instruction for an entire program of study and serves as a basis for its evaluation – without valid and reliable assessment, the success of a program cannot be gauged accurately. Hence, valid and reliable forms of assessment inform the process of curriculum design and implementation. (pp. 5–7)

Given this, assessment research has important implications for translator and interpreter education. It informs translator and interpreter educators about appropriate testing practices, and can help them identify candidates' ability to successfully complete the target training programme, tailor training based on diagnosing trainees' performance levels and problems, and restructure the components of the training programme. In this chapter, the author highlights the research addressing a number of issues in translation and interpreting assessment. Overall, the translation and interpreting assessment research conducted so far has covered the following seven main areas: surveying translation and interpreting assessment practices, validating translation/interpreting tests, identifying the difficulty level of the source text, developing performance assessment rubrics, examining rating practices and testing conditions, developing translation/interpreting motivational scales, and investigating user evaluation. In the following sections, research dealing with these issues is discussed and reviewed. As will be noted, translation and interpreting assessment research is characterized by its quantitative nature. The vast majority of the studies highlighted in the sections below depended on quantitative data sources such as questionnaires, tests, rating rubrics, and psychological scales. In a few studies, the quantitative data were combined with interviews.

4.2 Surveying Translation and Interpreting Assessment Practices

A main area of translation and interpreting assessment research is surveying assessment policy practices. Assessment survey research is characterized by its broad nature. Some assessment survey studies have covered the translation and/or interpreting assessment policy practices in one country, whereas others have been cross-cultural. An example of these cross-cultural assessment practices survey studies is the one conducted by Timarová and Ungoed-Thomas (2008), who surveyed the admission testing practices in 18 university interpreting schools (16 Western, Central and Eastern European, and two from outside Europe). Specifically, they focused on the types and the efficiency of the admission tests the schools use, and the skills these tests assess. They used a questionnaire written in English to collect data about interpreting programme information (e.g., admitted students, graduates, etc.), the content of admission tests, the potential constraints in the testing procedure, and the ideal candidate profile. The questionnaire was completed by representatives from these schools. Timarová and Ungoed-Thomas found that the respondents from the surveyed schools showed reasonable consensus on the admission tests used (e.g., using short consecutive interpreting tests) and on the improvement needed in admission testing procedures.


Another trend in assessment survey interpreting research is investigating accreditation testing in some contexts. For example, Hlavac (2015) surveyed community interpreting credentialing systems in Australia, Canada, Norway, and the UK. Hlavac's aim was to identify 'desirable and target attributes of community interpreter performance and in terms of the harmonization of cross-national systems in a globalized and highly mobile language services industry' (p. 22). He reviewed the following issues in the four countries: characteristics of certification and certification candidates, components of certification testing systems, and attributes of certification and training. Hlavac also discussed these issues in light of the Interpreting – Guidelines for Community Interpreting standard of the International Standards Organization (ISO). Some studies have dealt with authenticity in accreditation tests. Authenticity concerns the correspondence between the tasks of a given test and the target real-life tasks (Bachman and Palmer 1996: 23). According to Angelelli (2009), 'authenticity is the term that the testing community uses to talk about the degree to which tasks on a test are similar to, and reflective of a real world situation towards which the test is targeted' (pp. 20–21). Chen (2009) investigated authenticity in accreditation tests for interpreters in China by comparing the characteristics of four representative tests to professional interpreting tasks. Chen argues for the importance of considering authenticity in interpreter accreditation tests as follows:

The accreditation test should therefore be designed in a way that the test taker's knowledge and skills be engaged in meeting these professional standards, otherwise we have no justification for using the test scores for making decisions as to whether individuals can perform as professional interpreters.
Since correspondence between the characteristics of target language use tasks and those of the test task is at the heart of authenticity, an authentic certification test will have to establish the domain of the practice of the profession, define the knowledge and skills required by a professional interpreter, and engage a mix of abilities and strategies that would be called on in a real-world interaction… An authentic certification test ensures that the criteria for evaluating the test performance correspond to those used for assessing real-life interpreting tasks. (p. 265).

As noted above, the assessment policy practices research reviewed here is concentrated mainly in interpreting. Specifically, this research has covered interpreter admission and accreditation policies in some contexts. Relevant translation studies are very scarce. One such study was reported by Li (2006), who investigated translation testing practices in China using a 27-item questionnaire and semi-structured interviews with translation teachers. The questionnaire was completed by 95 translation teachers, and 12 teachers were interviewed. The questionnaire and interview data in Li's study covered the following issues: the frequency of testing in university translation classes, the perceived importance and role of translation testing, the design and content of the translation tests used, teachers' approaches to grading translation papers, and their attitudes to and satisfaction with current translation testing.


4.3 Validating Translation/Interpreting Tests

The test validation process entails a number of steps in developing a test or measure and ensuring it assesses what it is supposed to assess. Thus, verifying the psychometric characteristics of translation and interpreting tests is of utmost importance for diagnosing translator and interpreter performance and competence levels and, in turn, advancing their education. Discussing the importance of test validation in the interpreting field, Sawyer (2004) states that:

The personal, institutional, and professional consequences of test validation therefore have a direct impact on processes of professionalization in the field of language interpreting. Professionalism implies the ability to articulate to students and clients what constitutes a good or bad interpreting performance, and in a broader sense, why professional, high-caliber translation and interpretation services are mandated in specific situations. Sound education is based upon sound assessment practices, which in turn entails an ongoing process of validation. And if validation is a rhetorical art, it is one at which the community of interpreter educators should excel. (p. 235)

As with assessment practices survey research, test development/validation studies have been much more common in interpreting than in translation. It is worth noting that most of these interpreting studies have addressed the process of validating admission/aptitude tests. According to Chabasse and Kader (2014), accurate admission testing has become a basic step in screening the increasing number of applicants to university interpreting programmes. The main reason for this is arguably the complexity of the interpreter's role and the multiple dimensions it involves; assessing candidates' aptitude for performing such a complex and multidimensional role is therefore an important issue. Meanwhile, the studies dealing with interpreting accreditation and competence testing are much fewer in number. As for translation test validation research, this has been largely limited to back-translation studies. A main trend in validating interpreting aptitude/admission tests is relating candidates' scores on these tests to the scores of trainees attending a particular interpreting programme on a similar measure. For example, Pöchhacker (2009) developed an interpreting aptitude test that combines an auditory cloze exercise with a task demanding high expressional fluency. Pöchhacker validated the test in four rounds involving 120 undergraduate students and found that the test efficiently discriminates between their different interpreting levels and correlates with their intralingual consecutive interpreting performance as measured by a final-course exam. Other approaches used in developing and validating interpreting aptitude/admission tests include exploring trainees' perceptions of skill importance. This approach was used by Hlavac et al. (2012), who developed an intake test designed for trainee candidates applying for short interpreting training in Australia. They evaluated the test using a questionnaire with a group of potential trainees to examine their perceptions of the test. They also used another questionnaire with the trainees upon their completion of the training to explore how they rated the importance and relevance of the test's 10 components to admission to training. A few other studies have followed a comparative approach to examining the validity of some interpreting aptitude/admission tests. An example of these studies is the one reported by Timarová and Ungoed-Thomas (2009), who investigated the predictive validity of a group of interpreting admission tests at a European university.


They related the scores of 184 candidates on the admission tests to their scores on the final simultaneous and consecutive interpreting exams. Another study was reported by Chabasse and Kader (2014), who compared three aptitude tests (Pöchhacker's SynCloze test, Chabasse's cognitive shadowing test, and Timarová's personalized cloze test) by correlating the scores obtained by the students at the beginning of a 2-year conference interpreting programme with their exam grades at the end of the second term. They also compared the three tests in terms of practical feasibility, format, and content, and found that Chabasse's cognitive shadowing test is the most appropriate one. It is worth mentioning that some researchers have called for developing interpreting aptitude/admission tests that cover multiple skill dimensions. Timarová and Ungoed-Thomas (2009), for instance, criticized the early aptitude/admission tests for merely focusing on measuring candidates' cognitive and linguistic capabilities:

Despite the multitude of factors listed as relevant for skills acquisition, the literature in conference interpreter education discusses admissions tests almost exclusively in terms of aptitude, often limited to cognitive and linguistic abilities, such as ability to analyze text, verbal fluency, and memory…. Since aptitude tests are at the center of admissions testing for conference interpreter education programs, it is pertinent to consider what constitutes aptitude for interpreting. While the term 'aptitude' is used frequently in interpreting studies literature … authors hardly ever provide a definition for aptitude. It is usually understood as a pre-requisite for education. However, the pre-requisites are based almost exclusively on intuition and the experience of educators. (pp. 227–228)
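The predictive-validity design described above boils down to a simple statistic: Pearson's correlation between admission-test scores and later exam grades. The sketch below illustrates this with invented data (the figures are not from any of the studies cited); the Pearson formula is computed from scratch with the standard library only.

```python
# Illustrative sketch of a predictive-validity check: correlate candidates'
# admission-test scores with their final interpreting exam scores.
# All score data below are invented for demonstration purposes.
from math import sqrt

def pearson_r(xs, ys):
    """Pearson's product-moment correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

admission = [62, 74, 55, 81, 68, 90, 47, 73]   # hypothetical admission scores
final_exam = [58, 70, 60, 85, 65, 88, 50, 78]  # hypothetical final exam scores

r = pearson_r(admission, final_exam)
print(f"predictive validity (Pearson r) = {r:.2f}")
```

A value of r close to 1 would suggest the admission test ranks candidates much as the final exam does; values near 0 would cast doubt on its predictive validity. Real studies of this kind would, of course, also report significance and sample-size considerations.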

Arguably, Timarová and Ungoed-Thomas's note above also applies to some recently developed interpreting aptitude/admission tests. Most validated sign language interpreting aptitude/admission tests are distinctive in covering additional skill dimensions. Bontempo and Napier (2009), for instance, developed an admission test based on a needs analysis of signed language interpreting students in Australia. The test consists of a number of items covering skills such as fluency in written English, fluency in receptive and productive Auslan, presentation skills, shadowing skills (e.g., selective attention, the ability to 'listen and speak' simultaneously in Auslan), paraphrasing/identification of main ideas, dual task performance, consecutive interpreting, and individual traits. The test was piloted with 18 applicants to a graduate interpreting programme, and the admission scores of the successful applicants were compared to their final examination scores. In a recent study, Garrett and Girardin (2019) investigated students' aptitude for interpreting learning by assessing the expressive competence of applicants to a 4-year American sign language interpreting study programme. They compared the American sign language interpreting trainee candidates' scores on a pre-screening measure of expressive competence to the scores obtained by those who completed a 4-semester study of the same major. Apart from these aptitude/admission tests, a few other studies have focused on validating interpreting accreditation and competence tests. The common characteristic of these tests is their dependence on authentic tasks. In the medical interpreting area, Angelelli (2007) reported a study on the development of a measure of medical interpreting skills. She relied on analyzing authentic medical interpreter-mediated encounters in order to identify the main linguistic and interpreting skills used in healthcare settings, and these were used to create scripts forming the core of the tests developed for an interpreter training programme. Angelelli validated these scripts using focus group interviews with community members, interpreters, and healthcare providers. The scripts were also video recorded, field tested, and piloted in health interpreting settings. Similarly, Vermeiren et al. (2009) developed a professional profile for social interpreters in Belgium and for the graders of their certification exams. Sign language interpreting accreditation and competence tests also resemble the above-mentioned admission tests in covering additional skills. For example, the competence tests Seal (2004) used focus on personality traits, visual-motor skills, and abstract reasoning. Likewise, the sign language interpreting test battery developed by Lopez Gomez et al. (2007) examines testees' perceptual-motor coordination, cognitive abilities, and personality traits. In the Canadian context, Russell and Malcolm (2009) designed a comprehensive and responsive certification test in light of their review of the certification of American sign language-English interpreters. This test is based on authentic tasks and subtasks interpreters are to perform. Measure validation studies in the translation field have been particularly common with regard to translated research instruments. Assessing the quality of translated or multilingual research instruments has become an essential process in cross-cultural studies (Colina et al. 2017). A very common method used in validating research instruments is back-translation. Colina et al.
(2017) define this method as ‘(a) translation (target text [TT1]) of the source text (ST), (b) translation (TT2) of TT1 back into the source language, and (c) comparison of TT2 with ST to make sure there are no discrepancies’ (pp. 267–268). Brislin’s (1970) work is perhaps the earliest on back-translation. Empirical studies depending on back-translation in validating translated research instruments include those reported by Guillemin et al. (1993), Cella et al. (1998), Eremenco et al. (2005), Weinstein et al. (2015), and Phongphanngam and Lach (2019). For a theoretical discussion of the issues related to back-translation, see Tyupa (2011). Apart from this back-translation validation research, studies validating translation tests are very scarce. One example is Stansfield et al. (1992), who validated the 28-item Spanish into English Verbatim Translation Exam (SEVTE) and identified the constructs it assesses using two forms of the exam.
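The comparison step (c) above is ultimately a human judgement, but a crude automatic screen can flag candidate ST–TT2 discrepancies before reviewers examine them. The following sketch is purely illustrative (the function name, example sentences, and the idea of a string-similarity screen are my own assumptions, not part of any validated back-translation protocol):

```python
from difflib import SequenceMatcher

def backtranslation_similarity(source_text: str, back_translated: str) -> float:
    """Crude surface similarity (0-1) between the source text (ST) and the
    back-translation (TT2), usable only as a first flag for discrepancies;
    real validation relies on human comparison, not string matching."""
    return SequenceMatcher(None, source_text.lower(),
                           back_translated.lower()).ratio()

# Hypothetical questionnaire item and its back-translation:
st = "How often do you feel tired during the day?"
tt2 = "How frequently do you feel tired during the day?"
score = backtranslation_similarity(st, tt2)
print(round(score, 2))
```

A low ratio would merely prompt closer inspection of the item; it says nothing about whether the discrepancy matters semantically.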

4.4 Identifying the Difficulty Level of the Source Text

According to Hale and Campbell (2002), the level of source text difficulty is an important criterion for choosing translator/interpreter training and examination materials. Sun and Shreve (2014) explain the importance of determining the difficulty level of the source text in translation as follows:


Research in translation difficulty can contribute greatly to our understanding of the translation process in terms of relationships between text characteristics, translator behaviors, and translation quality. Finding a way to measure translation difficulty will help translation teachers prepare properly leveled passages for translation exercises and language service providers have a better idea of the translation difficulty level of the materials. (p. 122)

In the early related studies, Campbell (1999), Campbell and Hale (1999), and Hale and Campbell (2002) proposed assessing the difficulty of the source text using the number of different renditions made in translating a particular textual item. Their rationale is that more alternative renditions indicate greater cognitive effort exerted in translating the item. However, Hale and Campbell (2002) found no clear relationship between the alternative renditions generated and the accuracy of the translated text. In sight translation, Wu (2019) studied the text characteristics associated with perceived task difficulty. In this study, 29 undergraduate interpreting students sight-translated six texts with different characteristics. The accuracy and fluency of the students’ performance on these tasks were correlated with lexical and syntactic variables in the texts interpreted, such as sophisticated word types and the mean length of t-units. Meanwhile, Wu collected data about the students’ perceptions of text difficulty from their reflective essays, which were subjected to a thematic analysis. Influenced by reading research, a few translation and interpreting studies have attempted to examine how validly readability indices can predict source text difficulty and complexity. Though text readability has long been studied in the applied linguistics field (for a review, see Zamanian and Heydari 2012), it has received little attention in translation and interpreting studies. The few translation and interpreting readability studies were conducted by Jensen (2009), Liu and Chiu (2009), and Sun and Shreve (2014).
Jensen (2009), who tested the validity of readability indices, word frequency, and nonliteral expressions (i.e., idioms, metaphors, and metonyms) in determining source text complexity, concludes that ‘difficulty and level of complexity cannot be assumed to be synonymous, but it is argued that we can go some way towards predicting the probable degree of difficulty of a text by employing a battery of objective measures. The extent to which the two concepts overlap will obviously require further testing’ (p. 77). Jensen thus raises questions about whether or not readability indices can potentially predict source text complexity. Sun and Shreve (2014) also tried to assess the difficulty of source texts using a multidimensional scale of mental workload along with translation quality scores, time spent on the task, and a readability formula. Their data consisted of short English–Chinese translation passages and a pre-/post-translation questionnaire for rating task difficulty. While Sun and Shreve found that task cognitive load is a reliable indicator of task difficulty, translation quality scores were not found to predict it. As for the time spent on the task and the readability formula, these were found to relate only weakly to translation task difficulty. Sun and Shreve’s (2014) study revealed that ‘as the readability level increased, translation difficulty score tended to decrease. However, the association between the two variables was weak. In other words, readability formulas cannot predict well the translation difficulty level of a text’ (p. 117). They ascribed the lack of this relationship
to the fact that readability formulas are mainly concerned with text comprehension and were mainly developed in L1 contexts; thus they may not be suitable for L2 learners. With regard to interpreting, Liu and Chiu (2009) tried to identify indicators predicting the difficulty of consecutive interpreting source texts. The indicators tested in their study included the Flesch text readability formula, propositional analysis of information and new concept density, expert judgment (i.e., by professional interpreters, interpreter trainers, and language testing experts), and student interpreters’ ratings. The results of their study revealed that the quantitative indicators did not statistically predict source text difficulty, though information density and sentence length were found to be potential predictors.
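For readers unfamiliar with such indices, the Flesch Reading Ease formula used in the studies above combines average sentence length and average syllables per word: score = 206.835 − 1.015 × (words/sentences) − 84.6 × (syllables/words); higher scores mean easier text. A minimal sketch follows (the vowel-group syllable counter is a naive stand-in for the pronunciation dictionaries real readability tools use):

```python
import re

def count_syllables(word: str) -> int:
    # Naive heuristic: count groups of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    """Flesch Reading Ease: 206.835 - 1.015*(words/sentences)
    - 84.6*(syllables/words)."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / sentences) \
           - 84.6 * (syllables / len(words))

sample = "The cat sat on the mat. It was warm."
print(round(flesch_reading_ease(sample), 1))
```

As the studies above note, such a surface score reflects comprehension ease for L1 readers and need not track translation difficulty.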

4.5 Developing Performance Assessment Rubrics

Another main strand in translation and interpreting assessment research has been concerned with developing rubrics for rating performance. Assessment rubrics can be defined as ‘scoring guides, consisting of specific pre-established performance criteria, used in evaluating student work on performance assessments’ (Mertler 2000, p. 1). Assessment rubrics are generally classified as holistic and analytic. Holistic rubrics give a testee a single overall score according to the identified criteria, defining performance levels only in broad terms. Analytic rubrics, by contrast, assign a separate score to each performance dimension along with a total performance score, and thus depend on more detailed assessment guides. The main problem encountered in developing rubrics for rating translator and interpreter performance is the lack of agreed-upon definitions of the skill(s) to be assessed, i.e., the construct. Angelelli (2009) explicates this issue in translation assessment as follows:

One of the first and most important steps in designing an assessment instrument is the definition of the construct. A construct consists of a clearly spelled out definition of exactly what a test designer understands to be involved in a given skill or ability. This task not only involves naming the ability, knowledge, or a behaviour that is being assessed but also involves breaking that knowledge, ability or behaviour into the elements that formulate a construct (Fulcher, 2003) and can be captured and measured by a rubric. … [T]here is a continuing debate about how to define translation competence and exactly how its constituent elements are to be conceptualized, broken down, interconnected and measured. (p. 13)

Orlando (2011) also refers to two other thorny issues, namely the planning and the methodology of assessment:

The criteria chosen for an assessment task can also vary from one assignment to another and depend again on the object and objectives of the evaluation. When planning the evaluation of a translation, these should be clearly defined and explained and one should always consider what elements are evaluated. Is the objective the overall comprehension of the ST or the overall transfer from the ST to the TT?… The methodology of the evaluation is another unstable element which complicates the task of the assessor and must be thoroughly reflected
upon when designing evaluation tools. Once the object is clearly identified, the next element to consider is the way the evaluation is carried out. Does the assessor compare the ST and the TT? How? Are the factors used objective or subjective? How is the difference marked and the factors weighted? (pp. 298–299)

Not many rubrics or rating scales have been developed for assessing translator and interpreter performance. Overall, more rubrics have been developed for rating translation than interpreting performance. Early quality assessment works focused on discussing linguistic error types in interpreting and translation (e.g., Barik 1971; Kopczyński 1980; Séguinot 1989). Though linguistic errors represent the main component in translation and interpreting assessment, error analysis is viewed in this book as an area closer to product than to assessment research. Therefore, the issue of researching translation and interpreting errors is discussed in detail in Chap. 6. Waddington (2001) tested the validity of four methods of assessing students’ translations: error analysis, error analysis that weights errors by their effect on translation quality, holistic assessment, and error analysis combined with holistic assessment. Waddington verified the validity of the four methods through correlational analyses, and their criterion-related validity was determined by relating them to the translation competence factor. Most of the translation quality rubrics developed so far are analytic. Exceptions include Bahameed’s (2016) holistic rating scale designed for assessing students’ translations. Meanwhile, some of these rubrics were developed for rating particular genres, including legal translation (Phelan 2017) and medical translation (Colina 2008). In two published reports, Colina (2008) validated a tool for rating the quality of translated medical texts. The rating tool she developed is based on the functions of the translated text and the separate aspects or components of translation quality, and it includes descriptors for evaluating aspects such as textual and functional adequacy and the relationship of the source text with the target-language norms.
In her (2009) study, Colina provided further evidence for the reliability of the same evaluation tool by engaging a group of 30 translators and teachers in using it to rate 4–5 translated texts. The results of this later study are generally consistent with those of Colina’s (2008) one. Presas (2012) reported an attempt to adapt an assessment benchmark and rubric for rating students’ translation quality at a Spanish university. Her assessment benchmark includes four criteria (cohesion, coherence, procedures, and presentation), whereas the analytical rubric is concerned with rating students’ pre-translation, text translation, and reflection activities at three performance levels (fail, good–very good, and excellent). The rubric development attempt reported by Presas is not a standardized one, because she merely depended on a questionnaire given to the students at the end of the term to evaluate the assessment used. Responding to the diversity of translation assessment criteria and objectives, Orlando (2011) provided two evaluation tools that were used in evaluating students’ translation performance at an Australian university. The first assessment tool or grid measures translation from a textual and functional angle by focusing on aspects
such as overall comprehension of the source text, overall translation accuracy, omissions/insertions, terminology/word choices, and errors. The second grid is concerned with translation process features along with product ones. Varied approaches have been used in developing the few existing interpreting performance rubrics. J. Lee (2008) developed a rating scale for assessing interpreting performance based on her review of the interpreting and second language literature and her understanding of interpreting quality levels. The scale assesses interpreting accuracy, quality, and delivery. Two groups of experienced professional and novice interpreters used the scale to rate students’ consecutive interpreting performance and provided feedback on its three dimensions. J. Lee calculated the intraclass correlation coefficients of the three dimensions of the scale. On the other hand, Jacobson (2009) developed a rubric for evaluating interpreters’ competence in healthcare settings. The rubric assesses interpreters’ mediated interaction, or their competence in discourse management, and is based on interactional sociolinguistics and conversation analysis. It categorizes interpreters’ discourse management performance into a continuum of four levels: superior, advanced, fair, and poor. S.-B. Lee (2015) also developed an analytic scale for rating undergraduate students’ consecutive interpreting performance.
The scale was developed through three stages: (a) reviewing the literature and identifying 42 criteria covering the content, form, and delivery of interpreter performance; (b) having 31 interpreter trainers rate these criteria in terms of their importance to interpreting performance, a process resulting in a refined 22-criterion scale composed of seven criteria each for content and form and eight for delivery; and (c) determining the weighting values of the draft 22-criterion scale by using it to rate 33 consecutive interpretations produced by Korean undergraduate students and statistically analyzing these assessments. As noted above, researchers have followed different approaches to developing translation and interpreting rubrics. In line with Orlando’s (2011) and Angelelli’s (2009) views above, the components of the rubrics developed in each field also vary as a result of the different translation and interpreting tasks. Additionally, some fields, such as sight translation and sign language interpreting, are still under-researched. Resolving these inconsistencies and addressing these gaps require much future research.
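Mechanically, a weighted analytic scale of the kind S.-B. Lee (2015) describes combines per-criterion band scores into one weighted total. The sketch below is hypothetical: the three dimensions and their weights are invented for illustration, whereas Lee's validated scale has 22 empirically weighted criteria.

```python
# Hypothetical weights for three analytic dimensions; a real validated
# scale would derive these from rater data, not assign them by fiat.
WEIGHTS = {"content": 0.5, "form": 0.3, "delivery": 0.2}

def weighted_score(ratings: dict) -> float:
    """Combine per-dimension band scores (e.g., 1-5) into a single
    weighted total, as analytic rubrics typically do."""
    return sum(WEIGHTS[dim] * ratings[dim] for dim in WEIGHTS)

ratings = {"content": 4, "form": 3, "delivery": 5}
print(round(weighted_score(ratings), 2))  # 4*0.5 + 3*0.3 + 5*0.2 = 3.9
```

The weighting step is where validation studies do their work: deciding which criteria deserve more weight is an empirical question, not a design choice.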

4.6 Examining Rating Practices and Testing Conditions

While the studies reviewed in the above section have dealt with developing performance rating scales, other studies have focused on examining translation/interpreting rating practices and testing conditions. Four key issues have been addressed in this latter type of study: inter-rater reliability, performance features influencing rating, raters’ processes or behaviors, and the influence of assessment modes. Inter-rater reliability generally means the degree of agreement among raters, and the extent to which their ratings are homogeneous or consistent. Very few translation and interpreting assessment studies have probed the issue of inter-rater
reliability. Colina (2008), for instance, examined the inter-rater reliability of her medical translation rating tool and the influence of raters’ backgrounds on their use of the rubric. She involved a group of bilinguals, professional translators, and language teachers in rating the translated medical texts using the designed rubric. These raters completed a questionnaire about their rating experience with the tool (i.e., their perceptions of the training materials and use of the rating tool). This study showed a good inter-rater reliability level for the tool and similarity between the ratings of the teachers and translators. Colina concludes that the process of ‘determining the qualifications of an ‘ideal’ rater would have to be based on sub-competences (e.g. meta-linguistic awareness, knowledge of professional norms, formal writing proficiency in target and source language) and not on group membership’ (p. 121). In another study, Eyckmans et al. (2009) compared the inter-rater reliability of holistic versus analytic ratings of students’ translations. Two trained raters marked 113 translated texts using a holistic rubric, and two other trained raters marked them using an analytic rubric. They found that using the analytic rubric led to a higher degree of inter-subjective agreement between raters. A few other studies have looked at the features influencing performance rating. It is worth noting, however, that the studies located related only to interpreting. This is perhaps attributable to the more complex communicative features of interpreter performance, such as temporal variables and fluency. One study in this research strand was reported by Yu and van Heuven (2017), who examined the correlations of the judged fluency of consecutive interpreting performance with judged accuracy and with objective measures of oral fluency (i.e., the number of filled pauses, articulation rate, and mean length of pauses).
They involved 10 raters in rating accuracy and fluency in the consecutive interpretations made by 12 trainees. Drawing upon the speech analysis tool PRAAT, together with the dysfluency aspects indicated by the raters, Yu and van Heuven identified and calculated 12 acoustic measures of interpreting fluency. They ran correlational analyses between the raters’ judged accuracy and fluency, and between the rated fluency and the objective fluency indicators. In a more recent study, Zhang and Song (2019) investigated the relationship between student interpreters’ use of self-repairs and interpreting quality assessments. They related interpreters’ overt and covert self-repairs to the scores of their interpreting quality. A third group of studies has investigated the processes raters use in particular testing conditions. S.-B. Lee (2019) examined the ways trainers rate students’ interpreting performance holistically. He asked four interpreter trainers to verbalize their thoughts while assessing six consecutive interpreting recordings. He triangulated this think-aloud data with reflective reports, interviews, and video recordings of the trainers’ computer screen activity. In analyzing these data, S.-B. Lee focused on the procedures the trainers followed, the aspects they focused on, and the criteria they relied on while rating interpreting performance. Conde (2012) also explored the behaviors of demanding and lenient raters of students’ translations, including the comments made, the scope and nature of the rating, and the use of praise and criticism. In the interpreting field, Han (2015) used multifaceted Rasch measurement to determine raters’ severity/leniency in assessments of simultaneous interpreting. Nine trained raters
evaluated English-to-Chinese simultaneous interpretations made by 32 interpreters (n = 4 each) using three 8-point scales focusing on information content, fluency, and expression. The scores given were subjected to multifaceted Rasch measurement analysis, which enabled Han to examine raters’ severity and their biased interactions with the assessment criteria and interpreters. Research on trainees’ rating or peer evaluation of translation/interpreting performance is very scant. One of the scarce studies was reported by S.-B. Lee (2017), who examined students’ experience with scale-referenced peer assessment activities in a consecutive interpreting examination. According to S.-B. Lee, ‘peer assessment as a phenomenon experienced by students has been studied to verify their evaluative judgments about its psychometric qualities’ (p. 1017). Three participants were trained in peer assessment and were then engaged in peer feedback activities over two semesters. Lee’s study relied on the pre- and post-examination journals written by the Korean undergraduate students and on interviews with them. The data of the study were discussed in terms of students’ experiences and perceptions of peer assessment activities. The final group of studies in this assessment area compares the influence of testing or assessment modes on testees’ performance. The two studies conducted by Mulayim (2012) and Ding (2017) are examples of this research type. Mulayim (2012) investigated whether the testing mode (i.e., audio, video, and live-simulated modes) influences interpreting students’ scores on accreditation tests. The five students who took part in this study were asked to perform two dialogue interpreting tasks under each mode (a total of six dialogue interpreting tasks per student). They were video-recorded while performing all the tasks. The students’ interpreting performances in each task mode were rated holistically by examiners.
Other data were also collected using a short questionnaire and semi-structured interviews to explore the students’ experiences and feelings under each testing mode. Mulayim discussed the results of his research in light of test anxiety, fairness, bias, and cost and efficiency in the three testing modes. On the other hand, Ding (2017) compared the holistic and propositional analysis methods of assessing interpreting quality. She used the two methods in analyzing the recorded interpreting outputs produced by a group of undergraduate students on an in-class task. Two instructors assessed the transcribed interpreting outputs holistically by examining semantic content features (accuracy, terminological adequacy, coherence, clarity, and completeness) and linguistic performance features (grammatical correctness, target-language norms, fluency, and stylistic adequacy). As for the propositional analysis, an independent assessor compared the propositions (i.e., predicates, connectives, and modifications) in the source and interpreted texts to identify the terminological adequacy, fluency, and accuracy of the target interpreted texts. Ding analyzed the correlations between the scores of the two assessment methods and discussed the merits offered by each.
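Inter-rater reliability statistics such as those reported in the studies above correct raw agreement for the agreement expected by chance. For two raters assigning categorical labels, Cohen's kappa is a standard choice; below is a self-contained sketch with invented pass/fail ratings (the data are hypothetical, and continuous scores would instead call for correlation or intraclass coefficients):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters labelling the same set of
    translations: (observed - expected) / (1 - expected)."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    # Chance agreement from the two raters' marginal label frequencies.
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    if expected == 1:
        return 1.0
    return (observed - expected) / (1 - expected)

a = ["pass", "pass", "fail", "pass", "fail", "pass"]
b = ["pass", "fail", "fail", "pass", "fail", "pass"]
print(round(cohens_kappa(a, b), 2))
```

Kappa of 1 means perfect agreement; values near 0 mean the raters agree no more often than chance, which is why raw percent agreement alone can flatter a rubric.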


4.7 Developing Translation/Interpreting Motivational Scales

Another area of translation/interpreting assessment research is concerned with measuring translator and interpreter motivational constructs. In the writing area, which is comparable to translation, many motivation constructs have been researched, including writing apprehension, anxiety, attitude, self-efficacy, self-concept, value expectancy, and learning goals (for a review, see Abdel Latif 2019). With regard to translation/interpreting, the only motivational constructs that seem to have been researched are translator and interpreter self-ability beliefs (i.e., their self-efficacy and self-concept). Translation/interpreting self-efficacy refers to translators’/interpreters’ perceived ability to perform particular skills or tasks. As for translation/interpreting self-concept, it means translators’/interpreters’ beliefs about their translation/interpreting ability in general. Kiraly (1990) defines translator self-concept as one’s ‘appraisal of his or her competency for translating a particular text’ (p. 100). Thus, unlike translator/interpreter self-efficacy, self-concept is not a task-specific construct. The importance of these two motivational constructs lies in their potential association with the persistence and effort translators/interpreters devote to the target task. Very few translation/interpreting self-efficacy and self-concept scales have been developed. Stansfield et al. (1992) developed an early translation ability self-assessment scale, which asks respondents to rate their ability to perform particular Spanish–English translation tasks using a 4-point Likert format (limited, functional, competent, and superior). The translation tasks given in the scale include newspaper articles and editorials, police reports, legal documents, scientific articles, and training manuals.
Recently, Bolaños-Medina and Núñez (2018) developed a translators’ self-efficacy scale and validated it through determining its psychometric properties. Based on 74 undergraduate students’ responses to the questionnaire, they conducted a factor analysis of the scale, which includes five subscales and 20 items in its final form. The two scales developed by S. Lee (2014) and Bravo and Aguirre (2019) are examples of interpreting self-ability measures. S. Lee (2014) reported a study on the development of an interpreting self-efficacy scale designed for undergraduate trainees in Korea. The initial version of the scale comprised 63 items, which were generated in light of a review of the relevant literature. Lee checked the draft scale for face and content validity, and the scale was then validated by having 413 participants complete it. The internal consistency and exploratory factor analyses resulted in refining the scale to 21 items. On the other hand, Bravo and Aguirre (2019) validated a metacognitive interpreting self-perception measure that assesses interpreters’ self-concept and self-regulation perceptions. The measure, which was trialled with 199 interpreting trainees, was developed based on a review of relevant literature from the fields of interpreting, education, and educational psychology. Based on the factor analysis they conducted, Bravo and Aguirre found that the items of the measure assess four dimensions:
interpreting self-knowledge perceptions, consolidation of the interpreter’s set of criteria, development of an interpreting macro-strategy, and task-focused interpreting flow. As shown above, the scarce research conducted so far on translator/interpreter self-ability beliefs has not yet fully characterized the dimensions of these motivational constructs. Meanwhile, no research seems to have been conducted on the attitudinal, situational, or goal constructs of translation or interpreting motivation (e.g., translation/interpreting anxiety and translation/interpreting learning goals). Addressing these multiple gaps requires extensive future research on translation and interpreting motivation. Three main sources can enrich such future research: the literature on writing motivation, oral communication motivation, and language learning motivation.
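The internal consistency analyses mentioned above (e.g., in S. Lee's 2014 scale validation) typically report Cronbach's alpha, which rises when items co-vary strongly relative to the variance of respondents' total scores. A sketch with hypothetical Likert responses (the three items and all scores are invented for illustration):

```python
from statistics import variance

def cronbach_alpha(item_scores):
    """Cronbach's alpha: (k/(k-1)) * (1 - sum(item variances)/var(totals)).
    item_scores: one list per scale item, each holding respondents' scores."""
    k = len(item_scores)
    sum_item_vars = sum(variance(item) for item in item_scores)
    totals = [sum(resp) for resp in zip(*item_scores)]  # per-respondent totals
    return (k / (k - 1)) * (1 - sum_item_vars / variance(totals))

# Hypothetical data: 3 self-efficacy items answered by 5 trainees (1-5 Likert).
items = [
    [4, 3, 5, 2, 4],
    [4, 2, 5, 3, 4],
    [5, 3, 4, 2, 5],
]
print(round(cronbach_alpha(items), 2))
```

Refinement of a draft scale (as in Lee's 63-to-21-item reduction) often proceeds by dropping items whose removal raises alpha or that load weakly in the factor analysis.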

4.8 Investigating User Evaluation/Reception

User evaluation and reception research focuses on assessing the quality of translated and interpreted practices or products from the perspective of users or audiences. As implied in the name, the participants in this research are not professional raters or trainers but an audience representative of those receiving the translation/interpreting service. This research is based on the assumption that users may receive translation and interpreting services in a way different from professionals or service providers (Gile 1989). As Kotler and Armstrong (1994) state, ‘quality must begin with customer needs and end with customer perception’ (p. 568). Therefore, this research has important implications for translator and interpreter training, as it can raise trainees’ awareness of the criteria for meeting translation/interpreting users’ expectations. Trainees can then consider these criteria or requirements when performing similar tasks in their future workplaces. User perceptions have perhaps been researched more heavily than any other translation/interpreting assessment area. Though user evaluation/reception has been researched with regard to translation, interpreting, audiovisual translation, and sign language interpreting, these fields have received varying degrees of research attention. The literature indicates that user evaluation research originated in the interpreting field (for a historical review, see Kurz 2001). Specifically, early interpreting user evaluation research was published in the late 1980s and early 1990s (e.g., Kurz 1989, 1993; Gile 1990; Ng 1992; Marrone 1993). In these early studies, questionnaires were the dominant data source (Kurz 2001). Some of the user evaluation interpreting studies published in the past two decades have also made use of questionnaires. For example, Russo (2005) investigated users’ evaluations of and preferences in simultaneous film interpreting.
She administered a questionnaire to users watching films interpreted by professionals and by students. Christensen (2011) also explored users’ perceptions of court interpreting, using a questionnaire to collect data about respondents’ expectations of the interpreter’s role, obligations, and performance. Reithofer (2013) also used a questionnaire to compare how the audience perceives the quality of interpreting versus English as a lingua franca. In her study,
139 listeners evaluated the quality of a 15-min marketing-related speech delivered by an Italian non-native speaker of English and of its German interpretation. Reithofer’s comparison of the effects of the two modes of communication showed that the listeners preferred the interpreting mode due to its better cognitive end-result. Some other user evaluation interpreting studies have made use of interviews and audio-recorded comments. For example, Edwards et al. (2005) interviewed people from Chinese, Kurdish, Bangladeshi, Indian, and Polish ethnic minorities living in Manchester and London about their perceptions of the qualities of a good interpreter, and their views on and experiences with using professional and family/friend interpreters. On the other hand, Bartłomiejczyk (2017) explored how European Parliament members and other European Union officials perceive the performance of simultaneous interpreters. She identified and analyzed 230 references or comments made by the speakers on the interpreters or their interpreting output in a corpus of plenary sessions held in the European Parliament over an 8-year period (2005–2012). Bartłomiejczyk’s analysis of the speakers’ relevant references covered the following thematic units: appreciating interpreters’ efforts, raising doubts regarding interpretations, reminding speakers about interpreting’s practical constraints, raising criticism, highlighting the difficulty of interpreting specific items, and giving apologies. Bartłomiejczyk provided quantitative descriptions of these thematic units along with illustrative comments and references for each. A third group of user evaluation interpreting studies has adopted a correlational approach, examining how reception may be influenced by phonological, linguistic, and discoursal features of the interpreted output, such as accents, linguistic reformulation, and reported speech use.
For example, Setton and Motta (2007) studied the determinants of interpreting quality by investigating the predictors of users’ quality judgments. They correlated international organization users’ independent evaluations of 24 simultaneous interpreting transcripts with the linguistic reformulation and elaboration, accuracy, style, and fluency of the interpreted outputs. Hale et al. (2011) also evaluated the influence of accented versus non-accented interpreting on users’ ratings of witnesses’ testimony. Relying on questionnaire data, Cheung (2015) confirmed the hypothesis that listeners could scapegoat an interpreter with a non-native accent for their own unsatisfactory comprehension scores. In another study, Cheung (2014) looked at perceptions of a court interpreter’s neutrality when using reported rather than direct speech. Cheung’s study relied upon a mock trial where the participants in the two experimental groups received interpretations with regular switches to two reported speech types: (a) the pronoun group, exposed to interpreting with third person pronouns (e.g., ‘he said’); and (b) the title group, exposed to interpreting with professional titles (e.g., ‘the judge said’) (p. 191). Following this, the participants in the two groups completed a 5-point Likert scale questionnaire evaluating the interpreters’ neutrality and alignment. Becerra (2016) explored how the first impression users form of simultaneous interpreters during the first listen may influence their final assessment of the interpretation. She used the interpretations of six European Parliament speeches to identify the role of first impressions in quality assessment. Becerra used four questionnaires

76

4 Translation and Interpreting Assessment Research

to collect data about users' previous expectations, their evaluation of the interpreter's performance, their recognition of the interpreter in the second performance, and their views on the role first impressions played in the way they assessed the interpretations. In her data analysis, Becerra focused particularly on the role of verbal and nonverbal aspects in forming a first impression of interpreter performance. User evaluation translation studies, on the other hand, are generally few. Many of them have focused on comparing user evaluation of human-translated versus post-edited machine-translated texts. For example, Bowker and Ehgoetz (2007) and Bowker (2009) surveyed the reception of rapidly and maximally post-edited machine-translated versus human-translated texts among the English and French linguistic communities in Canada. In a later study, Bowker and Ciro (2015) investigated user perceptions of translated portions of a library's website (a professional human translation, a maximally post-edited machine translation, a rapidly post-edited machine translation, and a raw machine translation) (p. 165). These three studies depended on surveys completed by a large number of respondents. In Screen's (2019) study, end-users' perceptions of the quality of professional translation versus post-edited machine translation were assessed by combining eye-tracking data with the participants' ratings. The eye movements of two groups of participants were recorded while one group read the human-translated texts and the other read the post-edited machine-translated ones (n = 6 participants in each group). Following the eye-tracking reading task, the 12 participants were asked to rate the readability and comprehensibility of the translated texts they had read using a 5-point Likert scale.
User evaluation/reception studies of audiovisual translation are characterized by varied data sources, including questionnaires, retrospective semi-structured interviews, and eye-tracking. Gambier (2006) differentiates among three types of reception in audiovisual translation studies: cognitive reactions, normally assessed by eye-tracking data; responses, measured by interviews and questionnaires; and repercussions, operationalized in terms of users' attitudes toward subtitled materials. It is also noted that most user evaluation/reception audiovisual translation studies have been conducted on subtitling rather than dubbing. One of the few dubbing reception studies was reported by Fernández-Torné and Matamala (2015), who involved blind and partially sighted people in assessing two synthetic and two natural voices in films dubbed into Catalan. In this study, 67 participants completed a 9-item questionnaire evaluating the four voices they listened to. The questionnaire assessed the following features of the four dubbing performances: overall impression, accentuation, listening effort, speech pauses, intonation, naturalness, pleasantness, and acceptance. Reception subtitling studies can be classified into two main categories: user preference studies and cognitive processing ones. User preference studies have focused on exploring the specifications of good subtitles as perceived by audiences. For example, Mangiron (2016) conducted a small-scale study on user reception of subtitles in video games, collecting data with eye-tracking technology and a questionnaire to examine the types of subtitles most suitable for video games in light of users' reported preferences. Manchón and Orero (2018) also
tried to identify the subtitle setting preferences (i.e., subtitle position, box, and size) of end-users in two age categories. Aleksandrowicz (2019), on the other hand, conducted a study on the reception of subtitled song lyrics. He tested audience reception of 88 songs subtitled into Polish, collecting 209 survey responses on viewers' satisfaction with the rhymes in the subtitled songs. Most cognitive processing subtitling studies have made use of eye-tracking to examine users' cognitive effort while reading subtitles. For example, Orrego-Carmona (2016) conducted a reception study of non-professional and professional subtitling. The sample of this study consisted of 52 Spanish and Catalan native speaker participants with two different English proficiency levels (26 with low English proficiency, and 26 with high proficiency). They were shown three subtitled videos, each about three minutes long. The subtitles shown were a professional version and two non-professional ones. Orrego-Carmona collected his data using questionnaires, eye-tracking, and interviews. He analyzed these triangulated data under the following categories: reception capacity, subtitle-reading effort, self-reported comprehension, attention allocated to the image area, number of subtitles skipped, and attention shift. Zheng and Xie (2018) also explored how the inclusion of explanatory captions in a subtitled video is received by viewers of varied educational backgrounds. They combined questionnaire and retrospective interview data with insights obtained from eye-tracking. In their data analysis, Zheng and Xie focused on identifying the impact of including explanatory captions on viewers' cognitive processing and on the time spent reading the captions and subtitles and viewing images. Another cognitive processing study was conducted by Szarkowska and Gerber-Morón (2019), who compared viewers' processing of three-line and two-line subtitles.
The participants of their study, 74 normal-hearing, hard-of-hearing, and deaf viewers, watched one video with two-line subtitles and another with three-line subtitles. Szarkowska and Gerber-Morón combined semi-structured interview and eye-tracking data to examine the effect of the two- and three-line subtitle videos on the viewers' enjoyment, preferences, comprehension, and cognitive processing load. Very few studies have explored user evaluation/reception issues in the sign language interpreting context. As may be expected, these studies have depended mainly on surveys. For example, Xiao and Li (2013) surveyed the reception of sign language interpreted television programmes in the Chinese d/Deaf community. They collected their data using a questionnaire and interviews. The questionnaire was completed in printed and electronic forms by 336 respondents from 104 Chinese cities, and it probed their reception and comprehension of these programmes. Specifically, the questionnaire included 11 questions concerned with respondents' biodata profiles, their reception and comprehension of the programmes, their reasons for watching or not watching them, and the perceived quality of these programmes. As for the interviews, these were conducted with 18 sign language interpreters with experience in interpreting these televised programmes, and they focused on the interpreters' explanations of the issues raised in the questionnaire questions. In another study, De Wit and Sluis (2014) examined deaf user evaluation of sign language interpreters in the Netherlands. They collected their data using an online survey and a paper one that was based
on real-life settings. The online survey asked respondents about the criteria they use for evaluating the quality of an interpreting service, their use of interpreters, and their selection of interpreter training. The second survey was administered in four live interpreting situations and asked about respondents' evaluation of the interpreters and their interpersonal skills in these specific situations, and about the relationship between users and interpreters. One hundred and ninety deaf sign language users responded to the online questionnaire, whereas 70 completed the live-setting one. Methodologically speaking, De Wit and Sluis referred to some limitations of the two surveys they used. According to them, the main limitation of the online questionnaire lies in the lack of technology accessibility for some user groups, whereas the live-situation questionnaire is limited by the researchers' inability to control variables.
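As a rough illustration of how the eye-tracking reception studies reviewed above derive measures such as subtitle dwell time and skipped subtitles, the sketch below computes both from a toy fixation log. The area-of-interest labels, display windows, and fixation values are all invented and do not reproduce any particular study's coding scheme:

```python
# Toy fixation log: (onset_ms, duration_ms, area_of_interest).
fixations = [
    (0,   250, "image"),
    (250, 400, "subtitle"),
    (650, 300, "image"),
    (950, 350, "subtitle"),
]

# Display windows (start_ms, end_ms) of three consecutive subtitles.
subtitle_onsets = [(0, 600), (600, 1200), (1200, 1800)]

def dwell_share(fixes, aoi):
    """Proportion of total viewing time spent fixating a given area of interest."""
    total = sum(d for _, d, _ in fixes)
    return sum(d for _, d, a in fixes if a == aoi) / total

def skipped_subtitles(fixes, windows):
    """Indices of subtitles that never received a fixation while on screen."""
    read = set()
    for onset, _, aoi in fixes:
        if aoi != "subtitle":
            continue
        for i, (start, end) in enumerate(windows):
            if start <= onset < end:
                read.add(i)
    return [i for i in range(len(windows)) if i not in read]

print(dwell_share(fixations, "subtitle"))          # share of time spent on subtitles
print(skipped_subtitles(fixations, subtitle_onsets))  # subtitles never fixated
```

Real eye-tracking software exports far richer records (saccades, pupil size, screen coordinates), but the reduction to a handful of interpretable measures follows this general pattern.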

4.9 Conclusion

As has been shown in the previous sections, the translation and interpreting assessment research conducted so far has dealt with seven main areas: surveying assessment practices, validating and developing translation/interpreting tests, identifying the difficulty level of the source text, developing performance assessment rubrics, examining rating practices and testing conditions, developing translation/interpreting motivational scales, and investigating user evaluation/reception. Table 4.1 provides a summary of the issues covered in these seven areas. As noted in the above sections and also in the table, the place of translation and interpreting studies varies from one assessment research area to another. For example, there is much more interpreting than translation research on aptitude/admission and accreditation testing, and on user evaluation/reception. Conversely, there is more translation than interpreting research addressing the difficulty level of the source text. However, the overall picture obtained from this review is that more issues have been researched in assessing interpreting than in assessing translation. Overall, the above review implies that translation and interpreting assessment research is still evolving. In the past three decades, great developments have been made in the broader area of language testing research, due in part to the existence of international journals publishing relevant studies, including Assessing Writing, Language Assessment Quarterly, and Language Testing. Launching journals specialized in assessing translation and interpreting will therefore be key to advancing this type of translator/interpreter education research. Meanwhile, translation and interpreting assessment research could be fostered through creating specialized research centers in higher education institutions.


Table 4.1 Overview of the research areas and issues in translation/interpreting assessment studies

Research area — Main issues researched so far

Country-specific translation and interpreting assessment practices
• Cross-cultural admission testing practices
• Interpreting accreditation testing
• Translation testing practices

Translation/interpreting test validation
• Interpreting aptitude/admission test validation
• Interpreting accreditation and competence test validation
• Research instrument back-translation validation

Identifying the difficulty level of the source text
• Text characteristics associated with the perceived translation task difficulty
• Readability as a predictor of source text difficulty
• Translation source text complexity
• Interpreting source text difficulty

Developing performance assessment rubrics
• Developing translation quality rubrics
• Developing interpreting assessment rubrics

Examining rating practices and testing conditions
• Inter-rater reliability
• Performance features influencing raters' evaluation
• Raters' evaluation processes
• Rating performance in different testing modes

Developing translation/interpreting motivational scales
• Translation ability self-assessment
• Interpreting ability self-assessment

Investigating user evaluation/reception
• Interpreting user reception
• Interpreting performance aspects associated with user evaluation
• User evaluation of human versus post-edited machine translation
• User preferences in audiovisual translation
• User cognitive processing of subtitling
• Sign language interpreting user evaluation
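One of the rating-practice issues listed in Table 4.1, inter-rater reliability, is commonly quantified with chance-corrected agreement statistics such as Cohen's kappa. The following minimal sketch uses invented band ratings from two hypothetical raters of the same ten interpreting performances:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters assigning categorical scores to the same items."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    # Agreement expected by chance, from each rater's marginal category frequencies.
    expected = sum(counts_a[k] * counts_b[k] for k in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Invented ratings for ten performances (not from any cited study).
a = ["pass", "pass", "fail", "borderline", "pass",
     "fail", "pass", "borderline", "fail", "pass"]
b = ["pass", "fail", "fail", "borderline", "pass",
     "fail", "pass", "pass", "fail", "pass"]

print(round(cohens_kappa(a, b), 2))
```

Kappa values are conventionally read against benchmarks (e.g., values above roughly 0.6 as substantial agreement), though such interpretive thresholds vary across fields.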

References

Abdel Latif, M.M.M. 2019. Unresolved issues in defining and assessing writing motivational constructs: A review of conceptualization and measurement perspectives. Assessing Writing 42. https://doi.org/10.1016/j.asw.2019.100417. Aleksandrowicz, P. 2019. Subtitling song lyrics in films: Pilot reception research. Across Languages and Cultures 20 (2): 173–195. https://doi.org/10.1556/084.2019.20.2.2. Angelelli, C. 2007. Assessing medical interpreters: The language and interpreting testing project. The Translator 13 (1): 63–82. https://doi.org/10.1080/13556509.2007.10799229. Angelelli, C.V. 2009. Using a rubric to assess translation ability: Defining the construct. In Testing and assessment in translation and interpreting studies: A call for dialogue between research and practice, ed. C.V. Angelelli and H.E. Jacobson, 13–48. Amsterdam: John Benjamins Publishing Company.
Bachman, L., and Adrian Palmer. 1996. Language testing in practice. Oxford: Oxford University Press. Bahameed, A. 2016. Applying assessment holistic method to the translation exam in Yemen. Babel 62 (1): 135–149. https://doi.org/10.1075/babel.62.1.08bah. Barik, H.C. 1971. A description of various types of omissions, additions and errors of translation encountered in simultaneous interpretation. Meta 16 (4): 199–210. https://doi.org/10.7202/001972ar. Bartłomiejczyk, M. 2017. The interpreter's visibility in the European Parliament. Interpreting 19 (2): 159–185. https://doi.org/10.1075/intp.19.2.01bar. Becerra, O.G. 2016. Do first impressions matter? The effect of first impressions on the assessment of the quality of simultaneous interpreting. Across Languages and Cultures 17 (1): 77–98. https://doi.org/10.1556/084.2016.17.1.4. Bolaños-Medina, A., and J.L. Núñez. 2018. A preliminary scale for assessing translators' self-efficacy. Across Languages and Cultures 19 (1): 53–78. https://doi.org/10.1556/084.2018.19.1.3. Bontempo, K., and J. Napier. 2009. Getting it right from the start: Program admission testing of signed language interpreters. In Testing and assessment in translation and interpreting studies: A call for dialogue between research and practice, ed. C.V. Angelelli and H.E. Jacobson, 247–296. Amsterdam: John Benjamins Publishing Company. Bowker, L. 2009. Can machine translation meet the needs of official language minority communities in Canada? A recipient evaluation. Linguistica Antverpiensia 8: 123–155. Bowker, L., and J.B. Ciro. 2015. Investigating the usefulness of machine translation for newcomers at the public library. Translation and Interpreting Studies 10: 165–186. Bowker, L., and M. Ehgoetz. 2007. Exploring user acceptance of machine translation output: A recipient evaluation. In Across boundaries: International perspectives on translation, ed. D. Kenny and K. Ryou, 209–224. Newcastle-upon-Tyne: Cambridge Scholars Publishing. Bravo, F., and E.
Aguirre. 2019. Metacognitive self-perception in interpreting. Translation, Cognition & Behavior 2 (2): 147–164. https://doi.org/10.1075/tcb.00025.fer. Brislin, R.W. 1970. Back-translation for cross-cultural research. Journal of Cross-Cultural Psychology 1: 185–216. Campbell, S., and S. Hale. 1999. What makes a text difficult to translate? Refereed Proceedings of the 23rd Annual ALAA Congress. http://www.cltr.uq.edu.au/alaa/proceed/camphale.html. Cella, D., L. Hernandez, A.E. Bonomi, M. Corona, M. Vaquero, G. Shiomoto, and L. Baez. 1998. Spanish language translation and initial validation of the functional assessment of cancer therapy quality-of-life instrument. Medical Care 36: 1407–1418. Chabasse, C., and S. Kader. 2014. Putting interpreting admissions exams to the test: The MA KD Germersheim Project. Interpreting 16 (1): 19–33. https://doi.org/10.1075/intp.16.1.02cha. Chen, J. 2009. Authenticity in accreditation tests for interpreters in China. The Interpreter and Translator Trainer 3 (2): 257–273. https://doi.org/10.1080/1750399X.2009.10798791. Cheung, A.K. 2014. The use of reported speech and the perceived neutrality of court interpreters. Interpreting 16 (2): 191–208. https://doi.org/10.1075/intp.16.2.03che. Cheung, A.K. 2015. Scapegoating the interpreter for listeners' dissatisfaction with their level of understanding: An experimental study. Interpreting 17 (1): 46–63. https://doi.org/10.1075/intp.17.1.03che. Christensen, T. 2011. User expectations and evaluation: A case study of a court interpreting event. Perspectives: Studies in Translatology 19 (1): 1–24. https://doi.org/10.1080/09076761003728554. Colina, S. 2008. Translation quality evaluation: Empirical evidence for a functionalist approach. The Translator 14 (1): 97–134. https://doi.org/10.1080/13556509.2008.10799251. Colina, S. 2009. Further evidence for a functionalist approach to translation quality evaluation. Target 21 (2): 235–264. https://doi.org/10.1075/target.21.2.02col. Colina, S., N.
Marrone, M. Ingram, and D. Sánchez. 2017. Translation quality assessment in health research: A functionalist alternative to back-translation. Evaluation and the Health Professions 40 (3): 267–293. https://doi.org/10.1177/0163278716648191.
Conde, T. 2012. The good guys and the bad guys: The behavior of lenient and demanding translation evaluators. Meta 57 (3): 763–786. https://doi.org/10.7202/1017090ar. De Wit, M., and I. Sluis. 2014. Sign language interpreter quality: The perspective of deaf sign language users in the Netherlands. The Interpreters' Newsletter 19: 63–85. Ding, Y.L. 2017. Using propositional analysis to assess interpreting quality. International Journal of Interpreter Education 9 (1): 17–39. Edwards, R., B. Temple, and C. Alexander. 2005. Users' experiences of interpreters: The critical role of trust. Interpreting 7 (1): 77–95. https://doi.org/10.1075/intp.7.1.05edw. Eremenco, S., D. Cella, and B.J. Arnold. 2005. A comprehensive method for the translation and cross-cultural validation of health status questionnaires. Evaluation and the Health Professions 28: 212–232. Eyckmans, J., P. Anckaert, and W. Segers. 2009. The perks of norm-referenced translation evaluation. In Testing and assessment in translation and interpreting studies: A call for dialogue between research and practice, ed. C.V. Angelelli and H.E. Jacobson, 73–94. Amsterdam: John Benjamins Publishing Company. Fernández-Torné, A., and A. Matamala. 2015. Text-to-speech vs. human voiced audio descriptions: A reception study in films dubbed into Catalan. The Journal of Specialised Translation 24: 61–88. Fulcher, G. 2003. Testing second language speaking. London: Pearson Longman. Gambier, Y. 2006. Multimodality and audiovisual translation. Paper presented at Marie Curie Euroconferences MuTra (May 1–5), Copenhagen. Garrett, B., and E.G. Girardin. 2019. American sign language competency: Comparing student readiness for entry into a four-year interpreter degree program. International Journal of Interpreter Education 11 (1): 20–32. Gile, D. 1989. La communication linguistique en réunion multilingue. Les difficultés de la transmission informationnelle en interprétation simultanée. Thèse, Université Paris III. Gile, D. 1990.
L’évaluation de la qualité de l’interprétation par les délégués: une étude de cas. The Interpreters’ Newsletter 3: 66–71. Guillemin, F., C. Bombardier, and D. Beaton. 1993. Cross-cultural adaptation of health-related quality of life measures: Literature review and proposed guidelines. Journal of Clinical Epidemiology 46: 1417–1432. Hale, S., and S. Campbell. 2002. The interaction between text difficulty and translation accuracy. Babel 48 (1): 14–33. https://doi.org/10.1075/babel.48.1.02hal. Hale, S., N. Bond, and J. Sutton. 2011. Interpreting accent in the courtroom. Target 23 (1): 48–61. https://doi.org/10.1075/target.23.1.03hal. Han, C. 2015. Investigating rater severity/leniency in interpreter performance testing: A multifaceted Rasch measurement approach. Interpreting 17 (2): 255–283. https://doi.org/10.1075/intp.17.2.05han. Hlavac, J. 2015. Formalizing community interpreting standards: A cross-national comparison of testing systems, certification conventions and recent ISO guidelines. International Journal of Interpreter Education 7 (2): 21–38. Hlavac, J., M. Orlando, and S. Tobias. 2012. Intake tests for a short interpreter-training course: Design, implementation, feedback. International Journal of Interpreter Education 4 (2): 21–45. Jacobson, H.E. 2009. Moving beyond words in assessing mediated interaction: Measuring interactional competence in healthcare settings. In Testing and assessment in translation and interpreting studies: A call for dialogue between research and practice, ed. C.V. Angelelli and H.E. Jacobson, 49–72. Amsterdam: John Benjamins Publishing Company. Jensen, K.T.H. 2009. Indicators of text complexity. Copenhagen Studies in Language 37: 61–80. Kiraly, D. 1990. Toward a systematic approach to translation skills instruction. Ph.D. diss., University of Illinois. Kopczyński, A. 1980. Conference interpreting: Some linguistic and communicative problems. Poznań: Adam Mickiewicz University Press.
Kotler, P., and G. Armstrong. 1994. Principles of marketing, 6th ed. Englewood Cliffs, NJ: Prentice-Hall.
Kurz, I. 1989. Conference interpreting: User expectations. In Coming of age: Proceedings of the 30th annual conference of the American Translators Association, ed. D.L. Hammond, 143–148. Medford, NJ: Learned Information. Kurz, I. 1993. Conference interpretation: Expectations of different user groups. The Interpreters’ Newsletter 5: 13–21. Kurz, I. 2001. Conference interpreting: Quality in the ears of the user. Meta 46 (2): 394–409. https://doi.org/10.7202/003364ar. Lee, J. 2008. Rating scales for interpreting performance assessment. The Interpreter and Translator Trainer 2 (2): 165–184. https://doi.org/10.1080/1750399X.2008.10798772. Lee, S. 2014. An interpreting self-efficacy (ISE) scale for undergraduate students majoring in consecutive interpreting: Construction and preliminary validation. The Interpreter and Translator Trainer 8 (2): 183–203. https://doi.org/10.1080/1750399X.2014.929372. Lee, S.-B. 2015. Developing an analytic scale for assessing undergraduate students’ consecutive interpreting performances. Interpreting: International Journal of Research and Practice in Interpreting 17 (2): 226–254. https://doi.org/10.1075/intp.17.2.04lee. Lee, S.-B. 2017. University students’ experience of ‘scale-referenced’ peer assessment for a consecutive interpreting examination. Assessment & Evaluation in Higher Education 42 (7): 1015–1029. Lee, S.-B. 2019. Holistic assessment of consecutive interpretation: How interpreter trainers rate student performances. Interpreting 21 (2): 245–269. https://doi.org/10.1075/intp.00029.lee. Li, D. 2006. Making translation testing more teaching-oriented: A case study of translation testing in China. Meta: Translators’ Journal 51 (1): 72–88. Liu, Minhua, and Yu-Hsien Chiu. 2009. Assessing source material difficulty for consecutive interpreting: Quantifiable measures and holistic judgment. Interpreting 11 (2): 244–266. https://doi.org/10.1075/intp.11.2.07liu.
Lopez, Gomez, Maria Jose, Teresa Bajo Molina, Presentacion Padilla Benitez, and Julio Santiago de Torres. 2007. Predicting proficiency in signed language interpreting. Interpreting 9 (1): 71–93. Manchón, L.M., and P. Orero. 2018. Usability tests for personalised subtitles. Translation Spaces 7 (2): 263–284. https://doi.org/10.1075/ts.18016.man. Mangiron, C. 2016. Reception of game subtitles: An empirical study. The Translator 22 (1): 72–93. https://doi.org/10.1080/13556509.2015.1110000. Marrone, S. 1993. Quality: A shared objective. The Interpreters’ Newsletter 5: 35–41. Mertler, C.A. 2000. Designing scoring rubrics for your classroom. Practical Assessment, Research, and Evaluation 7 (25): 1–10. https://doi.org/10.7275/gcy8-0w24. Mulayim, S. 2012. A study of interpreting accreditation testing formats in Australia. International Journal of Interpreter Education 4: 39–51. Ng, B. C. 1992. End users’ subjective reaction to the performance of student interpreters. The Interpreters’ Newsletter 35–41. Orlando, M. 2011. Evaluation of translations in the training of professional translators: At the crossroads between theoretical, professional and pedagogical practices. The Interpreter and Translator Trainer 5 (2): 293–308. https://doi.org/10.1080/13556509.2011.10798822. Orrego-Carmona, D. 2016. A reception study on non-professional subtitling: do audiences notice any difference? Across Languages and Cultures 17 (2): 163–181. https://doi.org/10.1556/084. 2016.17.2.2. Phelan, M. 2017. Analytical assessment of legal translation: A case study using the American Translators Association framework. The Journal of Specialised Translation 27: 189–210. Phongphanngam, P., and H.W. Lach. 2019. Cross-cultural instrument translation and adaptation: Challenges and strategies. Pacific Rim International Journal of Nursing Research 23 (2): 170–179. Pöchhacker, F. 2009. Testing aptitude for interpreting: The SynCloze test. 
Presentation at the Symposium on Aptitude for Interpreting: Towards Reliable Admission Testing. Antwerp, Belgium: Lessius University College.
Presas, M. 2012. Training translators in the European higher education area: A model for evaluating learning outcomes. The Interpreter and Translator Trainer 6 (2): 139–169. https://doi.org/10.1080/13556509.2012.10798834. Reithofer, K. 2013. Comparing modes of communication: The effect of English as a lingua franca vs. interpreting. Interpreting 15 (1): 48–73. https://doi.org/10.1075/intp.15.1.03rei. Russell, D., and K. Malcolm. 2009. Assessing ASL-English interpreters: The Canadian model of national certification. In Testing and assessment in translation and interpreting studies: A call for dialogue between research and practice, ed. C.V. Angelelli and H.E. Jacobson, 331–376. Amsterdam: John Benjamins Publishing Company. Russo, M. 2005. Simultaneous film interpreting and users’ feedback. Interpreting 7 (1): 1–26. Sawyer, D.B. 2004. Fundamental aspects of interpreter education: Curriculum and assessment. Amsterdam/Philadelphia: John Benjamins. Seal, Brenda C. 2004. Psychological testing of sign language interpreters. Journal of Deaf Studies & Deaf Education 9 (1): 39–52. https://doi.org/10.1093/deafed/enh010. Screen, B. 2019. What effect does post-editing have on the translation product from an end-user’s perspective? The Journal of Specialised Translation 31: 133–157. Séguinot, C. 1989. Understanding why translators make mistakes. TTR 2 (2): 73–102. Setton, R., and M. Motta. 2007. Syntacrobatics: Quality and reformulation in simultaneous-with-text. Interpreting 9 (2): 199–230. Stansfield, C.W., M.L. Scott, and D.M. Kenyon. 1992. The measurement of translation ability. The Modern Language Journal 76 (4): 455–467. https://doi.org/10.1111/j.1540-4781.1992.tb05393.x. Sun, S., and G.M. Shreve. 2014. Measuring translation difficulty: An empirical study. Target 26 (1): 98–127. https://doi.org/10.1075/target.26.1.04sun. Szarkowska, A., and O.
Gerber-Morón. 2019. Two or three lines: A mixed-methods study on subtitle processing and preferences. Perspectives 27 (1): 144–164. https://doi.org/10.1080/0907676X.2018.1520267. Timarová, S., and H. Ungoed-Thomas. 2008. Admission testing for interpreting courses. The Interpreter and Translator Trainer 2 (1): 29–46. https://doi.org/10.1080/1750399X.2008.10798765. Timarová, S., and H. Ungoed-Thomas. 2009. The predictive validity of admission tests for conference interpreting courses in Europe: A case study. In Testing and assessment in translation and interpreting studies: A call for dialogue between research and practice, ed. C.V. Angelelli and H.E. Jacobson, 225–246. Amsterdam: John Benjamins Publishing Company. Tyupa, S. 2011. A theoretical framework for back-translation as a quality assessment tool. New Voices in Translation Studies 7: 35–46. Vermeiren, H., J. Van Gucht, and L. De Bontridder. 2009. Standards as critical success factors in assessment: Certifying social interpreters in Flanders, Belgium. In Testing and assessment in translation and interpreting studies: A call for dialogue between research and practice, ed. C.V. Angelelli and H.E. Jacobson, 297–330. Amsterdam: John Benjamins Publishing Company. Waddington, C. 2001. Different methods of evaluating student translations: The question of validity. Meta: Translators’ Journal 46: 311–325. Weinstein, B.E., D. Rasheedy, H.M. Taha, and F.N. Fatouh. 2015. Cross-cultural adaptation of an Arabic version of the 10-item hearing handicap inventory. International Journal of Audiology 54: 341–346. Wu, Z. 2019. Text characteristics, perceived difficulty and task performance in sight translation: An exploratory study of university-level students. Interpreting 21 (2): 196–219. https://doi.org/10.1075/intp.00027.wu. Xiao, X., and F. Li. 2013. Sign language interpreting on Chinese TV: A survey on user perspectives. Perspectives: Studies in Translatology 21 (1): 100–116. https://doi.org/10.1080/0907676x.2011.632690.
Yu, W., and V.J. van Heuven. 2017. Predicting judged fluency of consecutive interpreting from acoustic measures: Potential for automatic assessment and pedagogic implications. Interpreting 19 (1): 47–68. https://doi.org/10.1075/intp.19.1.03yu. Zamanian, M., and P. Heydari. 2012. Readability of texts: State of the art. Theory and Practice in Language Studies 2 (1): 43–53. Zhang, W., and Z. Song. 2019. The effect of self-repair on judged quality of consecutive interpreting: Attending to content, form and delivery. International Journal of Interpreter Education 11 (1): 4–19. Zheng, B., and M. Xie. 2018. The effect of explanatory captions on the reception of foreign audiovisual products: A study drawing on eye-tracking data and retrospective interviews. Translation, Cognition & Behavior 1 (1): 119–146. https://doi.org/10.1075/tcb.00006.zhe.

Chapter 5

Translation/Interpreting Cognitive Process Research

Abstract In this chapter, the author discusses the issues related to researching translator and interpreter cognitive processes. The chapter starts with an overview of the methodological research orientations in translation/interpreting process studies. This is followed by a review of the main areas and issues dominant in the translation/interpreting process research published so far. First, the author highlights the studies adopting a macro approach to researching the translation process, and those adopting a micro approach to examining translator use of resources, translation revision, and translation problem-solving. Second, the author discusses two main trends in interpreting process studies: profiling interpreting strategies, and researching a particular interpreting strategy type (e.g., note-taking, self-repair, and anticipation). In discussing each research area, the author provides examples showing how researchers have collected and analyzed translator and interpreter cognitive process data.

Keywords Translation research · Interpreting research · Translation process · Interpreting strategies · Cognitive translatology · Think-aloud protocols · Eye-tracking · Keystroke logging · Retrospective interviews · Machine translation

5.1 Introduction: Methodological Approaches

Studying translation and interpreting cognitive or mental processes is perhaps the most complicated translator and interpreter education research area. This complexity stems from the nature of performing translation and interpreting tasks and the many cognitive operations used in them, the nature of translation and interpreting process data, and the difficulties involved in analyzing such data. That is why the word ‘complex’ recurs in researchers’ descriptions of translator or interpreter cognitive processes. Talking about the interpreting process, Pöchhacker (2004), for instance, states that ‘as a goal-directed activity, interpreting has been conceptualized as an essentially “strategic” process, particularly by researchers viewing it as a complex cognitive information-processing task or text-processing skill’ (p. 132). Likewise, Hurtado and Alves (2009) refer to the complex aspects of studying the translation process:

© Springer Nature Singapore Pte Ltd. 2020, M. M. M. Abdel Latif, Translator and Interpreter Education Research, New Frontiers in Translation Studies, https://doi.org/10.1007/978-981-15-8550-0_5

The analysis of the translation process entails a great deal of complexity. It is constrained by intrinsic difficulties inherent in studies which aim at tapping into any kind of cognitive processing: it is not amenable to direct observation. Furthermore, the difficulties related to the investigation of the translation process are magnified by the different phases through which the process unfolds and by the complexity of the interwoven abilities and forms of specialized knowledge which play an integral part in it. (p. 54)

When trying to identify translator or interpreter cognitive strategies/processes in a particular data set, researchers rely mainly on inferring these strategies from either the grounded theory approach or some existing frameworks. It is not surprising, therefore, to find many discrepancies in the literature profiling translator or interpreter cognitive strategies. Historically, interpreting process research preceded its translation counterpart: early translation process research appeared only in the second half of the 1980s (Gerloff 1988; Krings 1986), whereas the seminal interpreting process studies appeared in the late 1960s (Barik 1969; Kade 1968). Conversely, much more translation process research than interpreting process research seems to have been published in the past three decades. Both research types have been influenced by earlier research in closely related fields. With regard to interpreting process research, Pöchhacker (2004) points out that ‘most [interpreting] process-oriented research draws on insights and methods from the cognitive sciences and focuses on spoken-language conference interpreting in the simultaneous mode’ (p. 113). Likewise, O’Brien (2015) refers to the origins of cognitive translatology (i.e., the study of translator cognitive processes) and the way it has borrowed from other related research fields:

A broad sweep of the published research on cognitive translatology rapidly reveals that research has been influenced and inspired by a variety of disciplines, some of which are closely related to translation studies, others of which are more distant. Influence from disciplines such as linguistics, psychology, neuroscience, cognitive science, reading and writing research and language technology is clearly apparent. Within each of these disciplines, specific sub-disciplines have exercised particular influence. (p. 6)

Process research in both the translation and interpreting fields has important implications for translator and interpreter training. First, it can inform us about the difficulties encountered in translating or interpreting a text, and about the correlates of such difficulties. Second, it provides trainers with rich insights into the strategies or processes used by efficient translators and interpreters, so that they can tailor their training to help trainees acquire such effective strategies. According to Li (2013), the repeated successful implementation of cognitive processes ultimately leads to their automatic use, and this helps optimize the processing load while performing translation and interpreting tasks. Several data sources have been used in previous translation and interpreting process research. In translation process studies, the think-aloud method, retrospective interviews and questionnaires, computer-keystroke logging, eye-tracking, and computer screen digital recording have been used extensively. Translated text revision analysis, direct observation, and reflective essays have also been used in studying translator


processes, though only in very few studies. As for interpreting process research, it has depended on audio-recorded or audio- and video-recorded interpreting data, retrospective interviews (and, in rare cases, questionnaires), interpreter notes, and eye-tracking technologies. The think-aloud method has been one of the most commonly used data collection methods in translation process research since its beginning. When using this method, researchers ask participants to verbalize everything that comes to their minds while performing the translation task; these verbalizations are recorded, then transcribed and analyzed at a later stage. The data collected from translators’ concurrent verbalizations, together with the texts they translate, are known as think-aloud protocols. By analyzing these protocols, researchers infer the strategies translators use while performing the target task. For more information about the use of the think-aloud method in cognitive research in general, and in translation studies in particular, see Ericsson and Simon (1993), Bernardini (2001), Li (2004), Hansen (2006), and Abdel Latif (2019a). Computer-keystroke logging is another data source, used only in translation research. It involves observing and analyzing the online translation process by recording keyboard and mouse activity, i.e., keypresses, cursor movements, scrolling, the timing of each action, and the pauses between these actions. As will be seen in the following section, translation researchers analyze the logged keystroke data in various ways; they can also replay it when conducting retrospective interviews with participants to help them recall their thoughts about the processes they used in the translation task performed. Some translation studies have also used screen-recording software to capture computer screen activities during the translation task.
This screen recording is normally used to supplement some other data type or to stimulate translators’ thoughts in retrospective interview sessions. Retrospective questionnaires are sometimes used in translation process research. A translation process questionnaire is a retrospective data source that normally includes a number of items tapping the strategies used at the different stages of performing the translation task. In a few interpreting studies (e.g., Arumí Ribas 2012), retrospective questionnaires have been used as a data source on interpreter cognitive problems and strategies. In the vast majority of interpreting process studies, discourse analysis of the audio-recorded or audio- and video-recorded data has been the only research method. Prior to analyzing the interpreting data using the discourse analysis approach, researchers have to transcribe it. At the analysis stage, which may be labeled discourse-based mental modeling (Kalina 1998a), researchers read the data several times to infer the strategies used by interpreters. In some cases, this discourse analysis is guided by a framework of interpreting strategies identified either through a literature review or a previous data collection piloting stage. In other cases, researchers start from scratch in analyzing the data and depend on the grounded theory approach. As with the use of computer-keystroke logging data to stimulate translators’ retrospective accounts in translation process research, interpreting process researchers have also relied on the audio- or video-recorded data, or its transcript, in the retrospective interviews conducted with interpreters. As for collecting note-taking data,


this has been a popular technique in consecutive interpreting studies in particular. To examine interpreter note-taking strategies, some researchers have gathered interpreters’ written notes, while others have video-taped these notes or recorded them digitally (through tablet digital pen recording). In analyzing interpreter notes, researchers have used different approaches; for more details, see the related part in Sect. 5.3.2. Eye-tracking technologies and retrospective interviews are used in both translation and interpreting process research. Eye-tracking is a procedure for recording participants’ eye movements and fixations so as to gain insights into their cognitive processes when looking at particular content. In the past few years, eye-tracking has gained much ground in translation process studies; see the relevant reviews by Hvelplund (2017a) and Abdel Latif (2019b). In interpreting process research, eye-tracking has been used in only a few studies, limited to those focusing on interpreters’ visual processing, especially in sight translation/interpreting (e.g., Chmiel and Lijewska 2019; Su 2020). When using retrospective interviews in translation or interpreting process studies, participants are asked some questions to stimulate their retrospective accounts of the task performed. In retrospective interview sessions, researchers usually use some performance data (e.g., the text translated, audio-recorded interpreting data, computer-keystroke logs, or recorded computer screen activities) to help participants recall their translation or interpreting processes more effectively. Optimally, the questions in retrospective interviews should focus on the why of translator and interpreter strategies rather than the how.
According to Tomlinson (1984), the issues of immediacy and contextual specificity should be considered in retrospective interviews; this can be accomplished by conducting them immediately after participants complete the task, asking them task-specific questions, and using performance data to stimulate the discussion of strategies. Englund Dimitrova and Tiselius (2009, 2014) have discussed the use of retrospection in translation and interpreting research. According to them:

Retrospection refers to an interview that takes place immediately after the task and where the only cue is a transcript of the original speech/text. From this cue, the participant reports about everything s/he remembers from the process. It is thus not cued by any questions from the interview leader or by the participant’s own production. (Englund Dimitrova and Tiselius 2014, p. 180)

In their 2014 study, Englund Dimitrova and Tiselius combined retrospective interviews with the computer-keystroke logging data of the translated texts and the recorded interpreting performances. Their retrospective interviews with the participants focused on the problem indicators noted in the recorded translation and interpreting data. For the translation process data, these indicators were pausing within a sentence and revisions. For the interpreting process data, they included unfilled pauses, paralinguistic oral features, and speech disfluencies (e.g., repairs and false starts). Dong et al. (2019) also used retrospective interviews in their consecutive interpreting process study, in the following way:

In the retrospection, they [the participants] were required to report only what really happened during the CI [consecutive interpreting] test. Immediately after the retrospection, the interview session started. The interview was semi-structured, helping the participants recall more


details of their interpreting process, particularly their rationale for using a certain strategy. Before the retrospection and interview, participants were briefed on the purpose of this session and were informed that what they said would be recorded. During the retrospection and interview, participants were free to control the computer, being allowed to play (back) or stop records of their own interpreting output (together with the SL input) (p. 413)

All of the above data sources have their limitations. Data triangulation is therefore the optimal solution for overcoming these limitations and for providing a clearer picture of the translation or interpreting processes investigated. As will be noted in the studies highlighted below, data triangulation has been implemented much more commonly in translation process studies than in interpreting ones. In some cases, participants cannot be expected to provide researchers with particular translation process data. Professional translators, for instance, will likely refuse to take part in a think-aloud or eye-tracking process study. Nor may they accept installing keystroke logging software on their PCs. According to Ehrensberger-Dow (2014), ‘in the workplace, computer logging is more commonly known as spyware and has received a great deal of bad press. One challenge in including it in a workplace study is to convince the translators that the motivation to use it is to gain information about translation processes and not about them’ (p. 371). In such cases, using retrospective interviews or process questionnaires will be the only alternative. It will also be noted that the comparative approach is very dominant in these process studies: researchers compare, for example, professional versus trainee translators/interpreters, expert versus novice translators/interpreters, female versus male translators/interpreters, or bidirectional versus unidirectional translators/interpreters; or they compare participants’ performance when translating/interpreting into Language A versus Language B, or when doing translation tasks in different language pairs (e.g., German–Italian vs. Spanish–Italian).
In their process data analysis and presentation, translation and interpreting researchers have mostly relied on one of two approaches: (a) the quantitative approach, which presents process data in tables summarizing the frequencies of strategies and their types; and (b) the qualitative approach, which depends on including examples of participants’ strategies and commenting on them. A few process studies have combined the two approaches. It will also be noted that, compared to all other translator/interpreter education research types (with the exception of training effectiveness studies), the numbers of participants taking part in process studies are much smaller. As implied above, this is ascribed to the nature of the data collected and the complexity of its analysis. In many translation and interpreting process studies, researchers relate participants’ process data to particular aspects of their translated and interpreted texts (i.e., the product) to examine the potential relationship between the two variables. In what follows, exemplary translation and interpreting process studies are highlighted in two main sections. Through the review of these studies, readers will see the key process issues addressed and the research methodological approaches adopted.
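The quantitative presentation approach mentioned above, reporting frequency tables of strategy types, amounts to a simple tally over coded data. A minimal Python sketch, in which the strategy labels and coded segments are invented for illustration rather than drawn from any cited study:

```python
from collections import Counter

def strategy_table(coded_segments):
    """Tally coded strategy labels into (label, frequency, percent) rows,
    most frequent first, as in frequency-table presentations."""
    counts = Counter(coded_segments)
    total = sum(counts.values())
    return [(label, n, round(100 * n / total, 1))
            for label, n in counts.most_common()]

# Hypothetical codes assigned during protocol/discourse analysis
codes = ["paraphrase", "omission", "paraphrase", "anticipation"]
print(strategy_table(codes))
# [('paraphrase', 2, 50.0), ('omission', 1, 25.0), ('anticipation', 1, 25.0)]
```

The qualitative approach would instead quote and discuss the protocol segments behind each of these labels.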


5.2 Translation Process Studies

The studies researching the translation process try to explore what is involved in translating texts. As implied in the above section, the translation process is characterized by its complexity and by the interactive nature of its elements. The act of translating a text has been defined as a ‘complex cognitive process which has an interactive and non-linear nature, encompasses controlled and uncontrolled processes, and requires processes of problem-solving, decision-making and the use of strategies’ (Hurtado Albir 2001, p. 375). Based on their review of translation process research and models, Hurtado and Alves (2009) summarize the following characteristics of the translation process:

1. The existence of basic stages related to understanding and re-expression. …
2. The need to use and integrate internal (cognitive) and external resources. …
3. The role of memory and information storage.
4. The dynamic and interactive nature of the process, which encompasses linguistic as well as non-linguistic elements.
5. The non-linear nature of the process. …
6. The existence of automatic and non-automatic, controlled and uncontrolled processes.
7. The role of retrieval, problem-solving, decision-making and the use of translation-specific strategies in the unfolding and management of the process.
8. The existence of specific characteristics, depending on the type of translation. (pp. 62–63)

In studying these complex features of the translation process, researchers have followed one of two approaches: the macro approach, which explores the translation process as a whole, and the micro approach, which investigates a particular component of it. Within the latter type, three main research trends have emerged, represented in the studies investigating translator use of resources, their revisions, and their problem-solving processes. In what follows, the author provides an overview of all these research types.

5.2.1 Researching the Translation Process from a Macro Approach

The studies examining the translation process from a macro perspective have focused on it as a whole rather than looking only at a particular component of it. In other words, these studies come up with data profiles related to many components or stages of the translation process rather than just one of them. The studies reported by Dragsted and Carl (2013), Heeb (2016), and Pacte Group (2019) are representative of this translation process research. Dragsted and Carl (2013) combined keylogging and eye-tracking data to compare the translation processes of professional translators and graduate students (n = 12 in each group) while translating three texts of different complexity levels. Specifically, they collected data about professional and student translators’ source text reading during the drafting phase of the translation task, and about their looking-ahead reading (measured by eye-fixations on the source text word or phrase being translated) or


looking-back reading (measured by eye-fixations on text that had already been translated). The keystroke logging data helped Dragsted and Carl count the revisions the translators made at two stages: online revisions (changes made during the drafting stage) and end revisions (changes made after the drafting stage). At each of the two stages, they counted the number of deletion and editing changes made and the amount of time taken in doing these revisions. On the other hand, Heeb (2016) compared the translation processes of six bidirectional translators (those translating from their L1 into L2 and from L2 into L1) and 12 unidirectional translators (those translating from their L2 into L1). She combined keystroke logging data with eye-tracking, computer screen recording, and retrospective comments. Heeb conducted the retrospective translation process sessions as follows:

[T]he screen recording of the individual translation process, which had been enhanced by visualized eye-tracking data (i.e. fixations as orange dots and saccades as lines), was shown to the participants, who were asked to comment in the language of their choice. The participants were not prompted to comment on anything in particular, but were encouraged to talk freely about what came to their mind when watching the video. If they paused for more than about a minute, they were requested to continue. These cued retrospective verbalisations were recorded, transcribed …. and then analysed. (p. 80)

Heeb’s data analysis categories focused on: translators’ attention to the literal transfer of single words, phrases, and sentence structure; their concern with issues of target text quality (style, cohesion, and use in context) and loyalty to the source text; and their awareness of the audience. In a research project reported by Pacte Group (2019), data was collected on the translation process activities of 130 students in direct and inverse translation tasks, using computer screen recording and a questionnaire about translation problems. The Pacte Group research team tried to examine developments in the translation process of students attending different levels of a translation programme. In their data analysis, they focused on the time spent on each translation task type, the efficacy level of the translation process, and inter-level differences.
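Revision counts of the kind Dragsted and Carl derived from keystroke logs can be sketched as follows. The log format (timestamped insert/delete events) and the use of a single draft-completion timestamp to separate online from end revisions are simplifying assumptions for illustration, not the output of any actual logging tool:

```python
def count_revisions(events, draft_end_time):
    """Split deletion events into online vs. end revisions.

    `events` is a time-ordered list of (timestamp_s, event_type) pairs
    from a keystroke log. Deletions before `draft_end_time` (the moment
    the first full draft is completed) count as online revisions; later
    deletions count as end revisions.
    """
    online = end = 0
    for t, kind in events:
        if kind == "delete":
            if t < draft_end_time:
                online += 1
            else:
                end += 1
    return {"online_revisions": online, "end_revisions": end}

# Hypothetical log: timestamps in seconds
log = [(2.1, "insert"), (5.4, "delete"), (9.0, "insert"),
       (12.5, "delete"), (14.2, "delete")]
print(count_revisions(log, draft_end_time=10.0))
# {'online_revisions': 1, 'end_revisions': 2}
```

A fuller analysis in the spirit of Dragsted and Carl would also distinguish deletion from editing changes and sum the time spent on revisions at each stage.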

5.2.2 Researching Translator Use of Language and Information Resources

For many translators, using language and information resources is part and parcel of their translation process. Such use is now viewed as an ‘instrumental subcompetence’ of translation (Kuznik and Olalla-Soler 2018, p. 19). In the past, translators’ resources were available only in printed format. Nowadays, translators have at their disposal a variety of electronic and printed resources that they can use to facilitate their work and task performance. With the increasing availability of electronic translation-related resources, researchers have started to investigate the dynamics of translators’ use of language and information resources while performing their tasks.


A look at some pertinent studies conducted in the past decade can easily reveal the major methodological developments taking place in this research area. Many of the studies whose data was collected in the first half of the past decade depended on traditional research methods. For example, Hirci (2012) tried to explore how the availability versus unavailability of electronic resources impacts students’ translation difficulties. In Hirci’s study, 20 students translated two texts from Slovene into English. They were divided into two groups (n = 10 each) with different conditions of access to bilingual and monolingual printed and electronic resources (Internet search engines, websites, and online dictionaries) while performing the translation task. Hirci probed the students’ use of these resources through pre- and post-experiment questionnaires eliciting their task-specific use of resources and their views on the translation tools and information resources they generally use while translating. On the other hand, Fernández (2015) investigated the Web resources four medical translation students used while performing two translation tasks. The translation process data in this study was collected using the students’ think-aloud protocols, computer screen recordings, and retrospective questionnaires. Fernández analyzed the process data in terms of the frequency and duration of the students’ use of the Web resources during the task, the language of these resources (bilingual vs. monolingual), and the relationship of the students’ Web resource use with their translated texts and perceived task difficulty. In a longitudinal study, Kuznik and Olalla-Soler (2018) investigated the acquisition of resource use process skills by five groups of translation students (n = 130).
The indicators they used for measuring resource use process skills were: ‘number of resources, time taken on searches, time taken on searches at each stage, [and] number and variety of searches’ (p. 19). The data on these indicators were related to the students’ translation process and product quality features. In the more recently published studies, we can note novelty in both the topics investigated and the research methods used. For example, Bundgaard and Christensen (2019) investigated the resources used by seven translators in post-editing English–Danish machine-translated technical texts and the way they interacted with these resources. They depended on computer-keystroke logs in calculating the frequencies of the translators’ use of seven resources (concordance search, Termbase search, Google search, webpage search, online dictionaries, offline dictionaries, and reference texts). To examine the translators’ interaction with these sources, Bundgaard and Christensen identified the keystroke-logged segments of the translators’ resource consultations, and then retrospectively interviewed them about their consultation interactions. Based on their results, Bundgaard and Christensen conclude that ‘in some cases, concordance searches are instances of cognitive friction in the sense that they disrupt translators’ technology-aided cognitive processes…[but] it is difficult to determine when they are disrupted’ (p. 13). Hvelplund (2017b, 2019) has made significant efforts in researching translator use of resources. In his 2017b study, he employed eye-tracking and screen recording procedures to investigate the use of digital resources in both the drafting and revision stages of the translation process, and the types of digital resources used. He found that his 18 professional translator participants used the following five types of digital resources:


bilingual dictionaries, monolingual dictionaries, Web search engines, information references, and conversion tools. Hvelplund (2019) also investigated 18 professional translators’ use of digital resources in their translation tasks by collecting eye-tracking and screen recording data. He specifically focused on the translators’ attention and cognitive effort when resorting to digital resource consultation, and on their processing flow during the digital resources-based task. An interesting aspect of the data in Hvelplund’s study is the identification of the following patterns of translators’ use of digital resources: source text to digital resources to source text, source text to digital resources to target text, target text to digital resources to source text, and target text to digital resources to target text. Hvelplund concludes that:

Overall, the study confirms that the use of digital resources is a substantial part of the translation process which involves a variety of different subtasks. The findings concerning processing flow show that digital resources are introduced at several places in the translation process: to aid in the comprehension of source text as well as to support the reformulation process. These findings also indicate that the translation process is not a linear process as digital resources are introduced before source text reading. (p. 521)
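Eye-tracking analyses of attention and cognitive effort, such as Hvelplund’s, typically reduce raw fixation records to summary metrics like fixation counts and mean fixation durations per area of interest (e.g., source text, target text, or a digital resource window). A minimal sketch, assuming a hypothetical fixation record format rather than any particular eye-tracker’s output:

```python
from collections import defaultdict

def fixation_metrics(fixations):
    """Summarize fixation events per area of interest (AOI).

    `fixations` is a list of (aoi_label, duration_ms) pairs; returns
    {aoi: (fixation_count, mean_duration_ms)}.
    """
    durations = defaultdict(list)
    for aoi, dur in fixations:
        durations[aoi].append(dur)
    return {aoi: (len(ds), sum(ds) / len(ds)) for aoi, ds in durations.items()}

# Hypothetical fixations during a resource consultation episode
data = [("source_text", 240), ("source_text", 180), ("digital_resource", 300)]
print(fixation_metrics(data))
# {'source_text': (2, 210.0), 'digital_resource': (1, 300.0)}
```

Comparisons of attention across participants or task conditions would build on such per-AOI summaries, alongside measures like total gaze time.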

In another recent study, Sycz-Opoń (2019) used direct observation, think-aloud protocols, and computer screen recording to examine the information-seeking processes of trainee translators in legal translation. During the translation task, the students were asked to ‘explain their decisions, comment on the quality of consulted sources, express satisfaction or criticism towards the sources, etc.’ (p. 155). Meanwhile, their resource consultation behaviors were recorded using an observation guiding sheet. Analyzing the three data types, Sycz-Opoń presented the student translators’ online information-seeking process data according to the following categories: frequencies of lookups in various source categories; different types of lookups (printed vs. electronic sources, non-authored vs. authored sources, printed vs. electronic dictionaries, bilingual vs. monolingual dictionaries, specialized vs. general dictionaries, non-authored vs. authored dictionaries, and electronic dictionaries vs. other electronic sources); declared reasons for source preference; information sought in the source used; frequencies of particular lengths of searches; and level of satisfaction with lookup results.
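Lookup data of the kind Sycz-Opoń reports (frequencies of lookups per source category and lengths of searches) lends itself to a simple aggregation; the categories and record format below are illustrative assumptions only:

```python
from collections import defaultdict

def lookup_summary(lookups):
    """Aggregate consultation events by source category.

    `lookups` is a list of (source_category, duration_s) pairs; returns
    per-category lookup frequency and total time spent searching.
    """
    summary = defaultdict(lambda: {"count": 0, "total_s": 0.0})
    for category, duration in lookups:
        summary[category]["count"] += 1
        summary[category]["total_s"] += duration
    return dict(summary)

# Hypothetical consultation events extracted from a screen recording
events = [("bilingual_dictionary", 12.0),
          ("web_search", 30.5),
          ("bilingual_dictionary", 8.0)]
print(lookup_summary(events))
# {'bilingual_dictionary': {'count': 2, 'total_s': 20.0}, 'web_search': {'count': 1, 'total_s': 30.5}}
```

Cross-tabulating such counts against further attributes (e.g., bilingual vs. monolingual, printed vs. electronic) would reproduce the kind of lookup typology described above.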

5.2.3 Researching Translation Revision

A large number of studies have been concerned with researching the revision processes of translated texts. Translation revision studies have varied in their data sources: some used interviews or think-aloud protocols (Shih 2006, 2015, respectively) or keystroke logging only, while others combined think-aloud protocols with keystroke logging, or keystroke logging with eye-tracking. Since translation revision has become a more technology-mediated process, the growing methodological trend in the relevant studies is to combine keystroke logging with eye-tracking data (e.g., Schaeffer et al. 2019).


Robert and Brunette (2016) investigated the revision processes of 16 professional translation revisers. They collected their data using think-aloud protocols, short retrospective interviews, and revision product analysis. The data included in their research report covered the types of the revisers’ problem detection (e.g., vague, very vague) and problem diagnosis (intentional diagnosis, maxim-based diagnosis, and rule-based diagnosis). Schaeffer et al. (2019) compared the revising processes used by professional translators and translation students. They manipulated six previously translated texts by inserting some errors into them before assigning these texts to the participants for revision. The participants’ revision strategies were recorded using eye-tracking and keystroke logging. In their data analysis, Schaeffer and his colleagues focused specifically on counting the reading times on critical source text and target text items, and the errors corrected versus those not corrected. They conclude that:

In sum, we can say that professional translators are more efficient in error recognition and correction, because they prioritize their search for errors more adequately than students by reading relevant contextual or source material when it actually matters, but not when it does not. In other words, professional translators are more strategic in terms of cost/benefit in their reading behaviour than students. (p. 600)

A growing trend in translation process research is concerned with examining the strategies used in post-editing machine-translated texts. At the practical level, translators’ use of machine translation programmes has increased tremendously. That is why researching the cognitive processes of post-editing machine-translated texts can inform us about the types of difficulties translators encounter when performing these tasks. Nunes Vieira (2017) highlights the need for this type of translation process research as follows:

Post-editing of machine translation is gaining popularity as a solution to the ever-increasing demands placed on human translators. There has been a great deal of research in this area aimed at determining the feasibility of post-editing and at predicting post-editing effort based on source-text features and machine translation errors. However, considerably less is known about the mental workings of post-editing and post-editors’ decision-making or, in particular, the relationship between post-editing effort and different mental processes. (p. 79)

In the past decade in particular, there has been an increasing number of studies on this topic. Some studies have compared the post-editing processes of machine-translated texts versus human-translated ones (e.g., Daems et al. 2017; Jia et al. 2019). Other studies have compared professional versus student translators’ strategies in post-editing machine-translated texts, or compared the post-editing strategies applied to texts translated by different machine translation programmes. In these studies, keystroke logging, eye-tracking, and think-aloud protocol data have been popular, in that order. Overall, these studies provide us with interesting insights into researching the machine-translated text post-editing process and analyzing the data gathered. On the other hand, Bundgaard et al. (2016) examined eight professional translators’ interactions with translation memory and machine translation systems in the post-editing process of machine-translated technical texts. They analyzed the


translators’ interactions by analyzing the Accept, Revise, or Reject categories in the keystroke logging data. They found that:

[O]ut of the 76 TM [translation memory] and MT [machine translation] matches that were offered to the translator during the process…, the translator chose to reject none, to accept 32 and to revise 44. We were surprised that no MT matches were rejected as these, unlike TM matches, are machine-generated and therefore could be expected to be of poorer quality. However, the behaviour of the translator in the experiment indicates that s/he generally valued all kinds of matches and tried to re-use as much of the provided matches as possible. (p. 124)

Nunes Vieira (2017) provided a different framework for analyzing translators' post-editing processes based on the think-aloud data he collected. His participants' think-aloud verbalizations on a machine-translated text post-editing task were analyzed according to two main categories: (a) specific task foci: thought processes related to one of the linguistic aspects of the text (i.e., lexis, grammar/syntax, discourse, world knowledge, style, and orthography); and (b) non-specific task foci: thought processes not related to these linguistic aspects (reading and evaluation, and task procedures). Koglin and Cunha (2019) compared translation students' cognitive effort on two post-editing tasks: post-editing a Google machine-translated text, and post-editing a Systran machine-translated text. The two researchers collected eye-tracking data on the students' mean eye-fixation duration distribution over the machine-translated output while editing it, and supplemented this eye-tracking data with the students' recorded think-aloud verbalizations. They concluded that: Post-editing pure statistical MT [machine translation] output might be less effortful when conventional metaphors are analysed whereas hybrid MT system might provide some clues for making inferences in creative metaphors and, therefore, could have a positive effect on the post-editing effort. (p. 52)

de Lima Fonseca (2019) also used computer keystroke logging data to examine the efforts of a group of 43 novice, semi-professional, and professional translators in post-editing English–Brazilian Portuguese machine-translated texts. The data were analyzed according to the following indicators: [T]emporal effort (task execution time, text production time, total pause time, and pause count), technical effort (numbers of insertions, deletions, navigation and return keystrokes, copy/cut-and-paste keystrokes, and mouse operations, as well as the total number of keystrokes and mouse operations), and cognitive effort (average fixation duration, fixation count, total gaze time, average pupil size, and duration of the longest fixation). (p. 552)
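Effort indicators of this kind lend themselves to automated computation. The following minimal Python sketch, which uses an invented keystroke-log format and an invented pause threshold (not those of the study cited above), shows how temporal and technical effort measures can be derived from logged events:

```python
# Illustrative sketch only: the log format (timestamp_ms, event_type) and
# the pause cut-off are hypothetical, chosen for demonstration.

PAUSE_THRESHOLD_MS = 1000  # studies choose and justify their own cut-offs

def effort_indicators(events):
    """Compute temporal and technical effort measures from a keystroke log."""
    timestamps = [t for t, _ in events]
    task_time = timestamps[-1] - timestamps[0]
    # Inter-event gaps; gaps at or above the threshold count as pauses.
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    pauses = [g for g in gaps if g >= PAUSE_THRESHOLD_MS]
    # Technical effort: counts of each event type (insertions, deletions, ...).
    technical = {}
    for _, kind in events:
        technical[kind] = technical.get(kind, 0) + 1
    return {
        "task_time_ms": task_time,
        "total_pause_ms": sum(pauses),
        "pause_count": len(pauses),
        "keystrokes_by_type": technical,
    }

log = [(0, "insertion"), (120, "insertion"), (1500, "deletion"),
       (1650, "insertion"), (4000, "insertion")]
print(effort_indicators(log))
```

Cognitive effort indicators such as fixation duration would require eye-tracking data and fall outside this sketch.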

5.2.4 Researching Translation Process Problems

Previous studies have investigated translation process problems from two angles: translator pausing and translation problem-solving. In written text production research, pausing is regarded as an indicator of a particular cognitive action or
problem, and of changes in attentional foci (Schilperoord 1996). Therefore, studying translator pausing during task performance can provide insightful information about translators' cognitive processes. In studying translator pausing, pauses are 'defined by means of a predetermined cut-off point, and coded for [particular] temporal and spatial variables' (Kruger 2016, p. 25). The studies reported by Kruger (2016) and Martín and Cardona Guerra (2019) are representative of translation process pausing research. Kruger (2016) studied the pausing behaviors of eight student translators by using eye-tracking and keystroke logging data. In her analysis of the keystroke logging data, Kruger focused on pause duration and the syntactic location of pauses (i.e., sentence, clause, phrase, word, and word-medial boundaries). The eye-movement data was used to identify the translators' source and target text reading and their movement between the two texts. In a recent study, Martín and Cardona Guerra (2019) looked at two trainees' pausing in their translation processes. They collected keystroke logging data from these two student translators over a number of translation sessions. After analyzing their data quantitatively, Martín and Cardona Guerra reached the following results: We found no correlations between (1) short, (2) mid, and (3) long pauses, which might (mainly but not exclusively) correspond, respectively, to (1) physical and typing phenomena; (2) cognitive and metacognitive activities (e.g., monitoring); and (3) unregistered cognitive processes including reassessments of the allocation of cognitive resources and/or attention-drawing adjustments that would require subjects to recruit all cognitive resources devoted to typing. We also found most mid and long pauses to behave differently, in that most long pauses may be posited to break the action flow into TSs and also flag sudden changes of behavior, leading to broken words and online corrections. In contrast, mid pauses often separate words within a TS but also point to processing phenomena within words, that can be hypothesized to relate to the task. (p. 545)
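Pause-band analyses of this kind are straightforward to operationalize once a keystroke log is available. The sketch below uses invented cut-off values (published studies choose and justify their own thresholds) to bucket inter-keystroke intervals into short, mid, and long pauses:

```python
# Illustrative sketch only: thresholds are hypothetical, not taken from
# the studies cited above.

def classify_pauses(timestamps_ms, short_max=200, mid_max=3000):
    """Bucket inter-keystroke intervals into short, mid, and long pauses."""
    bands = {"short": 0, "mid": 0, "long": 0}
    for a, b in zip(timestamps_ms, timestamps_ms[1:]):
        gap = b - a
        if gap <= short_max:
            bands["short"] += 1      # physical/typing phenomena
        elif gap <= mid_max:
            bands["mid"] += 1        # cognitive/metacognitive activities
        else:
            bands["long"] += 1       # larger reassessments of resources
    return bands

print(classify_pauses([0, 150, 1200, 1300, 9000]))
```

A fuller analysis would additionally code each pause for its syntactic location (sentence, clause, phrase, word, or word-medial boundary), which requires aligning the log with the produced text.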

More recent attention has been given to researching translators' cognitive problem-solving. The dominant approach in these studies is to depend on verbalization and/or observational data (i.e., keystroke logging and eye-tracking). These data sources have been used, for instance, in investigating translators' use of monitoring and their decision-making processes (e.g., Araghian et al. 2018; Ferreira et al. 2018; Schaeffer et al. 2019). Some other studies have adopted a qualitative approach to researching translation problems. For example, Mellinger (2019) used trainees' reflective essays to investigate their medical translation process difficulties and awareness. Below is a brief description of the qualitative data collection procedure Mellinger followed: An associated corpus of open-ended student reflective essays on the translation process for this particular text was also compiled. These reflections include comments on any aspect of the translation process, and students were specifically asked to mention any problems or challenges that they encountered during the task and how they approached solving these problems….The translation tasks were assigned in class and students were given a week to complete the translation and to write their reflection on the translation task. Students were instructed to comment generally on their experience with the translation process and to indicate any problems or challenges that they faced. (pp. 609–610)


Mellinger analyzed his participant translators’ reflective essays qualitatively by focusing on the thematic units related to their translation task awareness, problem recognition, evaluation and solution, and metacognitive knowledge development.

5.3 Interpreting Process Research

As readers will note in the following subsections, interpreting process research has primarily been concerned with three fields: simultaneous interpreting, consecutive interpreting, and sight translation. As for other fields of interpreting (notably community interpreting), interpreter processes in these fields have been studied from a role perspective, i.e., the role played by the interpreter rather than their cognitive processes (see Chap. 7). The interpreting process studies published so far have dealt with interpreter cognitive processes using one of two approaches: the macro approach, which entails profiling the interpreting strategies used in a particular setting, and the micro approach, which focuses on one interpreting strategy or processing dimension. The studies representing these two approaches are highlighted in the next two subsections.

5.3.1 Profiling Interpreter Strategies

A number of research works have tried to profile the cognitive strategies used by interpreters. According to Pöchhacker (2004), many interpreting process studies have adopted van Dijk and Kintsch's (1983) discourse comprehension and production model in profiling interpreter strategies. Several definitions have been given of 'interpreting strategies', for example, the 'coping tactics' used by interpreters (Gile 2009, p. 191), the processing operations the interpreter uses at both the listening and production stages (Kalina 1994), the 'intentional and goal-oriented procedurals to solve problems resulting from the interpreters' processing capacity limitations or knowledge gap, or to facilitate the interpreter's task' (Li 2013, p. 106), and the processes 'used deliberately to prevent or solve potential problems in interpreting or to enhance interpreting performance' (Dong et al. 2019). Based on these definitions and the interpreting strategy frameworks proposed in the relevant literature, we can generally define interpreting strategies as the processes and actions interpreters use to communicate the source text message effectively or to cope with a problem while performing the task. For a comprehensive review of the terms and issues discussed in interpreting process research, see Pöchhacker (2004). In addition, some other works deal with the models and dynamics of interpreter cognitive processes (e.g., Ahrens 2017; Christoffels and De Groot 2005; Gile 1995, 1997, 2009; Jones 2008; Kalina 1994, 1998a; Moser-Mercer 2000).


The research concerned with profiling interpreter cognitive strategies and processes focuses on identifying all that is involved in performing a given interpreting task. It is therefore unsurprising that this research uses smaller verbal data sets than research addressing a single interpreting process dimension; in the latter research type, researchers count the instances of using an interpreting strategy, and therefore need a larger data set. Kalina (1998a) proposed a framework of simultaneous interpreting strategies. The framework categorizes these strategies into two main categories: (a) comprehension enhancing strategies: preparation, inference, anticipation, and chunking; and (b) target text production strategies: source text and target text conditioned strategies, and emergency, repair, and monitoring strategies. This strategy framework was adapted in the study reported by Liontou (2011) on German-to-Greek simultaneous interpreting. A similar framework was used by Donato (2003), who explored the strategies student interpreters use in English–Italian and German–Italian simultaneous interpreting (i.e., two different language pairs). Ten participants in this study performed an English–Italian simultaneous interpreting task and ten others performed a German–Italian one. Donato analyzed the students' recorded interpreting data using a strategy framework developed from previous relevant works (e.g., Gile 1995; Kalina 1998b; Kohn and Kalina 1996). Donato's simultaneous interpreting strategy framework includes the following three main categories: (a) comprehension strategies: stalling or buying time, anticipation, and time-lag; (b) reformulation strategies: morphosyntactic reformulation, synthesis by generalizing, simplifying or deleting, and expansion through addition, repetition, and paraphrasing; and (c) emergency strategies: transcoding, approximation, evasion, and substitution.
Bartłomiejczyk (2006) compared the strategies used by 36 students when interpreting simultaneously in two directions: from English (B) into Polish (A), and from Polish (A) into English (B). She depended on discourse analysis of the audio-recorded interpreting data and on the students' retrospective accounts. Based on her data analysis, Bartłomiejczyk developed a taxonomy of 21 simultaneous interpreting strategies. Shen et al. (2019) explored the strategies used by consecutive interpreters when encountering uncertainty problems in conference interpreting situations. In the consecutive conference interpreting corpus Shen et al. analyzed, interpreters were found to manage uncertainty problems using particular strategies. Below are the summarized findings of the study: (1) self-corrections, repetitions, and reformulations occur less frequently than pauses, indicating expert interpreter's better control of interpreting fluency; (2) speakers may impact interpreters' hesitation with segment length positively correlated with interpreters' pauses, self-correction, and reformulation, and speaking rate explains the variance in the occurrence of filled pauses; (3) pauses occur for retrieving lexical and morphological information, eliminating logical doubt, and explicating cultural connotation; (4) expert interpreters adopt addition and rank shift more than ellipsis, simplification, splitting, and repetition as uncertainty management strategies, showing an emphasis on adequacy, comprehensibility, and acceptability in their output. (p. 135)


Since most interpreting process studies depend on discourse-based mental modeling of interpreter strategies through analyzing recorded verbal data, they tell us much more about interpreter production strategies than about listening and comprehension ones. Arumí Ribas (2012) reported a study which fills this research gap and also provides an example of how to use retrospective questionnaire data in interpreting process studies. She examined consecutive interpreting strategies and difficulties from another angle, comparing the interpreting strategies and problems of students at two different stages of training by using a retrospective post-task questionnaire. The students were allowed to take notes about the topic prior to performing the interpreting task, and the experimentation procedures were as follows: Before playing back the speech to the students, the class instructor read out an introduction to the topic that they had been given, along with instructions on how to perform the exercise. The students completed the exercise individually, one after the other, without having the opportunity to listen to the versions of their classmates. The students took notes, audio recordings were made of their performances, and immediately afterwards they were given the post-interpreting questionnaire. (p. 817)

The post-interpreting questionnaire used in Arumí Ribas's study included questions about the students' task-specific difficulties and strategies. Below are some of these questions:

Did you encounter any difficulties understanding when listening to the speech? Please specify (Description. What do you think this was due to? What did you do about it? Do you think you found a satisfactory solution? Other comments)
What difficulties did you find when it came to taking notes?
What difficulties did you find when it came to reproducing your notes?
Did you encounter any difficulties when it came to expressing ideas in the target language?
Did you have problems concentrating?
Did your memory let you down at any point? (p. 835)

Combining the analysis of the student interpreters' answers to the retrospective questionnaire with their notes, Arumí Ribas (2012) provided a taxonomy of their consecutive interpreting difficulties in four main categories: (a) listening and understanding (e.g., lack of understanding of the source speech); (b) note-taking (e.g., information density); (c) decoding notes (e.g., inability to understand one's own notes, and memorization problems); and (d) expressing and reformulating (e.g., feeling nervous, and overusing connectors). Arumí Ribas also organized the students' interpreting strategies under the same four categories: (a) listening and understanding (e.g., generalizing, paraphrasing and summarizing); (b) note-taking (e.g., omitting and resorting to memory); (c) decoding notes (e.g., speeding up the reformulation and changing the order); and (d) expressing and reformulating (e.g., trying to calm down, and avoiding calques). As noted in this subsection, the majority of interpreting process studies have been concerned with either consecutive or simultaneous interpreting; very few studies have examined the cognitive strategies of public service interpreters. Using the discourse analysis approach in a later process study, Arumí Ribas (2017) investigated the strategies used by public service interpreters/mediators in a socio-educational context. She analyzed simulated interactions performed by ten professionals
(five Chinese–Spanish/Catalan interpreters and five Darija Arabic–Spanish/Catalan ones). Each professional interpreter performed three simulated interactions in educational settings. The audio-video-recorded simulated interaction data was also supplemented by retrospective interviews with the interpreters based on notes taken. Based on her analysis of the data, Arumí Ribas identified five types of strategies used by the interpreters in their interactions:

– Managing cultural references: the interpreter's tendency to give an expanded rendition to clarify a particular cultural reference.
– Managing interaction: raising questions to explain or confirm ambiguous information, simplifying the information, or directing decision-making.
– Power management: an example of this strategy is inviting an interlocutor to give further information about the target topic.
– Empathy and emotion management: the interpreter's empathy or concern for the interlocutor, indicated by using interjections, non-verbal language, or other related discourse markers.
– Using professional techniques: such as note-taking or re-structuring or re-arranging information.

Criticizing previous frameworks of interpreting strategies for neglecting novice interpreter strategies and for including overlapping ones, Dong et al. (2019) developed a list of 22 consecutive interpreting strategies through comprehensively reviewing the relevant literature, choosing appropriate strategy labels, and consulting expert interpreters and interpreting teachers.
Below is the list of these 22 strategies and their definitions: (1) Adaptation: adjusting word choices in TL output…; (2) Addition: adding words or clauses in the TL output in order to complement an SL message…; (3) Anticipation: anticipating upcoming SL information or expressions…; (4) Approximation: paraphrasing or using an approximate translation when the interpreter cannot access the ‘ideal’ translation in time …; (5) Compression: expressing succinctly and concisely in the TL by removing redundancy…; (6) Explicitation: making what is conveyed in the SL more explicit in the output of the TL …; (7) Guessing: inventing a speech segment … when failing to catch, comprehend, or recall the original SL message…; (8) Inferencing: reconstructing SL information according to context, background knowledge…; (9) Informing the client of an interpreting problem …; (10) Not repairing information unless it is critical; (11) Offering an alternative translation in a parallel structure…; (12) Personal association and involvement …; (13) Preparing: making pre-task preparation for an interpreting task…; (14) Reproduction: using SL expressions directly in the TL …; (15) Skipping: omitting a certain SL segment when…failing to find the proper translation …; (16) Stalling: buying time to recall SL messages, to read notes, or to look for a proper TL expression by slowing down the speech rate…; (17) Substituting: paraphrasing or repeating previous interpreting output instead of translating the current SL segment so as to avoid embarrassment …; (18) Taking advantage of cohesive and coherent devices in the SL…; (19) Transformation: departing from the word order, sentence structure or sentence order in the SL and expressing the meaning of the input with a different word order, sentence structure or sentence order in the output…; (20) Using formulaic expressions…; (21) Visualisation: generating mental pictures of the SL message in order to recall the SL information more efficiently…; (22) Word-for-word translation. (pp. 423–424)


Dong and her colleagues used this 22-strategy framework in coding the consecutive interpreting strategies in the transcripts of the target texts interpreted by their participants. They also used retrospective interviews, accompanied by the participants' recorded interpreting data, to identify the purposes of using the strategies.
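Once each transcript segment has been assigned a strategy code from such a framework, frequency profiles can be tallied automatically. The following minimal Python sketch uses invented segment codes for illustration (they are not data from Dong et al.'s study):

```python
# Illustrative sketch: tallying strategy codes in a coded transcript.
from collections import Counter

# Hypothetical coded segments: (segment text, assigned strategy code).
coded_segments = [
    ("segment 1", "approximation"),
    ("segment 2", "compression"),
    ("segment 3", "approximation"),
    ("segment 4", "stalling"),
]

def strategy_frequencies(coded):
    """Count how often each strategy code occurs across segments."""
    return Counter(code for _, code in coded)

print(strategy_frequencies(coded_segments))
```

In practice, the hard methodological work lies in segmenting the transcript and achieving inter-coder agreement on the codes; the tally itself is trivial once that is done.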

5.3.2 Researching a Particular Interpreting Strategy/Interpreting Processing Dimension

Another group of interpreting process studies has focused on researching a particular interpreting strategy or interpreting processing dimension. These studies aim at examining the target strategy or strategic dimension from an in-depth research angle. The strategies and dimensions investigated so far include, among others, interpreter note-taking, self-repair, compensation, anticipation, explicitation, and visual processing. Some interpreting process studies have focused on interpreter note-taking strategies (for a comprehensive review, see Chen 2016). Most, if not all, of these note-taking studies have been primarily concerned with consecutive interpreting. Note-taking plays a central role in Gile's (1995) effort model, which conceptualizes consecutive interpreting as a process composed of comprehension (or listening and note-taking), speech production (or reformulation), and memory efforts. Two early studies on interpreters' note-taking, neither published in English, were conducted by Seleskovitch (1975) and Kirchhoff (1979). Seleskovitch (1975) looked at the notes taken by a group of professional interpreters and the types of these notes. Kirchhoff (1979) examined the linguistic structures of interpreters' notes. Several data sources have been used in interpreter note-taking research, including written notes, video recordings of the note-taking process (e.g., Andres 2002), and tablet digital pen recordings (e.g., Chen 2017). Dam (2004, 2007) also made noticeable efforts in researching interpreters' note-taking. In her 2004 study, Dam collected eight note-taking data sets (a total of 119 note units) from four MA student interpreters who performed two Spanish–Danish consecutive interpreting tasks.
She analyzed the note-taking data according to the choice of note-taking language (source vs. target language, and A- vs. B-language). In other studies making use of video recordings of interpreters' note-taking behaviors and content, the data was analyzed in terms of the length (i.e., number of words), duration, and speed (words per minute) of the notes taken (Andres 2002; Abuín González 2012). In the Iranian context, Morani and Tabrizi (2017) examined the notes taken by five professional consecutive interpreters by collecting their textual notes. In analyzing the textual notes collected, Morani and Tabrizi used the following categories: choice of words (letters vs. symbols, and full words vs. abbreviations) and choice of language (source vs. target language).


Some of these reviewed studies also examined the relationship between the types of the notes taken and the interpreters’ performance. Chen (2017) approached interpreter note-taking from a novel and more comprehensive angle. She collected note-taking data from five professional interpreters in Australia through recording their digital pen activities on a tablet while performing two simulated Chinese–English/English–Chinese interpreting tasks. She also supplemented these digitally recorded notes with video-recording of the interpreting performances which were rated later, and with retrospective interviews with the five interpreters. The retrospective interviews were conducted as follows: Immediately after the tasks, the participants were provided with their notes for cued retrospection. They were asked to provide as much information as they could remember about the note-taking process, including but not limited to: what each note unit was; what it stood for; whether it was symbol or language, and if language, whether it was abbreviation or full word, Chinese or English. (p. 9)

Chen analyzed the interpreters' notes according to the following units: symbol, number, and language (Chinese vs. English, and full word vs. abbreviation). She also analyzed the digital pen stroke movement data according to the distance, duration, and speed of note-taking. Chen used the interview data to explain the reasons behind the five interpreters' note-taking choices, and also related the interpreters' note-taking features to their rated interpreting performance. Chen's study reached the following conclusions: [F]irstly, interpreters preferred language to symbol, abbreviation to full word, and English to Chinese, regardless of the direction of interpreting. Secondly, the interpreting performance seemed to be subject to variances in both the quality and quantity of notes. Thirdly, the physical and temporal demands of different note-taking choices, as indicated by the pen data of distance and duration, appeared to be: language higher than symbol, full word higher than abbreviation, and Chinese similar to English. Fourthly, the cognitive load induced by different note-taking choices, as indicated by the ear-pen span, appeared to be: symbol higher than language, full word higher than abbreviation, and Chinese higher than English. (p. 20)
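Measures such as note-taking speed are simple to compute once the pen data has been segmented into timed note units. The sketch below assumes an invented (start, end, word count) unit format, not the actual format of Chen's pen data:

```python
# Illustrative sketch: computing note-taking speed (words per minute)
# from hypothetical timed note units.

def note_taking_speed(note_units):
    """note_units: list of (start_s, end_s, n_words) tuples.

    Returns speed in words per minute over the time spent writing.
    """
    total_words = sum(n for _, _, n in note_units)
    total_time_s = sum(end - start for start, end, _ in note_units)
    return total_words / (total_time_s / 60)

units = [(0.0, 5.0, 3), (6.0, 11.0, 4), (12.0, 17.0, 3)]
print(round(note_taking_speed(units), 1))
```

Analogous functions could aggregate stroke distance or the ear-pen span (the lag between hearing a segment and noting it), given suitably aligned audio and pen timestamps.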

Self-repair is an interpreting strategy that indicates the interpreter has evaluated their output or interpreted utterances as failing to convey the intended communicative meaning. In other words, this strategy reflects the interpreter's self-monitoring of their interpreting output. Petite (2005) adopted a qualitative approach to analyzing interpreter self-repair in an English–French/German interpreting corpus. In her research report, she provided eight examples and a limited quantitative analysis of the data. Petite concludes that: [I]nterpreters not only repair errors, but take time to attend to their outputs for different reasons. The various dimensions of repair mechanisms … give us some insights into the interpreter's mind at work, or the interpreter's deployment of processing capacities and decision-making processes. (p. 27)

Recently, Magnifico and Defrancq (2018) examined gender-related differences in simultaneous interpreters' use of self-repair. They identified the interpreters' self-repair uses in a number of French–English/Dutch interpreted speeches and compared female versus male interpreters' use of them. In another recent study, Wenwen and
Yang (2019) investigated the self-repairs used by eight postgraduate consecutive interpreting students. Based on their analysis of the verbal data, they found that the student interpreters used the following types of self-repairs: same or different information repair (through repetition, deletion or addition), error repair (of a vocabulary or phonetic error), appropriateness repair, and failure. Wenwen and Yang also attributed their participants' use of self-repairs to the following causes: bilingual problems, skills problems, and other reasons (e.g., fluctuating mood or anxiety). Van Besien and Meuleman (2004) looked at simultaneous interpreters' use of repair from another angle by examining the strategies they adopted in dealing with speakers' errors and repairs. They analyzed speeches interpreted from Dutch into English by two professional interpreters. They found that the 'interpreters corrected the speakers' unrepaired errors and translated the speaker's repairs without translating the original utterance' (p. 59). A few studies have also probed interpreters' use of compensatory strategies. Færch and Kasper (1980) define compensatory strategies as 'potentially conscious plans for solving what to an individual presents itself as a problem in reaching a particular communicative goal' (p. 92). Al-Khanji, El-Shiyab, and Hussein (2000) examined the compensation strategies used by four interpreters in English–Arabic simultaneous interpreting. They located a total of 234 instances in four hours of recorded interpreting data. Al-Khanji and his colleagues found that the four interpreters used the following compensatory strategies: skipping, approximation, filtering, omission, and substitution. Tang and Li (2016) compared the explicitation patterns used by professional and trainee consecutive interpreters.
Vinay and Darbelnet (1985) define explicitation as 'a stylistic translation technique which consists of making explicit in the target language what remains implicit in the source language because it is apparent from either the context or the situation' (p. 342). In the context of interpreting, Tang and Li define explicitation as 'additions, being either quantitative or qualitative, made by the interpreter when s/he provides additional information which can be inferred from the context (the co-text, situation and culture)' (p. 236). Tang and Li recorded the performances of 12 professional interpreters and 12 trainee interpreters while interpreting a 7-min speech excerpt from English into Chinese. The audio-recorded interpreting data were transcribed and analyzed by identifying the explicitation patterns in the target interpreted texts and comparing them to the source speech. Tang and Li identified the types of explicitation in the interpreting data, and found that the interpreters used each for one of these strategic purposes: '(1) stalling, delaying delivery and gaining extra processing time; (2) gap-filling, filling in the gap resulting from information loss; (3) clarifying, explaining the original message to minimize listeners' comprehension efforts; (4) appraising, highlighting the speaker's attitude' (p. 245). Fu and Chen (2019) also examined the use of explicitation in both consecutive and simultaneous interpreting modes. They depended on the qualitative analysis of explicitation examples used by government press conference interpreters.


In a simultaneous interpreting study, Van Besien (1999) looked at the use of anticipation, which he defines as the 'interpreter's production of a constituent in the target language before the speaker has uttered the corresponding constituent in the source language. It is the result of hypothesizing on the content of the speaker's utterance before it has been finished' (p. 50). He analyzed anticipation instances in two German–French simultaneous interpreters' performances, and found 78 instances in the 55-min interpreting data set, i.e., anticipation occurred every 85 s. Van Besien classified the anticipation instances according to six constituent types (verb, adverb phrase, noun phrase, conjunction, pronoun, and structural anticipation). He concludes that: Anticipation can be considered as an important strategy in simultaneous interpretation. … In most cases a verb was anticipated. This suggests that anticipation is a language-specific phenomenon…. Extralinguistic information like general and situational knowledge, and information obtained in the course of translation, seems to play the most important part in the interpreter's hypothesizing of the speaker's utterances. Purely linguistic knowledge plays only a minor part. (pp. 257–258)

Another interpreting process area that has received recent attention is interpreter visual processing, i.e., interpreters' processing of the visual elements related to the interpreting task. These studies have depended on eye-tracking to infer interpreters' processes from their eye movements. For example, Stachowiak-Szymczak and Korpal (2019) probed simultaneous interpreters' visual processes by using eye-tracking technology. In their study, 26 professional and 22 trainee interpreters performed a simultaneous interpreting task from an auditory input accompanied by a PowerPoint presentation reflecting its content. Eye-tracking has also been used in some sight translation studies to examine interpreters' reading activities. Chmiel and Lijewska (2019) used eye-tracking to examine the differences between 24 professional and 15 trainee sight translation interpreters in syntactic processing. Specifically, they focused on how sight translation interpreters process sentences with subject-relative clauses and object-relative clauses. Chmiel and Lijewska (2019) found that: [T]rainees took longer to achieve similar translation accuracy as professionals and viewed the source text less than professionals to avoid interference, especially when reading more difficult object-relative sentences. Syntactic manipulation modulated translation and viewing times: participants took longer to translate object-relative sentences but viewed them less in order to avoid interference in target language reformulations. (p. 378)
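Viewing-time measures of the kind reported in these studies are typically computed by aggregating fixations over areas of interest (AOIs), such as the source text or an accompanying slide. The sketch below uses an invented fixation format for illustration, not the format of any eye-tracker mentioned above:

```python
# Illustrative sketch: summing fixation durations per area of interest.

def dwell_times(fixations):
    """fixations: list of (area_of_interest, duration_ms) tuples.

    Returns total dwell time per AOI in milliseconds.
    """
    totals = {}
    for aoi, dur in fixations:
        totals[aoi] = totals.get(aoi, 0) + dur
    return totals

fixations = [("source_text", 220), ("slide", 180),
             ("source_text", 310), ("slide", 90)]
print(dwell_times(fixations))
```

Comparisons across groups (e.g., professionals vs. trainees) then reduce to comparing such per-AOI totals, usually normalized by task or text length.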

Recently, Su (2020) also conducted an eye-tracking study on the processes used by novice and professional sight translation interpreters. She examined her participants’ problem-solving strategies, textual processing patterns, and reading-speech coordination processes. Some other strategic dimensions have been explored in interpreting process studies. For example, Meuleman and Van Besien (2009) explored how professional simultaneous interpreters cope with syntactically complex sentences and with speeches delivered at a high rate. They found that interpreters resort to tailing and segmentation strategies to overcome such difficulties. Other training-oriented interpreting process issues include: interpreter use of reformulation strategies


(Gran 1998), and the relationship of conscious monitoring modalities with interpreter performance as measured by errors (Darò et al. 1996). It is worth mentioning that many interpreting process studies are psycholinguistically oriented; in other words, they have few direct implications for interpreter training. These studies deal, for instance, with issues such as the role of working memory in interpreting or cognitive load (for a review of the studies examining the relationship between working memory and simultaneous interpreting, see Mellinger and Hanson 2019).

5.4 Conclusion

In this chapter, the author has discussed the methodological approaches used in translation and interpreting process research and provided examples of studies representing the issues researched in each field. Due to the complexities of researching these issues, particular attention has been given here to providing a large number of examples showing how researchers have collected and analyzed their data. Table 5.1 summarizes the research areas and issues addressed so far in translation and interpreting process studies. As noted above, different data sources provide us with different information about translator and interpreter strategies. This is particularly evident in translation studies, and it needs to be considered when comparing research findings. Though a number of important issues have been researched in the translation and interpreting process studies published so far, this research area is still evolving. O’Brien (2015) has stated that ‘there is little doubt that the domain of cognitive translatology has matured over the last few years, but it is arguably still in its infancy’ (p. 12). In future translation process research, data triangulation needs to be more effectively implemented, and the issue of modeling translators’ cognitive processes still needs much research attention. In the interpreting process research area, there is a need for research examining interpreter strategy use from a micro perspective equally at both the reception and production levels. More developed process research in both fields will contribute to disclosing what is involved in the acts of translation and interpreting, and this in turn will help in tailoring training to trainees’ translation and interpreting processing needs.

Table 5.1 Overview of the research areas and issues in translation and interpreting process studies

Research area: Translation process
Main issues researched so far:
• The whole translation process
• Use of resources while translating a text
• Translation revision
• Translation problem-solving (e.g., pausing and monitoring)

Research area: Interpreting process
Main issues researched so far:
• Profiling interpreting strategies
• Researching a particular interpreting strategy type (e.g., note-taking, self-repair, explicitation, and anticipation)


5 Translation/Interpreting Cognitive Process Research

References

Abdel Latif, M.M.M. 2019a. Using think-aloud protocols and interviews in investigating writers’ composing processes: Combining concurrent and retrospective data. International Journal of Research & Method in Education 42 (2): 111–123. https://doi.org/10.1080/1743727X.2018.1439003.
Abdel Latif, M.M.M. 2019b. Eye-tracking in recent L2 learner process research: A review of areas, issues, and methodological approaches. System 83: 25–35. https://doi.org/10.1016/j.system.2019.02.008.
Abuín González, M. 2012. The language of consecutive interpreters’ notes: Differences across levels of expertise. Interpreting 14 (1): 55–72. https://doi.org/10.1075/intp.14.1.03abu.
Ahrens, B. 2017. Interpretation and cognition. In The handbook of translation and cognition, ed. J.W. Schwieter and A. Ferreira, 440–460. New York: Wiley.
Al-Khanji, R., S. El-Shiyab, and R. Hussein. 2000. On the use of compensatory strategies in simultaneous interpretation. Meta 45 (3): 548–557. https://doi.org/10.7202/001873.
Andres, D. 2002. Konsekutivdolmetschen und Notation. Frankfurt: Peter Lang.
Araghian, R., B. Ghonsooly, and A. Ghanizadeh. 2018. Investigating problem-solving strategies of translation trainees with high and low levels of self-efficacy. Translation, Cognition & Behavior 1 (1): 74–97. https://doi.org/10.1075/tcb.00004.ara.
Arumí Ribas, M. 2012. Problems and strategies in consecutive interpreting: A pilot study at two different stages of interpreter training. Meta 57 (3): 812–835. https://doi.org/10.7202/1017092ar.
Arumí Ribas, M. 2017. The fuzzy boundary between the roles of interpreter and mediator in the public services in Catalonia: Analysis of interviews and interpreter-mediated interactions in the health and educational context. Across Languages and Cultures 18 (2): 195–218. https://doi.org/10.1556/084.2017.18.2.2.
Barik, H.C. 1969. A study of simultaneous interpretation. Unpublished doctoral dissertation, University of North Carolina.
Bartłomiejczyk, M. 2006. Strategies of simultaneous interpreting and directionality. Interpreting 8 (2): 149–174. https://doi.org/10.1075/intp.8.2.03bar.
Bernardini, S. 2001. Think-aloud protocols in translation research. Target 13 (2): 241–263.
Bundgaard, K., and T.P. Christensen. 2019. Is the concordance feature the new black? A workplace study of translators’ interaction with translation resources while post-editing TM and MT matches. The Journal of Specialised Translation 31: 13–37.
Bundgaard, K., T.P. Christensen, and A. Schjoldager. 2016. Translator-computer interaction in action: An observational process study of computer-aided translation. The Journal of Specialised Translation 25: 106–130.
Chen, S. 2016. Note-taking in consecutive interpreting: A review with special focus on Chinese and English literature. The Journal of Specialised Translation 26: 151–171.
Chen, S. 2017. Note-taking in consecutive interpreting: New data from pen recording. The International Journal for Translation & Interpreting Research 9 (1): 4–23. https://doi.org/10.12807/ti.109201.2017.a02.
Chmiel, A., and A. Lijewska. 2019. Syntactic processing in sight translation by professional and trainee interpreters: Professionals are more time-efficient while trainees view the source text less. Target 31 (3): 378–397. https://doi.org/10.1075/target.18091.chm.
Christoffels, I.K., and A.M.B. De Groot. 2005. Simultaneous interpreting: A cognitive perspective. In Handbook of bilingualism: Psycholinguistic approaches, ed. J.F. Kroll and A.M.B. De Groot, 454–479. New York: Oxford University Press.
Daems, J., S. Vandepitte, R.J. Hartsuiker, and L. Macken. 2017. Translation methods and experience: A comparative analysis of human translation and post-editing with students and professional translators. Meta 62 (2): 245–270. https://doi.org/10.7202/1041023ar.
Dam, H.V. 2004. Interpreters’ notes: On the choice of language. Interpreting 6 (1): 3–17.


Dam, H.V. 2007. What makes interpreters’ notes efficient? Features of (non-)efficiency in interpreters’ notes for consecutive. In Doubts and directions in translation studies: Selected contributions from the EST Congress, Lisbon 2004, ed. Y. Gambier, M. Shlesinger, and R. Stolze, 183–197. Amsterdam and Philadelphia: John Benjamins.
Darò, V., S. Lambert, and F. Fabbro. 1996. Conscious monitoring of attention during simultaneous interpretation. Interpreting 1: 101–124.
de Lima Fonseca, N.B. 2019. Analysing the impact of TAPs on temporal, technical and cognitive effort in monolingual post-editing. Perspectives 27 (4): 552–588. https://doi.org/10.1080/0907676x.2019.1597909.
Donato, V. 2003. Strategies adopted by student interpreters in SI: A comparison between the English-Italian and the German-Italian language-pairs. The Interpreters’ Newsletter 12: 101–134.
Dong, Y., Y. Li, and N. Zhao. 2019. Acquisition of interpreting strategies by student interpreters. The Interpreter and Translator Trainer 13 (4): 408–425. https://doi.org/10.1080/1750399X.2019.1617653.
Dragsted, B., and M. Carl. 2013. Towards a classification of translation styles based on eye-tracking and keylogging data. Journal of Writing Research 5 (1): 133–158. https://doi.org/10.17239/jowr-2013.05.01.6.
Ehrensberger-Dow, M. 2014. Challenges of translation process research at the workplace. MonTI 355–383. https://doi.org/10.6035/MonTI.2014.ne1.12.
Englund Dimitrova, B., and E. Tiselius. 2009. Exploring retrospection as a research method for studying the translation process and the interpreting process. In Methodology, technology and innovation in translation process research, ed. F. Alves, S. Göpferich, and I.M. Mees, 109–134. Copenhagen: Samfundslitteratur.
Englund Dimitrova, B., and E. Tiselius. 2014. Retrospection in interpreting and translation: Explaining the process? MonTI 177–200.
Ericsson, K.A., and H.A. Simon. 1993. Protocol analysis: Verbal reports as data. Cambridge, MA: MIT Press.
Færch, C., and G. Kasper. 1980. Processes and strategies in foreign language learning. Interlanguage Studies Bulletin Utrecht 5: 47–118.
Fernández, O. 2015. Exploratory research into the use of web resources of students enrolled in an introductory university-level medical translation course. MA thesis, Arizona State University, USA.
Ferreira, A., A. Gottardo, and J.W. Schwieter. 2018. Decision-making processes in direct and inverse translation through retrospective protocols. Translation, Cognition & Behavior 1 (1): 98–118. https://doi.org/10.1075/tcb.00005.fer.
Fu, R., and J. Chen. 2019. Negotiating interpersonal relations in Chinese-English diplomatic interpreting: Explicitation of modality as a case in point. Interpreting 21 (1): 12–35. https://doi.org/10.1075/intp.00018.fu.
Gerloff, P. 1988. From French to English: A look at the translation process in students, bilinguals, and professional translators. Unpublished doctoral dissertation, Harvard University.
Gile, D. 1995. Concepts and models for translator and interpreter training. Amsterdam: John Benjamins.
Gile, D. 1997. Conference interpreting as a cognitive management problem. In Cognitive processes in translation and interpreting, ed. J.H. Danks, G.M. Shreve, S.B. Fountain, and M. McBeath, 196–214. Thousand Oaks/London/New Delhi: Sage Publications.
Gile, D. 2009. Basic concepts and models in interpreter and translator training. Amsterdam: John Benjamins.
Gran, L. 1998. Developing translation/interpretation strategies and creativity. In Translators’ strategies and creativity, ed. A. Beylard-Ozeroff, J. Králová, and B. Moser-Mercer, 145–162. Amsterdam/Philadelphia: John Benjamins.
Hansen, G. 2006. Retrospection methods in translator training and translation research. The Journal of Specialised Translation 5 (1): 2–41.


Heeb, A.H. 2016. Professional translators’ self-concepts and directionality: Indications from translation process research. The Journal of Specialised Translation 25: 74–88.
Hirci, N. 2012. Electronic reference resources for translators: Implications for productivity and translation quality. The Interpreter and Translator Trainer 6 (2): 219–236. https://doi.org/10.1080/13556509.2012.10798837.
Hurtado Albir, A. 2001. Traducción y traductología. Introducción a la traductología. Madrid: Catedra.
Hurtado Albir, A., and F. Alves. 2009. Translation as a cognitive activity. In The Routledge companion to translation studies, ed. J. Munday, 54–73. New York: Routledge.
Hvelplund, K.T. 2017a. Eye tracking in translation process research. In The handbook of translation and cognition, ed. J.W. Schwieter and A. Ferreira, 248–264. New York: Wiley.
Hvelplund, K.T. 2017b. Translators’ use of digital resources during translation. Hermes: Journal of Language and Communication in Business 56: 71–87.
Hvelplund, K.T. 2019. Digital resources in the translation process-attention, cognitive effort and processing flow. Perspectives 27 (4): 510–524. https://doi.org/10.1080/0907676X.2019.1575883.
Jia, Y., M. Carl, and X. Wang. 2019. How does the post-editing of neural machine translation compare with from-scratch translation? A product and process study. The Journal of Specialised Translation 31.
Jones, R. 2008. Conference interpreting explained. London and New York/Shanghai: Routledge/Shanghai Waiyu Jiaoyu Chubanshe.
Kade, O. 1968. Zufall und Gesetzmässigkeit in der Übersetzung. Leipzig.
Kalina, S. 1994. Analyzing interpreters’ performance: Methods and problems. In Translation studies: An interdiscipline, vol. 2, ed. M. Snell-Hornby, F. Pöchhacker, and K. Kaindl. Amsterdam/Philadelphia: John Benjamins.
Kalina, S. 1998a. Strategische Prozesse beim Dolmetschen [Strategic processes in interpreting]. Tübingen: Gunter Narr.
Kalina, S. 1998b. Strategische Prozesse beim Dolmetschen: Theoretische Grundlagen, empirische Fallstudien, didaktische Konsequenzen. Tübingen: Gunter Narr.
Kirchhoff, H. 1979. Die Notationssprache als Hilfsmittel des Konferenzdolmetschers im Konsekutivvorgang. In Sprachtheorie und Sprachpraxis, ed. W. Mair and E. Sallager, 121–133. Tübingen: Gunter Narr.
Koglin, A., and R. Cunha. 2019. Investigating the post-editing effort associated with machine-translated metaphors: A process-driven analysis. The Journal of Specialised Translation 31: 38–59.
Kohn, K., and S. Kalina. 1996. The strategic dimension of interpreting. Meta 41 (1): 118–138. https://doi.org/10.7202/003333ar.
Krings, H.P. 1986. Was in den Köpfen von Übersetzern vorgeht. Eine empirische Untersuchung der Struktur des Übersetzungsprozesses an fortgeschrittenen Französischlernern. Tübingen: Gunter Narr.
Kruger, H. 2016. What’s happening when nothing’s happening? Combining eyetracking and keylogging to explore cognitive processing during pauses in translation production. Across Languages and Cultures 17 (1): 25–52. https://doi.org/10.1556/084.2016.17.1.2.
Kuznik, A., and C. Olalla-Soler. 2018. Results of PACTE Group’s experimental research on translation competence acquisition. The acquisition of the instrumental sub-competence. Across Languages and Cultures 19 (1): 19–51. https://doi.org/10.1556/084.2018.19.1.2.
Li, D. 2004. Trustworthiness of think-aloud protocols in the study of translation processes. International Journal of Applied Linguistics 14 (3): 301–313.
Li, X. 2013. Are interpreting strategies teachable? Correlating trainees’ strategy use with trainers’ training in the consecutive interpreting classroom. The Interpreters’ Newsletter 18: 105–128.
Liontou, K. 2011. Strategies in German-to-Greek simultaneous interpreting: A corpus-based approach. Gramma 19: 37–56.
Magnifico, C., and B. Defrancq. 2018. Self-repair as a norm-related strategy in simultaneous interpreting and its implications for gendered approaches to interpreting. Target 31 (3): 352–377. https://doi.org/10.1075/target.18076.mag.


Martín, R.M., and J.M. Cardona Guerra. 2019. Translating in fits and starts: Pause thresholds and roles in the research of translation processes. Perspectives 27 (4): 525–551. https://doi.org/10.1080/0907676X.2018.1531897.
Mellinger, C.D. 2019. Metacognition and self-assessment in specialized translation education: Task awareness and metacognitive bundling. Perspectives 27 (4): 604–621. https://doi.org/10.1080/0907676X.2019.1566390.
Mellinger, C.D., and T.A. Hanson. 2019. Meta-analyses of simultaneous interpreting and working memory. Interpreting 21 (2): 165–195. https://doi.org/10.1075/intp.00026.mel.
Meuleman, C., and F. Van Besien. 2009. Coping with extreme speech conditions in simultaneous interpreting. Interpreting 11 (1): 20–34. https://doi.org/10.1075/intp.11.1.03meu.
Morani, R., and H.H. Tabrizi. 2017. Professional interpreters’ notes in Persian-English consecutive interpreting on the choice of form and language. Research in Language Pedagogy 5 (2): 133–146.
Moser-Mercer, B. 2000. Simultaneous interpreting: Cognitive potential and limitations. Interpreting 5 (2): 83–94.
Nunes Vieira, L. 2017. Cognitive effort and different task foci in post-editing of machine translation: A think-aloud study. Across Languages and Cultures 18 (1): 79–105. https://doi.org/10.1556/084.2017.18.1.4.
O’Brien, S. 2015. The borrowers: Researching the cognitive aspects of translation. In Interdisciplinarity in translation and interpreting process research, ed. M. Ehrensberger-Dow, S. Göpferich, and S. O’Brien, 5–17. Amsterdam: John Benjamins.
PACTE Group. 2019. Evolution of the efficacy of the translation process in translation competence acquisition. Meta 64 (1): 242–265. https://doi.org/10.7202/1065336ar.
Petite, C. 2005. Evidence of repair mechanisms in simultaneous interpreting: A corpus-based analysis. Interpreting 7 (1): 27–49.
Pöchhacker, F. 2004. Introducing interpreting studies. London and New York: Routledge.
Robert, I.S., and L. Brunette. 2016. Should revision trainees think aloud while revising somebody else’s translation? Insights from an empirical study with professionals. Meta 61 (2): 320–345. https://doi.org/10.7202/1037762ar.
Schaeffer, M.J., S.L. Halverson, and S. Hansen-Schirra. 2019a. ‘Monitoring’ in translation: The role of visual feedback. Translation, Cognition & Behavior 2 (1): 1–34. https://doi.org/10.1075/tcb.00017.sch.
Schaeffer, M., J. Nitzke, A. Tardel, K. Oster, S. Gutermuth, and S. Hansen-Schirra. 2019b. Eye-tracking revision processes of translation students and professional translators. Perspectives 27 (4): 589–603. https://doi.org/10.1080/0907676X.2019.1597138.
Schilperoord, J. 1996. It’s about time: Temporal aspects of cognitive processes in text production. Amsterdam: Rodopi.
Seleskovitch, D. 1975. Langage, langues et mémoire: étude de la prise de notes en interprétation consécutive. Paris: Minard Lettres Modernes.
Shen, M., Q. Lv, and J. Liang. 2019. A corpus-driven analysis of uncertainty and uncertainty management in Chinese premier press conference interpreting. Translation and Interpreting Studies 14 (1): 135–158. https://doi.org/10.1075/tis.00034.she.
Shih, C.Y. 2006. Revision from translators’ point of view: An interview study. Target 18 (2): 295–312.
Shih, C. 2015. Problem-solving and decision-making in translation revision: Two case studies. Across Languages and Cultures 16 (1). https://doi.org/10.1556/084.2015.16.1.4.
Stachowiak-Szymczak, K., and P. Korpal. 2019. Interpreting accuracy and visual processing of numbers in professional and student interpreters: An eye-tracking study. Across Languages and Cultures 20 (2): 235–251. https://doi.org/10.1556/084.2019.20.2.5.
Su, W. 2020. Eye-tracking processes and styles in sight translation. Singapore: Springer.
Sycz-Opoń, J. 2019. Information-seeking behaviour of translation students at the University of Silesia during legal translation: An empirical investigation. The Interpreter and Translator Trainer 13 (2): 152–176. https://doi.org/10.1080/1750399X.2019.156507.


Tang, F., and D. Li. 2016. Explicitation patterns in English-Chinese consecutive interpreting: Differences between professional and trainee interpreters. Perspectives 24 (2): 235–255. https://doi.org/10.1080/0907676X.2015.1040033.
Tomlinson, B. 1984. Talking about the composing process: The limitations of retrospective accounts. Written Communication 1 (4): 429–445.
Van Besien, F. 1999. Anticipation in simultaneous interpretation. Meta 44: 250–259.
Van Besien, F., and C. Meuleman. 2004. Dealing with speakers’ errors and speakers’ repairs in simultaneous interpretation. The Translator 10 (1): 59–81. https://doi.org/10.1080/13556509.2004.10799168.
Van Dijk, T.A., and W. Kintsch. 1983. Strategies of discourse comprehension. New York: Academic Press.
Vinay, J., and J. Darbelnet. 1958. Comparative stylistics of French and English: A methodology for translation. Trans. and ed. J.C. Sager and M.J. Hamel. Amsterdam: John Benjamins.
Wenwen, Z., and Z. Yang. 2019. Self-repair in simultaneous interpretation. In Proceedings of the 2nd International Conference on Cultures, Languages and Literatures, and Arts (CLLA 2019), 224–229.

Chapter 6

Translation/Interpreting Product Research

Abstract In this chapter, the author discusses the areas and issues of translation and interpreting product research. Specifically, the research dealing with the following three areas is highlighted: (a) translation and interpreting quality (e.g., accuracy of legal and medical interpreting, translation errors associated with source text difficulty levels, and error types); (b) linguistic and pragmatic features in translated and interpreted texts (e.g., cohesion devices, lexical choices, lexical density, lexical dis/similarity in source text and target text, explicitation in translation, hedges, and modals); and (c) prosodic features in interpreting (pauses, intonation, speech rate and segmentation, accentuation and stress). The author highlights the methodological orientations in each research area and refers to some representative studies. It is generally concluded that not many developments have been made in translation and interpreting product research.

Keywords Translation product research · Interpreting product research · Interpreting accuracy · Translation accuracy · Translation errors · Interpreting prosody

© Springer Nature Singapore Pte Ltd. 2020
M. M. M. Abdel Latif, Translator and Interpreter Education Research, New Frontiers in Translation Studies, https://doi.org/10.1007/978-981-15-8550-0_6

6.1 Introduction: Methodological Approaches

Translation and interpreting product research examines the features of the written or spoken texts rendered by translators and interpreters. It may focus on the product performance of trainee or professional translators and interpreters only, or compare the translated and interpreted outputs produced by both groups. In some previous product studies, researchers have compared the performances of professional translators or interpreters with different competence levels. This research ultimately aims at identifying translation and interpreting performance difficulties, and at helping translator and interpreter education community members become aware of the optimal features of translated or interpreted texts and the factors influencing these features. Since this research type is concerned with translation and interpreting products, it naturally depends on analyzing their textual features. Translated and interpreted texts are usually analyzed from linguistic, error-focused, or discourse-oriented perspectives. Therefore, we will note in the studies highlighted in the


sections below that corpus-based analysis is a dominant methodology in this research type. In translation and interpreting product studies, researchers either collect learner/trainee or professional translator/interpreter corpora themselves, or access authentic professional translator/interpreter corpora already held by some institutions. Analyzing interpreting product data first requires transcribing it, and this process is therefore likely to be more time-consuming than analyzing translation product data. Researchers have adopted various approaches to reporting translation and interpreting product data. Though quantitative data analysis is generally more popular in these studies, some researchers have depended solely on reporting their data qualitatively by providing examples of the textual features examined. In some other studies, researchers have combined quantitative and qualitative data analyses. In this chapter, three main types of translation and interpreting product studies will be highlighted. These studies have focused on researching translation and interpreting quality, the linguistic and pragmatic features in translated and interpreted texts, and the prosodic features in interpreter performance.
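To illustrate the quantitative side of such corpus-based product analysis, the following minimal sketch tallies human-assigned departure codes in a transcribed corpus and reports their relative frequencies. The data and category labels here are entirely hypothetical (loosely echoing Barik-style omission/addition/error categories); real studies rely on trained raters and dedicated corpus tools rather than a script like this.

```python
from collections import Counter

# Hypothetical annotated corpus: each transcribed segment has already been
# coded by a human rater with a departure category (or marked "accurate").
annotated_segments = [
    {"segment": 1, "code": "omission:skipping"},
    {"segment": 2, "code": "accurate"},
    {"segment": 3, "code": "addition:elaboration"},
    {"segment": 4, "code": "error:mild-semantic"},
    {"segment": 5, "code": "omission:delay"},
    {"segment": 6, "code": "accurate"},
]

def category_frequencies(segments):
    """Tally coded categories and report each as a percentage of all segments."""
    counts = Counter(s["code"] for s in segments)
    total = len(segments)
    return {code: round(100 * n / total, 1) for code, n in counts.items()}

print(category_frequencies(annotated_segments))
```

Such frequency tables are typically the quantitative layer of a product study, with qualitative analysis then illustrating each category through transcribed examples.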

6.2 Researching Translation and Interpreting Quality

Most of the studies published so far on translation and interpreting products have been concerned with examining their quality. It is worth noting that these translation and interpreting quality studies are different from the ones discussed in Chap. 4 (Sect. 4.8). While the studies highlighted in Chap. 4 deal with translation and interpreting quality as perceived by users (i.e., user evaluation/reception studies), the ones discussed here focus on evaluating the quality of translation and interpreting products from a text-analysis perspective, and aim at understanding performance difficulties. Most of the studies falling in the translation and interpreting quality category have focused primarily on accuracy aspects and on translators’ and interpreters’ success or failure in achieving them. In their attempts to conceptualize translation and interpreting difficulties and problems, researchers have developed taxonomies of performance aspects at the textual level. In his seminal taxonomy of simultaneous interpreting performance problems, Barik (1971) described the various types of omissions (skipping, comprehension, delay, and compounding omissions), additions (qualifier, elaboration, relationship, and closure additions), and errors or substitutions (mild and gross semantic errors, and mild, substantial, and gross phrasing changes). In another early taxonomy of interpreting problems, Kopczyński (1980) listed five categories of errors related to competence, performance, omissions and additions, appropriateness, and translation. Nord (1997), on the other hand, classified translation errors into four categories: pragmatic, cultural, linguistic, and text-based errors. In what follows, the author highlights some of the studies researching translation and interpreting quality aspects.


6.2.1 Interpreting Quality Studies

Two issues are noteworthy with regard to the interpreting quality studies published so far. First, most of these studies have primarily focused on accuracy aspects. Second, many of them have been conducted in the legal interpreting field (i.e., court and police investigative interview interpreting). This latter research trend seems to have resulted from the serious consequences associated with inaccurate legal interpreting. Berk-Seligson (1999) found that court interpreters made errors in rendering 49.6% of courtroom leading questions due to either their failure to recognize the speaker’s intent or inadequate linguistic knowledge. Lee (2009) also examined the accuracy of interpreting witnesses’ inexplicit language in Korean–English interpreting in Australian courtrooms, and found that the inaccurate interpreting was caused by the lexico-grammatical differences between the two languages. Recognizing the difficulty of interpreting some questions in courtroom situations, Burn and Crezee (2017) analyzed student interpreters’ errors in interpreting different legal question types. They engaged 17 undergraduate students in interpreting courtroom questions taken from YouTube clips. The two researchers described their study procedures as follows:

Study participants completed one audiovisual task. … Once students had interpreted the audiovisual tasks, the scripts, with the anonymised student recordings and associated audiovisual clips were posted online using the Blackboard learning management system used at the university. This material was accessed by the anonymous language assessors who are already familiar with the grading rubric through their work as external examiners. Assessors were asked to watch the audiovisual clips, listen to the student recordings and indicate on the script what sort of interpreting choices the learner had made.
… markers were asked to focus on a limited number of features such as change, omission or addition. (p. 45)

Burn and Crezee (2017) analyzed the percentages of the students’ interpreting accuracy in each of the following courtroom question types: polar interrogative, wh-interrogative, positive declarative, positive declarative with negative tag, negative declarative with negative tag, modal interrogative, imperative, and reported speech polar interrogative. Teng et al. (2018), on the other hand, reported a study in which they used a similar research method but focused mainly on the students’ accuracy in interpreting declaratives with tag questions. Based on the results of their study, Burn and Crezee (2017) conclude that:

Student interpreters preparing for the courtroom environment clearly need to be explicitly taught the question forms prevalent in legal discourse, and the pragmatic purpose of ‘questions in disguise’ such as the imperative and the declarative. Educators must … remind students to avoid altering the illocutionary force of the questioning, thereby eroding the accuracy of the interpreting and distorting the testimony of the witness…. We suggest that to minimize errors trainee interpreters must spend time becoming familiar with all of the question types used in the courtroom, learning which types of questions predominate at different phases of examination and practicing and reflecting on how to accurately interpret them. Educators might want to focus on the question types that appear most frequently in this study. (p. 51)
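The kind of per-question-type accuracy analysis used in such studies can be sketched as follows. The counts below are entirely hypothetical and do not reproduce Burn and Crezee's actual figures; the sketch only shows how rater judgments per question type are converted into the accuracy percentages these studies report.

```python
# Hypothetical rater judgments for 17 students per courtroom question type:
# (renditions judged accurate, total renditions). Invented for illustration.
judgments = {
    "polar interrogative": (14, 17),
    "wh-interrogative": (15, 17),
    "positive declarative": (9, 17),
    "negative declarative with negative tag": (6, 17),
    "modal interrogative": (11, 17),
}

def accuracy_rates(data):
    """Percentage of accurate renditions per question type."""
    return {qtype: round(100 * accurate / total, 1)
            for qtype, (accurate, total) in data.items()}

# Rank question types from most to least problematic for trainees.
for qtype, rate in sorted(accuracy_rates(judgments).items(), key=lambda kv: kv[1]):
    print(f"{qtype}: {rate}% accurate")
```

Ranking the resulting rates makes it easy to see which question types should receive most attention in training, which is essentially the pedagogical use Burn and Crezee make of their percentages.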


Apart from the above studies drawing on real-life or simulated court interpreting data, some other studies have investigated the accuracy of police interview interpreting. For example, Hale et al. (2019) compared the performance of trained and untrained police interview interpreters. Their data consisted of audio- and video-recorded simulated police interviews performed by 56 untrained and 44 trained Spanish–English interpreters. The transcribed interpreted texts were analyzed according to categories of interpreting performance that included: (a) accuracy of propositional content; (b) accuracy of style: the manner and style of the speech delivery, pitch, hesitations, and register; (c) legal discourse and terminology; and (d) bilingual competence: English and Spanish competence as assessed in terms of grammaticality, idiomaticity, and pronunciation. Based on their data, Hale and her colleagues also identified some language-related attributes of competent versus incompetent interpreting performance. The linguistic attributes of interpreting quality included interpreting everything even when asked not to, and using correct legal terminology and style. Ouyang (2018), on the other hand, proposed a systemic functional linguistics-based framework for assessing the quality of students’ consecutive interpreting performance. This framework includes three main interpreting quality assessment criteria: ideational meaning (accuracy), interpersonal meaning (appropriateness), and textual meaning (coherence). Määttä (2018) used these three quality assessment criteria in evaluating legal interpreting in an authentic police investigative telephone interview in which French was used as a lingua franca, i.e., a bridge or common language between the interpreter and interviewee. Määttä analyzed the data qualitatively by providing transcribed examples showing the interpreter’s attempts to communicate the ideational, interpersonal, and textual meanings.
Explaining the difficulties behind the interpreter's errors in the examples given, Määttä observes that:

[T]he interpreter omits items from the interviewee's speech because the source language is incoherent and confusing, and adds items to the officer's speech in the form of reformulations because the interviewee does not seem to hear and/or understand the questions. These particularities of lingua-franca interpreting are accentuated by features related to telephone interpreting: while the interlocutors do not share the same linguistic resources, they are also unable to monitor and assess each other's resources correctly due to the lack of non-verbal resources such as gaze, body position, and gesture. These constraints add to the interpreter's responsibility for every aspect of the encounter: understanding an idiosyncratic source language, interpreting messages accurately, coherently, and cohesively, and coordinating turns. (pp. 13–14)

Some studies have focused particularly on omission errors in interpreters' performance. It is worth mentioning that researching interpreter omissions from a product-based perspective means focusing on non-strategic omissions (Cox and Salaets 2019). Lu (2018) examined propositional information loss in simultaneous interpreting by analyzing a corpus of interpretations produced by 17 professional English–Chinese interpreters. According to Lu, the interpreters' propositional omissions and errors were caused by three types of factors: 'operational constraints (concurrent listening and speaking, time constraint and incremental processing), source language factors (speed, information density, accent, linguistic complexity, technicality, etc.) and interpreting direction (B to A)' (p. 792).

An important question addressed in some product studies concerns the accuracy of consecutive versus simultaneous interpreting. In the relevant published studies, interpreter omission was used as the main indicator of interpreting accuracy. It is generally hypothesized that consecutive interpreting is less accurate than simultaneous interpreting. Explaining the reasoning behind this hypothesis, Gile (2001) points out that:

In consecutive interpreting, just as in simultaneous and for similar reasons … interpreters may also wish to reduce the lag behind the speaker, but the effect on errors and omissions may be different. In particular, they may decide not to note some speech elements which they view as unimportant but which take a long time to note …, such as relatively unimportant modifiers and digressions (comments made and information given outside the speaker's main line of reasoning). If they are not noted, there is a higher risk that they will be omitted during the reformulation phase. Note that in simultaneous, such unimportant speech elements can be reformulated at the speed of vocal articulation, so that their saturation-generating role may be less significant than in consecutive, hence a possibly weaker tendency to leave them out. In consecutive, enumerations may be rendered more incorrectly than in simultaneous because of the time lag associated with the slowness of writing (as compared to speaking), which will tend to overload working memory. (p. 12)

Gile (2001) tested this hypothesis using data collected from 20 professional interpreters who performed two interpreting tasks in both the consecutive and simultaneous modes. He found more omitted sentences and overall inaccuracy in consecutive interpreting, whereas more omissions at the digression and unimportant modifier levels occurred in the simultaneous mode. In a recent study, Cox and Salaets (2019) compared omissions in consecutive versus simultaneous interpreting using data collected from nine trainee interpreters. They analyzed the omission percentages in the consecutive and simultaneous data according to the following categories: noun phrase, verb phrase, prepositional phrase, conjunctive phrase, adjective phrase, adverbial phrase, complete sentences, and other omissions. Cox and Salaets summarized the results and training implications of their study as follows:

There was a significant difference between the interpreting modes. The total omission average…was 19.51% for consecutive interpreting, and 4.13% for simultaneous interpreting. In other words, consecutive interpreting was 15% (15.38%) more prone to omissions than simultaneous interpreting phrase types….The divergence between consecutive/simultaneous modes may have its origins in the cognitive load which is different in consecutive and simultaneous interpreting. Additionally it may concern the strategies interpreters apply to prevent cognitive saturation while retaining the core of the source text message. [The] student interpreters omitted more information when interpreting consecutively than when they were interpreting simultaneously. Awareness of this tendency may lead interpreter educators to offer trainee interpreters more consecutive interpreting/simultaneous interpreting exercises and perhaps asking students to reflect on what information they thought they had omitted and why. Identifying possible factors which play a role in omissions may help educators develop tailor-made exercises to help students improve. (p. 12)

In another comparative study, Swabey et al. (2016) compared the omissions and errors in the simultaneous and American Sign Language interpreting modes. Their data consisted of interpretations of President Obama's 2009 inaugural address in American Sign Language and three spoken languages. More omissions and lexical errors were found in the sign language interpreting mode than in the simultaneous one.

Flores et al. (2003) evaluated accuracy in medical interpreting by collecting pediatric encounter data over a period of seven months. In their 474-page audio data transcripts, they found 396 interpreter errors, with a mean of 31 errors per encounter. They calculated the frequencies and categories of these errors. Below is a summary of Flores et al.'s (2003) results:

The most common error type was omission (52%), followed by false fluency (16%), substitution (13%), editorialization (10%), and addition (8%). Sixty-three percent of all errors had potential clinical consequences, with a mean of 19 per encounter. Errors committed by ad hoc interpreters were significantly more likely to be errors of potential clinical consequence than those committed by hospital interpreters (77% vs 53%). Errors of clinical consequence included: (1) omitting questions about drug allergies; (2) omitting instructions on the dose, frequency, and duration of antibiotics and rehydration fluids; (3) adding that hydrocortisone cream must be applied to the entire body, instead of only to facial rash; (4) instructing a mother not to answer personal questions; (5) omitting that a child was already swabbed for a stool culture; and (6) instructing a mother to put amoxicillin in both ears for treatment of otitis media. (p. 6)

6.2.2 Translation Quality Studies

Compared to its interpreting counterpart, it may be surprising to note that not much translation quality research has been published in English. A common characteristic of the studies highlighted below is that they all collected data from student translators. These studies addressed two main issues: the association of source text difficulty with translation problems, and translation errors.

Campbell (1999) tried to identify the components of translation competence by examining a number of textual features in 38 English–Arabic translation exam papers. The textual features analyzed include: lexical variety ratio, misspelled tokens, mean word length, words translated directly, changed and omitted (i.e., translation changes), and function words. Farghal and Shunnaq (1992) investigated Arab students' English legal translation errors, analyzing them in terms of the syntactic, layout, and content features of legal discourse. Hale and Campbell (2002) analyzed the accuracy of students' translations and examined how it is influenced by source text difficulty. In a longitudinal study, Kujamäki (2019) examined the influence of source text difficulty and interference on students' translation errors. Data was collected from six students on two occasions (at the beginning and end of a BA study programme). Kujamäki provided qualitative examples and quantitative comparative data on ambiguity, lexis, structure, orthography, style, and co-text in the students' target texts.

On the other hand, some studies have examined students' translation errors without associating them with source text difficulty levels. For example, Eades (2011) examined the difficulties Arab student translators encounter in translating modal expressions, and categorized the types of errors the students made. Eades concludes that:

While the participants generally exhibited a sound knowledge of the dictionary meanings of the various modal expressions in the ST, the intended meaning conveyed by a given modal in the ST was frequently misconstrued in the translations, and many modal expressions in the ST were overlooked entirely. … [G]reater consideration of macro-textual factors (cohesion, text type, relationship between author and audience) through the utilization of top-down text processing skills is required in their translation training. (p. 283)

Saeed (2012) compared the performance of undergraduate and graduate Arab students in translating idioms. His study showed that the students made a wide range of errors in translating idioms. Saeed attributed these errors to the students' inability to recognize idiomatic language features and preserve the meaning of the source language idioms, as well as to cultural interference and inadequate linguistic knowledge. In a more recent study, Arhire (2017) examined the use of cohesive devices in a learner English–Romanian translation corpus. Specifically, Arhire's study focused on analyzing the problematic aspects in the students' translation of ellipsis, substitution, and reference. The data was presented qualitatively by providing translation error examples, and quantitatively by reporting the percentages of the students' solutions for translating these linguistic features. It is worth noting that very few studies have dealt with evaluating students' audiovisual translation products. In Ortiz-Boix and Matamala's (2017) study, for example, the students' post-edited audiovisual translation products were evaluated according to adequacy, wrong translation, omission, addition, fluency, register, style, inconsistencies, spelling, typography, and grammar.

6.3 Researching Linguistic and Pragmatic Features in Translated and Interpreted Texts

The studies in this product research subarea are not concerned with identifying the problematic aspects of translated and interpreted texts; rather, they try to profile the linguistic and pragmatic features of these texts. These studies therefore do not inform us about translation and interpreting performance problems, but about the linguistic and pragmatic features characterizing good translated and interpreted texts. Thus, the pedagogical insights obtained from them lie in raising translator and interpreter education community members' awareness of the textual features trainees need to learn. Unlike the translation and interpreting quality studies, the majority of the studies falling in this category have depended on professional corpora rather than learner ones. Overall, not many studies of this type have been published in English, which is why they do not yet have a clear place in translator and interpreter education research. The two studies reported by Tercedor (2010) and Dong and Lan (2010) are among the few conducted on the linguistic features in students' translated texts.


Tercedor (2010) investigated how the presence of cognates in source texts influences advanced translation students' lexical choices. Dong and Lan (2010), in turn, explored the use of grammatical and lexical cohesion devices in the English–Chinese/Chinese–English translations made by professional versus student translators. They analyzed cohesion devices in 315 translations made by 105 participant translators with three different levels of translation competence. The cohesion devices they analyzed are: lexical diversity, average word length, agentless passives, prepositional phrases, pronominal devices, demonstrative devices, definite articles, comparative devices, additive devices, adversative devices, temporal devices, and subordinating conjunctions. Based on their results, Dong and Lan conclude that:

The difference found between experienced native English, and experienced native Chinese-speaking translators and those between two levels of native Chinese-speaking translators yield several pedagogical implications for translation into the second language. First, for experienced native Chinese-speaking translators, their use of nominalizations predominantly as a means to represent abstract notions in the Chinese source text is of particular concern. A possible solution allowing them to produce translations which would seem closer to the natural style of a native English-speaking translator is to consider increasing the use of nominalizations as a means of repackaging information instead of translating literally. They may also need to increase their use of demonstrative devices to strengthen coherence and increase the accuracy of their translation. For novice native Chinese-speaking translators, it may be more practical if goals can be set for them to first develop translation competence approximating that of an experienced native Chinese-speaking translator. They need also to reduce the excessive use of additive and adversative devices in their translation, which means they may need to increase sentence variety. Their use of temporal devices and definite articles also needs to be addressed in translation teaching. (p. 76)

The available linguistic features-oriented interpreting studies seem to have been concerned only with profiling the performance of professional interpreters. Dam (1998), for example, studied interpreters' choice of lexical items by examining the lexical similarity and dissimilarity between the source and target texts. Her study suggests that 'form-based interpreting is more frequent than meaning-based interpreting' (p. 49). Lv and Liang (2019) looked at information density, lexical repetitiveness, and lexical sophistication in both consecutive and simultaneous interpreting.

As for the studies addressing pragmatic features, these seem to have been conducted only in the professional translation and interpreting context. For example, Vesterager (2017) looked at 10 expert and non-expert translators' use of explicitation in their Spanish–Danish legal translation. In the context of translation, explicitation is defined as 'the phenomenon which frequently leads to TT [target text] stating ST [source text] information in a more explicit form than the original' (Shuttleworth and Cowie 1997, p. 55). Vesterager analyzed her 10 translator participants' data qualitatively and also quantified the translators' use of explicitation at the level of four linguistic units (nominalizations, passives, system-bound terms, and elliptical phrases). In another corpus-based study, Alasmri and Kruger (2018) examined the use of conjunctive markers in translated Arabic texts. Magnifico and Defrancq (2017) studied gender-related differences in the use of hedges in simultaneous conference interpreting. They studied a corpus of 39 English and Dutch interpretations of 39 French speeches, 20 interpreted by women and 19 by men. Magnifico and Defrancq define the linguistic hedge as 'a linguistic item which mitigates an utterance, conveying tentativeness and saving face for the interlocutor or the speaker' (p. 28), and the English hedges they identified in the target texts included: 'maybe, perhaps, sometimes, I think, I believe, seem/seems, for example, sort of, kind of, you know, I mean, a bit, a little, some, and rather' (p. 31). The two researchers classified the hedges into five categories and analyzed their data by comparing the frequencies of the female and male interpreters' use of hedges against the source texts. Another study on gender-related interpreting differences was reported by Hu and Meng (2018), who examined female and male interpreters' choice of lexical items such as modals, intensifiers (maximizers, e.g., absolutely, completely; emphasizers, e.g., actually, really; and boosters, e.g., very, quite), cognitive attitude verbs (e.g., I believe, I think), and the first person plural pronoun 'we'.

Though the product studies reviewed above have clear implications for training translators and interpreters, most researchers have not spelled out these implications. This pedagogy-related gap does not exist, for instance, in the applied linguistics studies addressing similar linguistic and pragmatic features. Due attention should therefore be paid to this neglected issue in future reports on translation and interpreting product studies.

6.4 Researching Prosodic Features in Interpreter Performance

A few interpreting product studies have researched prosodic features (speech rate, pauses, intonation, etc.) in interpreter performance. Simultaneous interpreting is regarded as a particularly rich field for studying these features. Prosody plays an integral role in oral communication and in evaluating interpreter performance; there is therefore a need to foster interpreters' prosodic performance. Yenkimaleki and van Heuven (2018), who found positive effects of prosodic feature awareness training on trainees' English–Farsi interpreting performance, conclude that:

[P]rosody awareness training contributes substantially and significantly to the quality of interpreting from foreign into native language by native speakers of Farsi. The effects of prosody training will differ for other native–foreign language pairs, depending on the linguistic and phonetic similarity of the prosodic systems involved. The word and sentence prosody of English and Farsi would diverge more from one another than, for instance, German and English, but not as much as English and French. (p. 95)

Understanding interpreters' prosodic training needs requires identifying their prosodic performance weaknesses and strengths. Despite the importance of researching prosodic features in interpreter performance, only a few studies have looked at such features. Ahrens (2005) attributes the paucity of research in this area to the difficulties involved in analyzing prosodic features:


Approaches to analyses and methodology as well as definitions of prosodic phenomena are as diverse as the number of studies. This is not only the case in studies on prosody in SI [simultaneous interpreting], but also in studies on prosody in general … Purely auditive analyses are subjective, purely automatized speech processing is error-sensitive …. For a long time, the processing of audio and video data required very powerful computer resources … Transcribing and analysing audio and video data is extremely time-consuming … Recording professional material in authentic settings is difficult and requires the consent of all parties involved. … The scope and objective of the study requires certain quality standards for the recordings … Any transcription provides a selection of all phenomena comprised in the recordings … There are no generally accepted conventions of transcription for prosodic elements … Transcribing prosodic phenomena is difficult since they vary a lot. (pp. 2–3)

Such methodological complexities do seem to have dampened researchers' interest in studying prosody in interpreter performance. Though the early research in this area dates back to the late 1960s and 1970s (e.g., Barik 1973), relatively few relevant studies have been published in the past 40 years. These include, for instance, the studies reported by Shlesinger (1994), Ahrens (2004), Nafá Waasaf (2007), Martellini (2013), and Bakti and Bóna (2014). The small number of participants in these studies is noteworthy: for example, the studies conducted by Ahrens (2004) and Martellini (2013) each involved six professional interpreters. The prosodic features investigated in these studies are pauses, intonation, speech rate and segmentation, accentuation and stress, and fundamental frequency (Ahrens 2005). Martellini (2013) analyzed the following prosodic features in six professional interpreters' performance: speech rate, filled and unfilled pauses, syllable lengthening, intonation, and prominence. Below are some of the results revealed by Martellini's study:

[T]he six interpreters produced on average fewer tone units (715) than the original speaker (840). … The interpreters' intonation was found to be rich in level boundary tones (TTs: 63% vs. ST: 33%), that is to say an inconclusive intonation caused by the fact that interpreters do not produce the content and have to wait for new material. … TTs were found to be more stressed than the ST: 218 words on average versus 187 words in the ST. … The speech rate was confirmed as being lower in TTs than in ST, partly because the interpreters had to wait for new material and also as a result of the application of SI strategies, i.e. condensation, segmentation and reformulation, which led them to produce a lower number of words. Fewer pauses were found in the TTs yet their duration was greater than in the ST. … Therefore they might be fewer than those appearing in the ST, but their greater duration shows the interpreter's processing phase taking place while they are produced. (pp. 74–76)

As these results show, this type of research yields important insights that can help identify the prosodic features to be improved in interpreter performance.

6.5 Conclusion

In this chapter, the author has discussed the three main areas of translation and interpreting product research. Table 6.1 summarizes the issues covered so far in the studies representing these three areas. As has been noted, not many


Table 6.1 Overview of the research areas and issues in translation and interpreting product studies

Research area: Translation and interpreting quality
Issues researched so far:
• Interpreting quality studies (e.g., accuracy of legal and medical interpreting, accuracy of consecutive interpreting versus simultaneous interpreting)
• Translation quality studies (e.g., errors associated with source text difficulty levels, error types)

Research area: Linguistic and pragmatic features in translated and interpreted texts
Issues researched so far:
• Linguistic features (e.g., cohesion devices, lexical choices, lexical density, lexical dis-/similarity in ST & TT)
• Pragmatic features (e.g., explicitation in translation, hedges, modals, intensifiers, cognitive attitude verbs)

Research area: Prosodic features in interpreting
Issues researched so far:
• Pauses, intonation, speech rate and segmentation, accentuation and stress, and fundamental frequency

developments have occurred in translation and interpreting product research. Though product studies appeared at an earlier stage than the other translator and interpreter education research types, these other types have seen far greater developments in the past two decades. There is a need, therefore, for bringing about major developments in translation and interpreting product research. The applied linguistics field is full of research ideas and topics which have not yet been explored in either translation or interpreting research.

References

Ahrens, B. 2004. Prosodie beim Simultandolmetschen. Frankfurt: Peter Lang.
Ahrens, B. 2005. Analysing prosody in simultaneous interpreting: Difficulties and possible solutions. The Interpreters' Newsletter 13: 1–14.
Alasmri, I., and H. Kruger. 2018. Conjunctive markers in translation from English to Arabic: A corpus-based study. Perspectives 26 (5): 767–788. https://doi.org/10.1080/0907676X.2018.1425463.
Arhire, M. 2017. Cohesive devices in translator training: A study based on a Romanian translational learner corpus. Meta 62 (1): 155–177. https://doi.org/10.7202/1040471ar.
Bakti, M., and J. Bóna. 2014. Source language-related erroneous stress placement in the target language output of simultaneous interpreters. Interpreting 16 (1): 34–48. https://doi.org/10.1075/intp.16.1.03bak.
Barik, H.C. 1971. A description of various types of omissions, additions and errors of translation encountered in simultaneous interpretation. Meta 16 (4): 199–210. https://doi.org/10.7202/001972ar.
Barik, H.C. 1973. Simultaneous interpretation: Temporal and quantitative data. Language and Speech 16: 237–270.
Berk-Seligson, S. 1999. The impact of court interpreting on the coerciveness of leading questions. Forensic Linguistics 6 (1): 30–56.
Burn, J.A., and I. Crezee. 2017. That is not the question I put to you, officer: An analysis of student legal interpreting errors. International Journal of Interpreter Education 9 (1): 40–56.
Campbell, S. 1999. A cognitive approach to source text difficulty in translation. Target 11 (1): 33–63.
Cox, E., and H. Salaets. 2019. Accuracy: Omissions in consecutive versus simultaneous interpreting. International Journal of Interpreter Education 11 (2): 1–19.
Dam, H.V. 1998. Lexical similarity vs lexical dissimilarity in consecutive interpreting. The Translator 4 (1): 49–68. https://doi.org/10.1080/13556509.1998.10799006.
Dong, D., and Y. Lan. 2010. Textual competence and the use of cohesion devices in translating into a second language. The Interpreter and Translator Trainer 4 (1): 47–88. https://doi.org/10.1080/1750399X.2010.10798797.
Eades, D. 2011. Translating English modal expressions: An Arab translator trainee's perspective. Babel 57 (3): 283–304. https://doi.org/10.1075/babel.57.3.03ead.
Farghal, M., and A. Shunnaq. 1992. Major problems in students' translations of English legal texts into Arabic. Babel 38 (4): 203–210. https://doi.org/10.1075/babel.38.4.03far.
Flores, G., M.B. Laws, S.J. Mayo, B. Zuckerman, M. Abreu, L. Medina, and E.J. Hardt. 2003. Errors in medical interpretation and their potential clinical consequences in pediatric encounters. Pediatrics 111: 6–14. https://doi.org/10.1542/peds.111.1.6.
Gile, D. 2001. Consecutive vs. simultaneous: Which is more accurate? Tsuuyakkukenkyuu/Interpretation Studies 1: 8–20.
Hale, S., and S. Campbell. 2002. The interaction between text difficulty and translation accuracy. Babel 48 (1): 14–33.
Hale, S., J. Goodman-Delahunty, and N. Martschuk. 2019. Interpreter performance in police interviews: Differences between trained interpreters and untrained bilinguals. The Interpreter and Translator Trainer 13 (2): 107–131. https://doi.org/10.1080/1750399X.2018.1541649.
Hu, K., and L. Meng. 2018. Gender differences in Chinese-English press conference interpreting. Perspectives 26 (1): 117–134. https://doi.org/10.1080/0907676X.2017.1337209.
Kopczyński, A. 1980. Conference interpreting: Some linguistic and communicative problems. Poznań: A. Mickiewicz University Press.
Kujamäki, M. 2019. Source text influence in student translation: Results of a longitudinal study. The Interpreter and Translator Trainer 13 (4): 390–407. https://doi.org/10.1080/1750399X.2019.1615166.
Lee, J. 2009. Interpreting inexplicit language during courtroom examination. Applied Linguistics 30 (1): 93–114.
Lu, X. 2018. Propositional information loss in English-to-Chinese simultaneous conference interpreting. Babel 64 (5–6): 792–818. https://doi.org/10.1075/babel.00070.lu.
Lv, Q., and J. Liang. 2019. Is consecutive interpreting easier than simultaneous interpreting? A corpus-based study of lexical simplification in interpretation. Perspectives 27 (1): 91–106. https://doi.org/10.1080/0907676X.2018.1498531.
Määttä, S.K. 2018. Accuracy in telephone interpreting: The case of French as a lingua franca in Finland. The Interpreters' Newsletter 23: 1–17.
Magnifico, C., and D. Defrancq. 2017. Hedges in conference interpreting: The role of gender. Interpreting 19 (1): 21–46. https://doi.org/10.1075/intp.19.1.02mag.
Martellini, S. 2013. Prosody in simultaneous interpretation: A case study for the German-Italian language pair. The Interpreters' Newsletter 18: 61–79.
Nafá Waasaf, M.L. 2007. Intonation and the structural organisation of texts in simultaneous interpreting. Interpreting 9 (2): 177–198.
Nord, C. 1997. Text analysis in translation: Theory, methodology and didactic application of a model for translation-oriented text analysis. Amsterdam: Rodopi.
Ortiz-Boix, C., and A. Matamala. 2017. Assessing the quality of post-edited wildlife documentaries. Perspectives 25 (4): 571–593. https://doi.org/10.1080/0907676X.2016.1245763.
Ouyang, Q. 2018. Assessing meaning-dimension quality in consecutive interpreting training. Perspectives 26 (2): 196–213. https://doi.org/10.1080/0907676X.2017.1369552.
Saeed, A. 2012. Difficulties Arab translation trainees encounter when translating high frequency idioms. Babel 58 (2): 181–204. https://doi.org/10.1075/babel.58.2.04sae.
Shlesinger, M. 1994. Intonation in the production and perception of simultaneous interpretation. In Bridging the gap: Empirical research in simultaneous interpretation, ed. S. Lambert and B. Moser-Mercer, 225–236. Amsterdam/Philadelphia: John Benjamins.
Shuttleworth, M., and M. Cowie. 1997. Dictionary of translation studies. Manchester: St. Jerome.
Swabey, L., B. Nicodemus, M. Taylor, and D. Gile. 2016. Lexical decisions and related cognitive issues in spoken and signed language interpreting: A case study of Obama's inaugural address. Interpreting 18 (1): 34–56. https://doi.org/10.1075/intp.18.1.02swa.
Teng, W., J.A. Burn, and I.H.M. Crezee. 2018. I'm asking you again! Chinese student interpreters' performance when interpreting declaratives with tag questions in the legal interpreting classroom. Perspectives 26 (5): 745–766. https://doi.org/10.1080/0907676X.2018.1444071.
Tercedor, M. 2010. Cognates as lexical choices in translation: Interference in space-constrained environments. Target 22 (2): 177–193. https://doi.org/10.1075/target.22.2.01ter.
Vesterager, A.K. 2017. Explicitation in legal translation: A study of Spanish-into-Danish translation of judgments. The Journal of Specialised Translation 27: 104–123.
Yenkimaleki, M., and V.J. van Heuven. 2018. The effect of teaching prosody awareness on interpreting performance: An experimental study of consecutive interpreting from English into Farsi. Perspectives 26 (1): 84–99. https://doi.org/10.1080/0907676X.2017.1315824.

Chapter 7

Researching Professional Translator/Interpreter Experiences and Roles

Abstract In this chapter, the author highlights the research dealing with professional translator and interpreter experiences and roles. This research type explores the dimensions related to professional translator and interpreter perceived competences, uses of particular work tools, and their work difficulties and roles. The author provides an overview of the following four areas of this research: (a) translator and interpreter use of technology; (b) correlates of translator/interpreter competence; (c) profiling translator practices; and (d) profiling interpreter experiences and roles in different fields. In the sections covering these areas, the author reviews the studies representing each, and explains the research methods used. It is generally noted that the larger part of this research type has been concerned with exploring interpreter practices and roles. The chapter ends with highlighting the gaps that remain to be addressed in professional translator and interpreter experience research. Keywords Translation research · Interpreting research · Workplace research · Interpreter roles · Community interpreting · Public service interpreting · Translation technology · Machine translation · Workplace research

© Springer Nature Singapore Pte Ltd. 2020 M. M. M. Abdel Latif, Translator and Interpreter Education Research, New Frontiers in Translation Studies, https://doi.org/10.1007/978-981-15-8550-0_7

7.1 Introduction: Methodological Approaches

Research on professional translator and interpreter experiences and roles is an area of utmost importance to their training. 'Workplace research' is another name that could be used to describe the studies dealing with this area. In this chapter, the author discusses research addressing translator and interpreter perceptions, tasks, work difficulties, competences, uses of particular work tools, and the issues related to their workplace roles. Research on these issues can provide translator and interpreter educators with very important training implications. Highlighting the importance of this research type to translator training, Sakamoto (2017) states that 'to offer translation education that prepares our students adequately for their future professional career, it is important to recognize the different subcultures of translation, particularly those of professional translators' (p. 271). Ehrensberger-Dow (2014) similarly notes that 'workplace studies can be motivated by a pedagogical interest in knowing what professional translators do, in order to better prepare students for their future profession' (p. 380). This chapter discusses and highlights the research approaches and methods used in four main areas of professional translator and interpreter experience research. These four areas are:

– Translator/interpreter use of technology,
– Correlates of translator/interpreter competence,
– Profiling translator practices, and
– Profiling interpreter practices and roles.

The author discusses the research issues covered in each area, and highlights representative studies of each and the research methods used. As will be noted in the sections below, the majority of the studies belonging to this research type are characterized by their surveying nature. The readers will find that questionnaires and interviews have been used heavily in these studies. Moreover, a large number of professional translator and interpreter experience studies have combined the two data sources. The readers will also note that cross-cultural surveys have been used in some studies concerned with comparing the translation/interpreting practices in particular countries or world regions. Other data sources used include document review or analysis, psychological scales, and audio- and video-recorded performance data. The recorded performance data used in these studies is authentic data taken from real-life translation/interpreting situations. In analyzing this performance data, researchers depended mainly on the discourse analysis approach. Given that this research type is mainly related to workplace practices, collaboration with the language service providers for which translators and interpreters work is an essential issue. There are also some ethical issues that should be considered in conducting and reporting this research. As Ehrensberger-Dow (2014) states: [T]he choice of LSPs [language service providers] is crucial to the success of workplace research: they should be interested enough and large enough to handle the demands on staff resources that involvement in such a project inevitably entail. Before the project begins, researchers should spend time on the LSP premises in order to better anticipate and find solutions for possible problems and complications. … For ethical reasons, participation by individual translators should be voluntary, and their anonymity must be guaranteed by removing all identifying information from data for analyses.
Any data or examples used for publication or educational purposes should be modified to ensure anonymity of the participating translators and to protect the LSPs from reputational risk. Confidentiality issues cannot be underestimated, and protocols should be worked out well before data collection begins. (p. 379)

7.2 Translator/Interpreter Use of Technology

A number of studies have dealt with how professional translators and interpreters use and perceive technology in their workplaces. A main difference can be noted between the studies investigating the use of technology in the translation field and those in the interpreting field.


While translation studies have looked mainly at translators' use and perceptions of technological tools in completing their workplace tasks, interpreting studies have been concerned with examining interpreters' evaluation of, and performance in, technology-mediated environments. Many of the studies in these two categories are characterized by combining questionnaire and interview data, and by collecting data from a large number of participants. As may be expected, the focus (and definitely the findings) of the early translation studies of this type differs from that of the recent ones. This can be noted in the studies examining translators' use of CAT tools in different decades. For example, Fulford and Granell-Zafra (2005) investigated 591 UK-based freelance translators' familiarity with and use of translation technologies. They found that 75% of their respondents were not familiar with machine translation systems. Lagoudaki (2006) also collected questionnaire data from 699 translation professionals (i.e., translators, terminologists, reviewers, etc.) from 54 countries about their use of and attitudes toward translation memory software. These two early studies can be compared to the one reported by Christensen and Schjoldager (2017), who surveyed Danish translation service providers' use of CAT tools. They collected questionnaire data from 29 translation service provider companies in Denmark. The questionnaire included background (mainly closed) questions, as well as main questions asking respondents how often they use CAT tools (i.e., machine translation, translation memory software, and terminology management systems), in which combinations, and what they perceive the impact of using these tools to be.
As noted, researchers moved from asking participants about their familiarity with a limited number of CAT tools in the early studies (e.g., Fulford and Granell-Zafra 2005) to examining the frequency of their use of a wider range of tools in the more recent ones (e.g., Christensen and Schjoldager 2017). Collecting observational data from a large Danish language service provider, Bundgaard and Christensen (2019) investigated seven professional translators' use of consultation resources while editing machine-translated technical texts. Using computer-keystroke logs, they found these translators made use of the following seven resources in their editing of the text: concordance search, termbase search, Google search, webpage search, online dictionaries, offline dictionaries, and reference texts. Of all these consultation resources, concordance search was the translators' preferred one. Translators' use of machine translation software has also received reasonable research attention. In their cross-cultural study, Gaspari et al. (2015) surveyed the machine translation competencies of 438 translation professionals in 21 countries. The sample of their study included freelance translators, language service providers, translator trainers, and academics. Their 28-item questionnaire was designed to collect data about the respondents' sociodemographic information, their approaches to evaluating the quality of human/professional translation, their language combinations for machine translation, their views on machine translation quality, their satisfaction with machine translation evaluation, and machine translation post-editing scenarios. Moorkens et al. (2018) also investigated translators' perceptions of machine translation use in translating literary texts, with their unique characteristics. They engaged six professional translators in translating six literary texts from English into Catalan


under three conditions: translating from scratch, post-editing neural machine translation, and post-editing statistical machine translation. The data of this study was collected using questionnaires and interviews before and after the translation tasks. In another study, Rossi and Chevrot (2019) investigated European Commission professional translators' uses and perceptions of machine translation. Their study consisted of two phases. In the first phase, they depended on a 3-week research stay during which they used daily observations and 10 semi-directed interviews in the French language department of the European Commission to explore translators' uses and perceptions of machine translation and their post-editing practices. The ethnographic data generated from this phase was used to develop a survey (the second phase). The survey was completed by 89 respondents from 15 language departments at the European Commission. It included 28 questions covering respondents' background information, their attitudes toward and evaluation of using machine translation in their workplace, the frequency of their use of machine translation software, the activities or tasks they enjoy doing with or without machine translation, and their expertise and skills in using machine translation and CAT tools. Other studies have researched translators' perceptions and purposes of using resourcing tools such as Wikipedia and translation/translator platforms. Alonso (2015) surveyed professional translators' use of Wikipedia and other technological sources through focus group interviews and an online questionnaire.
The questionnaire was completed by 65 respondents. It focused on finding out how familiar they were with the Internet and which technological resourcing tools they used (e.g., Google, online dictionaries, corpora, terminology databases, blogs, mobile phones, etc.), and on eliciting the respondents' use of Wikipedia and the documentational, terminological/lexicographical and visual resourcing purposes of such use (e.g., finding information about the topic, finding the meaning of cultural references and terms, viewing the images of particular terms, creating a glossary or parallel text reference, etc.). In addition, five professional translators took part in her two focus group interview sessions, which were organized as follows: The focus groups … consisted of a fluent and semi-structured exchange of views among participants, conducted in a relaxed environment. Firstly, participants were asked about their background, specialisations, experience, languages, etc. Then, each of them described their way of approaching a translation brief: the tools they used, how they organised their work, the needs they had during the translation process, their relationships with other relevant agents (project managers, clients, other translators), etc. Finally, participants were asked whether they used Wikipedia, for what purposes and what perception they had of it. (p. 91)

In the Austrian context, Heinisch and Iacono (2019) explored professional translators’ attitudes toward and uses of translator platforms. They differentiate between translator platform and translation platform as follows: [T]he term ‘translation platform’ … in its broadest sense to refer[s] to a web-based technological system that facilitates, customises and/or automates the translation workflow and/or translation management within a company. Translator platform, on the other hand, can be loosely described as a web-based technological system that facilitates the work of translators by providing a marketplace as well as translation-related tools and/or resources. The first system aims at facilitating translation workflows (of companies), whereas the latter


aims at facilitating the work of (freelance) translators. While translation platforms focus on the translation process and its technological component, including automation of workflows, translator platforms primarily focus on the person of (freelance) translators and their needs…. To sum up, translation platforms are characterised by process orientation, whereas translator platforms are focusing on people. Both enable interaction among translators or between translators and clients. Translator platforms pay attention to the marketplace and/or mutual exchange of information, knowledge or resources. (pp. 64–67)

In their study, Heinisch and Iacono used semi-structured interviews to explore professional translators' uses and expectations of translator platforms. They individually interviewed eight professional translators for about 70 min each. In the interviews, the translators were asked about the translation technologies and language resources they used, and discussed translation and translator platforms without the meaning of these terms being explained to them. The interview data revealed that these professional translators use marketplace translator platforms, such as TranslatorsCafé and ProZ.com, and that they use them for a number of purposes (for example, searching for terminology and networking). With the increasing virtual communication within translation companies, Sakamoto and Foedisch (2017) tried to fill an important research gap by exploring the dynamics of feedback exchanged electronically between language service providers' project managers and freelance translators. They focused particularly on the perceived value and views of feedback for both groups. Compared to the above translation studies, the studies dealing with interpreters' use of technology are fewer. Some researchers have approached this issue from a broader perspective. For example, Mellinger and Hanson (2018) surveyed community, conference, and medical interpreters' attitudes toward technology use and the relationship of these views with their own perceptions of interpreter roles and communication apprehension. Other researchers have addressed the issue from a narrower perspective, exploring interpreters' work in technology-mediated environments. A study belonging to this research strand was conducted in the Swedish context, where Warnicke and Plejert (2016) examined the interpreter's positioning in a video relay interpreting (VRI) service providing bimodal mediation between individuals using sign language and others using spoken Swedish.
They define positioning as: The ways in which interpreters orient themselves to the contingencies of the setting on a moment-by-moment basis, in relation to the impact of technology, participants’ knowledge asymmetries (e.g., prior experience of VRI), their physical separation, and the need for two arenas (visual and auditive). (p. 198)

Warnicke and Plejert analyzed nine excerpts from two video relay interpreting calls and focused specifically on aspects related to the interpreter’s positioning such as briefing, temporarily losing sound or image, informing users of extra-linguistic items, and recognizing the need to conclude the interaction. In a more recent study, Braun (2018) looked at legal interpreters’ perceptions of video-mediated interpreting by using a questionnaire and semi-structured interviews. The questionnaire was used to elicit interpreters’ experiences and perceptions


of video-mediated interpreting and their satisfaction with it. This questionnaire was completed by 84 interpreters working in England and 82 interpreters working in 26 other countries. The semi-structured interviews were conducted with ten legal interpreters based in England and focused on their experiences with different configurations of video-mediated interpreting in the legal sector. Braun also supplemented the questionnaire and interview data with site visits and a review of the relevant legal documents. Her data was organized in the following categories: interpreters' satisfaction with video-mediated interpreting, their perceptions of the technical quality of video-mediated interpreting, their interaction with the videoconference equipment, and the distribution and spatial organization.

7.3 Correlates of Translator/Interpreter Competence

A group of studies have explored the correlates of professional translator/interpreter competence. These studies aimed at identifying the factors facilitating or hindering effective translator/interpreter performance. Translator/interpreter sociodemographic characteristics are among the factors examined in these studies. For example, Diamond et al. (2012) surveyed the factors associated with dual-role interpreters' English language competence and medical terminology knowledge. Dual-role interpreters are ad hoc interpreters whose main position involves performing either administrative or clinical roles but whose bilingual language skills enable them to perform the role of an interpreter as a secondary position (Wilson-Stronks and Galvez 2007). The factors explored in Diamond et al.'s study included age, gender, educational background, clinical job, and the interpreting training received. They related these factors to the language competence and medical terminology test scores obtained by 387 dual-role interpreters in the USA. Psychological dimensions have received particular attention in professional translator and interpreter research. Bontempo and Napier (2011) hypothesize that 'variance in interpreter performance is dependent on factors of both general cognitive ability and personality' (p. 85). Likewise, Atkinson (2012) argues that psychological dimensions represent 'an important link in the model of the relationship between job ability and motivation, job constraints, and subsequent job performance in translation' (p. I). In a correlational study conducted with 110 accredited sign language interpreters in Australia, Bontempo and Napier (2011) explored the relationship of their self-perceived competence with self-efficacy, goal orientation, and negative affectivity (i.e., anxiety and neuroticism). Their study indicates the important role played by emotional stability in predicting interpreter performance.
Atkinson (2012) examined the psychological dimensions associated with freelancers' work success. He collected survey data from 43 professional translators working in New Zealand and 92 from various other countries, and conducted interviews with 10 of them. Atkinson used a questionnaire to measure the translators' job satisfaction, their professional focus (i.e., the extent to which they are willing to work as translators), and their self-ability beliefs and attributions for success and negative


outcomes. The interviews Atkinson conducted focused on similar issues. Schwenke et al. (2014) also examined the association between sign language interpreters' burnout and their perfectionism, perceived stress, and coping resources. Another psychological dimension addressed in this research type is translator/interpreter job satisfaction. The few available studies on this issue seem to have been conducted only in the interpreting field. For example, Martikainen et al. (2018) surveyed the job satisfaction of sign language interpreters and the main working conditions and external factors influencing it in the Finnish context. They used an online questionnaire that was completed by 135 sign language interpreters in Finland. Gender-related differences in translator and interpreter work perceptions have also received little research attention. Gentile (2018) investigated the differences in female and male conference interpreters' attitudes toward their profession and perceptions of their professional status. Gentile's study used a survey that was completed by 805 interpreters working in different countries.

7.4 Profiling Translator Practices and Roles

The main research strand in professional translator/interpreter experience studies is concerned with profiling their practices and roles. The importance of this research strand lies in identifying professional needs and practices as a basis for translator/interpreter training. Talking about such importance in the journalistic translation field, Li (2006) states that: [S]tudents have often felt underprepared in journalistic translation even after taking some related courses… one of the major reasons accountable for this is the gap between institutional translator training and the real world of professional translation, which, in the context of journalistic translation, manifests itself as the difference in translation methods taught in translation programs and used in professional practice. (p. 611)

Not much research profiling translator practices has been published. Very few studies have attempted to profile translation practices in a particular country or region. A good example of such studies is the one reported by Kafi et al. (2018), who looked at the challenges of establishing a translation profession in Iran. They conducted in-depth interviews with 11 participants (five translation researchers, three translators, two translation agency managers, and a head of a publishing house). The interview data collected in this study was organized into five main themes: (a) administrative issues: lack of a strong translation guild, absence of a unified code of ethics and market entrance criteria, and the popularity of unofficial translation services; (b) issues of social status: non-recognition of translators and the dominance of misconceptions about translation; (c) issues related to translation agents: disunity among translation agents, unfamiliarity with basic rights and duties, and ghostwriting translation (i.e., publishing a translated work under the name of another author with no adequate knowledge of the source language); (d) training issues: outdated syllabus, and neglecting the role of experienced translators; and


(e) economic issues: economic conditions and pricing imbalance. Based on their results, Kafi and his colleagues provided some suggestions for overcoming these difficulties and fostering the translation profession in Iran. Other studies profiling professional translation practices have focused on investigating translator work in particular fields rather than geographical regions. These fields include journalism translation, medical translation, revision of translated works, and translation in advertising agencies. Li (2006), for instance, surveyed the methods Hong Kong newspapers employed in the Chinese translation/adaptation of English international news. He spent three consecutive days recording notes about the translation methods used in four major Hong Kong newspapers. It was found that these newspapers used three main translation methods: complete translation, selective transadaptation, and news staff reporting. Li compared these methods to the journalism translation training methods used at Hong Kong universities. A question that remains to be answered in the scientific translation field is who is more efficient in performing its tasks: professional translators or scientific experts with a good linguistic background? Muñoz Miquel (2018), for instance, highlights the need for examining the medical translation performance differences between professional translators and subject-matter experts: In the literature on medical translation, the question as to who translates (or should translate) medical texts has been largely discussed on the basis of the traditional linguists versus subject-matter experts opposition. Both scholars and professional translators have attempted to determine medical translators' profile by making statements about the characteristics of translators with a linguistic background and those of translators with a scientific-medical one.
These statements are generally based on intuition or personal experience rather than on empirical data which can be used to back up any kind of evaluation that may be made. (p. 24)

The studies reported by Nisbeth Brøgger (2017) and Muñoz Miquel (2018) are among the few dealing with subject expert versus professional translator views on medical translation. Nisbeth Brøgger (2017) conducted interviews with two focus groups of five professional translators and five pharmacists to investigate translator conceptions of patient information leaflet translation and comprehensibility, and to identify the causes of difficulties in translating medical texts. Her interviews focused on three main issues: how the translators approach the task of translating patient information leaflets and perceive its purposes, the translators' perceptions of the receivers' characteristics and needs, and their views on the role and freedom of the translator in the process of translating patient information leaflets. Muñoz Miquel (2018) also surveyed the profiles and practices of medical translators with a linguistic background and those with a scientific-medical one in Spain. She depended on a 49-item survey, which was completed by 189 translators with the two backgrounds. Based on the respondents' answers to this questionnaire, Muñoz Miquel provided a comparative profile of the medical translators belonging to the two backgrounds. This profile consisted of their background information, sociodemographic information (e.g., academic qualification and years of experience) and socio-professional aspects, including: reasons for entering the medical translation field, percentage of


work entailing medical translation and its combination with other professional activities, the other types of translation performed, types of clients, the text genres translated (e.g., case reports, medical records, clinical guidelines, textbooks, medical reports, and manuals), the main difficulties encountered (e.g., translating cultural asymmetries, using phraseology suitable to target readers, understanding highly specialized concepts, and choosing appropriate terminology), the documentation resources used, the aims of self-taught training, and collaboration with other professionals to review their medical translations. On the other hand, Vandal-Sirois (2016) investigated the translation practices in advertising agencies. He used non-participant direct observations and semi-structured interviews with two case studies. In his study, Vandal-Sirois focused particularly on profiling the advertising translators' duties, responsibilities, and professional work environment and relationships. The data collected revealed that in advertising agencies the translator acts as a multitasking production partner and an intercultural mediator in social media. Based on his data, Vandal-Sirois concludes that: Our case studies demonstrate that advertising adaptation assignments go far beyond linguistic preoccupations, and that the translator acts as a multitasking cultural agent. In our first case study, the translator is involved in the entire process of producing a TV spot, from the casting to collaborating with the editor (as opposed to simply translating the on-screen text). In the second case study, after adapting corporate publications for social media, the translator is allowed to create French responses in the name of the brand, since he knows the client and his product as well as the creative team that created the original English messages. (p. 543)

Marin-Lacarta and Vargas-Urpi (2018) explored how the process of proofreading and revising translated works is completed in a non-profit digital publishing house. They focused mainly on the revision process workflow, the negotiation of revision decisions and the interactions among revisers and translators, and the differences between the various revision stages. To study these issues, Marin-Lacarta and Vargas-Urpi conducted in-depth interviews with 16 participants (translators, revisers, a proofreader, and a cover designer), and collected other data types, including participant reflective diaries, fieldnotes, e-mail correspondence, and translation drafts.

7.5 Profiling Interpreter Practices and Roles

Compared to its translation counterpart, much more research has tried to profile interpreter practices and roles. This can be attributed to the more complex and varied nature of interpreter roles in their workplace situations. Apart from interpreting, interpreters normally play other roles while performing their tasks in facilitating the communication process. Arumí Ribas (2017) summarizes these perceived roles as follows:


[T]here is great ambiguity regarding the terms employed to define the roles, profiles and scope of the individuals who act as intermediaries in the communication process in public services. The main confusion surrounds the terms of intercultural mediator and interpreter…. If we review academic discussions in recent years regarding the role of the interpreter, we find a continuum ranging from the notion of the interpreter as a neutral and invisible figure right through to intervention, including half-way positions which refer to the interpreter as an active participant, as a cultural broker or gatekeeper (Davidson 2000). (pp. 195–196)

The research conducted on professional interpreter practices has been mainly concerned with profiling such practices, exploring interpreter roles, and examining interpreters' work difficulties. The dominant trend in such research is to focus only on interpreters working in one interpreting field (for example, healthcare interpreting, court interpreting, or conference interpreting). On the other hand, a few studies have investigated the practices and roles of groups of public service or community interpreters working in hospitals, courtrooms, government offices, and educational institutions. The three studies reported by Ortega Herráez, Abril, and Martin (2009) and Vargas-Urpi (2016, 2019) are examples of the little research following this approach. Ortega Herráez et al. (2009) surveyed the work, roles and perceptions of a group of community interpreters working in Spanish hospitals, law courts, and social service, civil defense, and security organizations. They used a questionnaire and structured interviews to explore these interpreters' role perception and their 'adaptation of language register, cultural explanations, expansion and omission of information, the relation with clients, and specialized terminology, amongst other aspects' (p. 149). Vargas-Urpi (2016) interviewed 20 public service interpreters who were working in healthcare, court, telephone, education, and social services interpreting. She focused on the problems and difficulties they usually encounter at the word and discourse levels, and the strategies they use to overcome them. In a later study, Vargas-Urpi (2019) investigated the difficulties encountered in public service interpreting by interviewing five Chinese–Spanish/Catalan interpreters who had varied experiences in health care, social service, police, and court interpreting.
With regard to the research addressing interpreting practices in a single interpreting field, exemplary studies representing it are reviewed in the next subsections. The first subsection is devoted to conference interpreting research, whereas the remaining subsections cover the research addressing professional interpreter practices in a number of public service or community interpreting fields, specifically: health care, court, police interview, war-related conditions, telephone, and sign language interpreting. The interpreters were the main participants in the studies highlighted in these subsections, and in many of them they were indeed the only participants. In addition to interpreters, some studies have also collected data from other stakeholders such as service providers (e.g., hospital staff) and other interaction participants (e.g., doctors or police officers).

7.5 Profiling Interpreter Practices and Roles


7.5.1 Conference Interpreting

Not many studies have documented conference interpreters' work experiences. The few available studies have dealt with two main issues: interpreters' real-life work experiences and perceptions, and their conference preparation strategies. The studies reported by Han (2016) and Seeber et al. (2019) explored conference interpreters' real-life work experiences. Han (2016) sought to provide a detailed account of real-life conference interpreting practices in China by using an online questionnaire, which was completed by 140 English/Chinese conference interpreters based in China. Based on his questionnaire data, Han summarized the following three main findings:

(a) Conference-related materials (mainly programmes and speakers' scripts/notes) are often received late, leaving little preparation time; (b) Interpreters do a much wider variety of simultaneous interpreting tasks than previously thought, albeit with varying degrees of frequency; (c) Difficulties are felt to arise mainly from technical subject matter and terminology, speakers' delivery (strong accent, speed), and lack of preparation. (p. 259)

As for Seeber et al.'s (2019) study, it explored interpreters' attitudes toward providing video remote conference interpreting services during the 2014 FIFA World Cup. Specifically, they investigated the interpreters' views on remote interpreting in general and in this international event in particular, their own experiences with remote conference interpreting, and how such experiences influenced their psychological and physiological well-being. The study used a mixed-methods approach combining the following quantitative and qualitative data sources: a pre-event questionnaire, mid-event interviews, a post-event questionnaire, and one week of observation collecting documentary and objective data about the remote conference interpreting venue and setting. The researchers collected data from a total of 81 interpreters and, based on their results, offered suggestions for improving the key parameters of remote conference interpreting. The studies reported by Jiang (2013) and Chang et al. (2018), on the other hand, were concerned with conference interpreters' preparation strategies. Jiang (2013) looked at simultaneous interpreters' use of glossaries. She used an online questionnaire completed by 500 interpreters, most of whom were members of the International Association of Conference Interpreters. This questionnaire was first piloted in printed form with some interpreters at a UN conference. Chang et al. (2018) investigated how conference interpreters develop domain-specific knowledge about unfamiliar topics before, during, and after the conferences in which they participate. They interviewed 10 Chinese–English interpreters about their preparation strategies for such conferences.
Before interviewing each interpreter, the researchers collected their five latest conference programmes, analyzed the knowledge domains these covered, and developed the interview questions based on one representative conference so as to explore the interpreter's acquisition of conference-related knowledge. The results of this study show that the 10 conference interpreters resorted to:


7 Researching Professional Translator/Interpreter Experiences …

[S]trategic preparation of unfamiliar topics: to facilitate comprehension and reformulation, interpreters make good use of conference documents and compile glossaries in which they organize the concepts and terminology specific to the conference. As they assimilate the language usage of the presenters and other participants during the conference, they use their analytical skills to manage any difficulties. Keeping in mind the aims of the event (e.g., commercial, scientific), as well as the profiles of the speakers and target audience, helps to optimize availability of relevant knowledge at short notice and continue updating it during the assignment. (p. 204)

7.5.2 Healthcare Interpreting

Healthcare interpreters' professional practices have perhaps received more research attention than those in any other interpreting field. Two main factors have likely stimulated this greater attention: healthcare interpreting situations and interpreters are more easily accessible than those in settings such as court, police, and war-related conditions, and healthcare interpreting is rich in discoursal features. Two main types of studies on professional medical interpreters can be identified: survey studies and discourse analysis studies. The survey studies have focused on exploring medical interpreters' roles and tasks. Using the job analysis approach, Swabey et al. (2016) surveyed the job tasks performed by designated healthcare interpreters (i.e., those working with deaf health professionals). Twenty-two designated healthcare interpreters responded to their survey, which included questions about their work experience, certification, and training, the types of settings in which they acted as interpreters, and a list of 49 tasks whose importance for their work they were asked to rate. In another study, Arumí Ribas (2017) conducted semi-structured interviews with 26 health service staff, 15 healthcare interpreters, and 9 health service managers to explore how these groups view the roles involved in healthcare interpreting in Catalonia, particularly their views on the interpreting versus mediation roles and how these roles overlap. Based on her data analysis, Arumí Ribas concludes that:

Both healthcare service managers and staff as well as mediators agree that the following roles are characteristic of communication professionals in public services: (1) interpreting, (2) mediation if there is any cultural misunderstanding, (3) accompanying (inside or outside the healthcare system), (4) cultural orientation within the healthcare system (and outside it), and (5) translation and adaptation of written medical material.
In addition to these roles, both healthcare staff and PSI [public service interpreting] professionals include that of diffusion of the services. Similarly, the managers mention that mediators are also associated with other tasks that go beyond their job description such as customised user support. (p. 201)

The discourse analysis studies of healthcare interpreter practices have depended on audio-, video-, and eye-tracking-recorded observational data. Researchers have analyzed the discourse features in these recordings to identify interpreters' roles and their influence on doctor–patient interactions. Some of these studies have triangulated the observational data with other data types. An early study on healthcare interpreter roles was reported by Athorp and Downing (1996), who examined the
influence of professional versus non-professional interpreters on doctor–patient interactions by comparing the distribution of the speakers' turns. In another early study, Bolden (2000) looked at the role of medical interpreters in doctor–patient interactions by analyzing the interpreters' involvement in medical history-taking during consultations. Bolden's study drew on video- and audio-recorded data of two interpreter-mediated interviews in a US hospital. Her study revealed that:

Medical interpreters are found to share the physicians' normative orientation to obtaining objectively formulated information about relevant biomedical aspects of patients' conditions. Thus, far from being passive participants in the interaction, interpreters will often pursue issues they believe to be diagnostically relevant, just as they may choose to reject patients' information offerings if they contain subjective accounts of their socio-psychological concerns. (p. 387)

In the studies conducted by Leanza (2005) and Vranjes et al. (2019), video-recorded data of interpreter-mediated medical consultations was supplemented with other data types. Leanza (2005) studied the various roles of the healthcare interpreter as perceived by physicians and interpreters, and as enacted in video-recorded consultations in a pediatric outpatient clinic in Switzerland. He used these video-recorded consultations in stimulated recall interviews with eight pediatric residents to investigate their perceptions of the interpreter's roles. Additionally, the same video-recorded consultations were used in stimulated recall interviews with four interpreters. Leanza also provided extracts of consultations to illustrate the various roles played by the interpreters. Based on this data, Leanza proposed a typology of four healthcare interpreter roles: (a) a system agent who 'transmits the dominant discourse, norms, and values to the patient'; (b) a community agent who plays the role of the informant and culture broker; (c) an integration agent who welcomes patients and provides orientation support to them outside consultation situations; and (d) a linguistic agent who 'has to find the proper translation on the fly' (pp. 186–187). Leanza concludes that 'in the consultations, interpreters act mainly as linguistic agents and health system agents and rarely as community agents. This is consistent with the pediatricians' view of the interpreter as mainly a translating machine' (p. 167). In another study, Vranjes et al. (2019) tried to describe the interpreter's role in therapeutic talk between doctors and patients by combining video recording with eye-tracking data. They recorded a session of interpreter-mediated therapeutic talk at a mental health institution.
In analyzing their data, they focused on conversation analysis of the interlocutors' interactions, the interpreter's and therapist's listener responses, and the verbal and nonverbal responses indicated by head nods. Other issues investigated in the discourse analysis studies of healthcare interpreting practices include the interpreter's role as a cultural broker and their visibility. Penn and Watermeyer (2014) examined the interpreter's role as a cultural broker by analyzing 10 interactions video-recorded over a 2-month period in a Southern African child psychiatry clinic. They used thematic conversation analysis to examine this cultural broker role in the collected data. Based on their results, Penn and Watermeyer provided the following implications for interpreter training:


Training in the future might focus on the development of dyads and ultimately enhancing caregiver agency. An understanding of potential barriers to communication is critical and training of such partnerships may lie in multidimensional qualitative methods which enable an understanding of specific routines in the naturalistic context of the clinic and reinforce models of training which are team based, encourage self-reflection, and which promote patterns and resources which enable flexibility and trust. … Training in an ideal context of course would involve much more than training the cultural broker. The inclusion of uninterpreted segments and giving the cultural broker more space to add advice and commentary at times should not automatically be seen as problematic but should be evaluated in relation to the context in which it occurs. We look forward to the development of more contextually attuned training programmes for dialogue interpreting. (pp. 369–370)

Zhan and Zeng (2017) relied on observing and recording interpreted medical consultations to explore the visibility of the interpreter. The data they analyzed consisted of 29 interpreted medical consultations in which four interpreters took part. In analyzing the data, Zhan and Zeng focused on the turns taken by the interpreters to identify their text ownership, i.e., their 'inclusion of personal ideas, or institutional knowledge or beliefs, in a turn or utterance' (p. 99). They provided quantitative data on each of the four interpreters' interpreted events, turns with total text ownership, and turns with partial text ownership. They also included discourse examples showing the interpreters' attempts to expedite conclusion drawing, redirect turns, express solidarity with the patients, and educate them about medical practice or hospital and health system arrangements.

7.5.3 Court Interpreting

The little research addressing court interpreting practices has also focused on interpreter roles. For example, Pöllabauer (2004) examined interpreter roles in the context of asylum hearings, using a discourse analysis approach to examine these roles in recorded authentic asylum hearings. Her study implies that 'interpreters in asylum hearings frequently assume discrepant roles which may at times be determined by the perceived expectations of the officers in charge, and that these roles are not clear-cut' (p. 143). A main issue examined in court interpreting practice research is interpreter neutrality, or the interpreter's influence on the communication in the hearing. Ng (2016) examined how the court interpreter's role changes when the interpreter initiates turns with a speaker during court proceedings. Ng's study depended on audio-recorded authentic courtroom data taken from a trial in the High Court of Hong Kong. The recorded data was analyzed by identifying the total turns of the first prosecution witness and the interpreter. The interpreter-initiated turns in the first prosecution witness's examination were found to have the following purposes: seeking confirmation and clarification, coaching the witness, responding to the witness's question, prompting the witness, informing the court of the need to finish an interrupted interpretation, and pointing out a speaker's mistake. In light of her results, Ng states that:


As in any monolingual communication, problems of communication such as nonresponsive, ambiguous or unclear answers, do arise from time to time in interpreter-mediated interactions, and thus the need for clarifications is sometimes unavoidable. It is therefore unrealistic to suggest that interpreters should under no circumstances clarify with the speaker. However, clarifications by the interpreter with primary interlocutors not speaking each other’s language can be a very complicated issue. … [a]ny intervention by the interpreter, no matter how brief it may be, inevitably excludes the participation of the non-comprehending court actors, who may be left to wonder what is going on between the interpreter and the witness. This may also adversely impact on the evaluation of the competence of the interpreter and the trustworthiness of the witness as noted above. With this in mind, interpreter intervention such as prompting the witness or asking the witness for further information should be avoided where possible. It is therefore essential that student interpreters are taught when and how to intervene. (pp. 35–36)

In another relevant study, Defrancq and Verliefde (2017) examined interpreter-mediated paternalistic interaction in a Belgian judge-centered courtroom, where the judge-centered legal system follows 'inquisitorial proceedings, in which interaction between the parties is kept to a strict minimum. Cases are handled according to a set of conventions favoring long uninterrupted turns by the prosecutor, the solicitor and the judge, in which they all speak about the defendant, rather than address the defendant directly' (p. 209). They collected recorded data of a suspect's hearing in which an experienced court interpreter provided consecutive interpreting. Based on their analysis of the audio-recorded data, Defrancq and Verliefde conclude that:

The paternalistic participation framework seems to prompt various strategies by the interpreter, leading her to disregard major aspects of the code of ethics she works by. First, she sets up a separate participation framework with the defendant as the addressee of the interpretation (the 'interpreter's dyad'), systematically using the deictic coordinates of this framework in presenting the court's interaction. Second, she tends sometimes to position herself in the role of principal, arguably as a result of the dyad arrangement. Finally, though interpretation is required only for the defendant, the latter's French is occasionally interpreted into Dutch for the court—sometimes at the interpreter's own initiative, possibly to protect the interests of the defendant in response to a verbal challenge from the judge. (p. 209)

7.5.4 Police Investigative Interview Interpreting

Police investigative interview interpreting is related to court interpreting in that both concern the legal system, but the two fields are distinct. The past two decades have seen the publication of monographs and edited volumes dealing specifically with police investigative interview interpreting (e.g., Berk-Seligson 2009; Mulayim et al. 2015; Nakane 2014). Those interpreting police interviews have to perform a number of interpreting tasks. Ortega Herráez and Foulquié Rubio (2008) explain these tasks as follows:

The intervention of an interpreter is required in many scenarios other than just in detainees' questioning: transcription-translation of tapped telephone conversations, interpreting for crime victims, translation-data analysis during police investigations, provision of information to people reporting a crime, etc. Given such a wide range of functions, it is clear that interpreters may find themselves in situations that conflict with what is supposed to be
their prescribed role… [A]nd this creates numerous problems in aspects such as interpreter intervention, the interpreter’s role as cross-cultural and language mediator and the adequate provision of interpreting services (p. 123)

Some studies of police interpreting practice have focused on surveying interpreters' perceived roles. In the Spanish police context, Ortega Herráez and Foulquié Rubio (2008) used questionnaires and interviews to explore the views of service providers and interpreters on interpreter roles in police investigative interviews and the conflicts in such roles. In a recently published study, Howes (2019) investigated interpreters' perceptions of their roles in police interviews. Twenty community interpreters with experience in police interview interpreting were interviewed, and Howes relied on thematic analysis of the interview transcripts. In a study with a different methodological orientation, Kredens (2016) explored the views of police officers and practising interpreters on some ethics-related interpreting situations. His study was conducted in the legal system of England and Wales; 23 interpreters and 22 police officers took part. Based on his interpreting work experience in various public service settings, Kredens created six scenarios, each involving a potentially difficult ethical issue. The interpreter and police officer participants' task was to reflect on these scenarios and on what the interpreter should do in them. Below are two examples of these scenarios (Scenarios 1 and 5):

Scenario 1
A man suspected of murder is being interviewed by the police. He denies any involvement in the crime. The interviewing police officer leaves the room for two minutes. The man becomes agitated and tells you, “Look, it was an accident. I only wanted to scare her. I'm not guilty”. The officer comes back with his coffee. What do you do? (p. 70)

Scenario 5
At a police station in an Eastern European country a young man on a stag-night trip from England is being interviewed following a street brawl which he had apparently initiated.
A police officer tells him that he faces a prison sentence, but adds that 'there's another way of dealing with this situation' and leaves the room for a short time. You are aware that the young man has just been invited to offer a bribe, but he has no idea this is the case. What do you do? (p. 72)

Other police interpreting studies have depended on recorded observational data to examine the discourse features in the interpreted parts. The two studies reported by Nakane (2007, 2009) represent this trend. Relying on recorded police interview data from two drug trafficking cases in Melbourne, Nakane (2007) investigated the problems interpreters experience in communicating suspects' rights. In analyzing her data, Nakane focused on three main discourse aspects: the 'problematic turn construction; the treatment of a follow-up comprehension check question; and the interference of interpreters' understanding of the rights of suspects' (p. 87). In a later study, Nakane (2009) examined interpreters' use of repairs in interactional police interviews. Conversational repairs are defined by Wong (2000) as the communicative 'efforts to deal with any problems in speaking, hearing or understanding of the talk,' and they include 'confirmation checks, clarification requests, restatements, repetitions,
understanding checks' (p. 247). To examine these oral discourse features, Nakane analyzed three recorded interpreter-mediated interviews from a drug-smuggling case at a Melbourne police station.

7.5.5 Interpreting in War-Related Conditions

Compared to the above interpreting fields, scarce research has explored interpreting practices in war-related conditions, mainly because the interpreters working in such conditions are difficult to access. One of these scarce studies was reported by Rosendo and Muñoz (2017), who investigated the practices of locally recruited interpreters in war-related scenarios in the Middle East. They identified four types of interpreters working in such conditions: military language specialists, local interpreters recruited by the military, UN language assistants, and staff or freelance conference interpreters. In profiling the roles of the first three types, Rosendo and Muñoz depended on the conceptual narrative approach, defined as the 'stories and explanations that scholars in any field elaborate for themselves and others about their object of enquiry' (Baker 2006, 39). As for the fourth type (i.e., staff or freelance conference interpreters), they profiled their roles drawing on a questionnaire completed by eight international organization-affiliated staff and freelance interpreters who had interpreting experience in conflict zones in the Middle East. The 32 questions in the questionnaire are divided into four sections covering background information, the interpreters' work, their training, and protection and work acceptance. Below are some interesting insights Rosendo and Muñoz summarized based on the interpreters' responses to their questionnaire:

[M]ost of the interpreters… believe that when working in a conflict they are both cultural and linguistic mediators. As such, they consider that interpreters must have specific knowledge of the cultures of the languages they are interpreting….
They also underscore that knowledge of the cultural context prevents errors from arising due to ignorance of the local culture and traditions; that being able to interpret body language and other non-verbal communicative gestures can resolve misunderstandings in a tense situation; that in conflict situations interpreters must give greater priority to accuracy…. they consider that not just any interpreter is able to work in conflicts in the first place. When asked why, they reply that interpreters in conflicts must be people who: can control their emotions and remain calm and discreet at all times; are able to manage personal frustration and disappointment; have experience of interpreting and a very acute awareness of the realities of the conflict; are able to establish limits in order to avoid merely being used as an instrument by one of the parties; are strong and healthy; master their working languages perfectly and are well prepared for specific situations; are sensitive to cultural traditions and religious protocol; and are firm with regard to working conditions and team safety. Likewise, psychological fortitude is highlighted, as interpreting in a conflict can be a traumatic and stressful experience. (p. 193)

In another study on interpreting in war-related scenarios, this time in the Darfur region of Western Sudan, Ali et al. (2019) looked at the challenges encountered by interpreters working for UN peace-keeping missions. They specifically examined the
linguistic, socio-cultural, and communication barriers interpreters encounter and their coping strategies. To explore these issues, they conducted semi-structured interviews with 20 interpreters.

7.5.6 Telephone Interpreting

Another interpreting field that has received little research attention is telephone interpreting, and the few studies investigating it are survey-based. Wang (2017) surveyed telephone interpreting practices and perceptions in Australia, using a questionnaire completed by 465 interpreters across the country. Her 29-item questionnaire explored interpreters' perceptions of telephone interpreting use, their views on its necessity, accuracy, and remuneration, and their suggestions for improving its quality. She organized her questionnaire data according to the following themes: telephone interpreting experience, working on a casual basis for multiple employers, interpreting hours, the amount of telephone interpreting work, topics of telephone interpreting assignments, and situations and clients for which telephone interpreting is inappropriate. Wang's study revealed important insights about telephone interpreting practices. For example, the respondents reported that telephone interpreting is unlikely to be effective in the following situations:

1. Conversations with high emotional content … 2. Conversations about life or death … 3. Legal settings such as courts, tribunals, detention centres, police interviews, and immigration interviews centre … 4. Medical settings, especially mental health consultations … 5. Other highly complex matters such as interviews … 6. Scenarios requiring the interpreter's sight translation of documents such as medical consent forms, court orders and immigration letters … 7. Lengthy sessions such as court trials … 8. Situations with poor audibility or inappropriate equipment … 9. Communication involving more than two clients (e.g. tribunal hearings, group meetings) where it is difficult to manage turn-taking … 10. Communication involving substantial visual information. (p. 107)

Fernández and Ouellet (2018) also used a questionnaire, in their case to identify the differences and similarities between telephone interpreting practices in Sweden and Spain. Their questionnaire was completed by 34 telephone interpreters working for two interpreting service providers in the two countries. Based on the interpreters' responses to the background section of the questionnaire, Fernández and Ouellet classified them into three types: seasoned telephone interpreters (more than 5000 hours of work experience), advanced telephone interpreters (between 3000 and 5000 hours), and novice telephone interpreters (less than 3000 hours). The two researchers organized the interpreters' responses to the main survey questions according to this classification. These main questions were designed to elicit the respondents' perceptions of the most and least problematic telephone interpreting settings, together with qualitative narratives about the difficulties they had experienced and the strategies they used to cope with them. Below are three exemplary narrative questions from Fernández and Ouellet's 9-item questionnaire (questions 1, 2, and 3):


1. Regarding the services mentioned in the previous question (health, health emergency, other emergencies, social services and care, police, domestic violence, insurance companies, local councils, tourism, other-specify-):
– Which of these is more challenging, and more difficult for interpreting on the phone? (you can mention more than one);
– Could you explain why?
– Which factors make interpreting the interaction more difficult in that service?
2. Regarding your personal experience with telephone interpreting:
– Could you share with us your worst work experience, when telephone interpreting was at its most difficult? (You can mention more than one experience);
– Could you explain what made interpreting so difficult?
3. Regarding your personal experience with telephone interpreting:
– Could you share with us your most rewarding telephone interpreting experience, when interpreting was at its smoothest? (You can mention more than one experience of smooth interaction);
– Could you explain what made telephone interpreting smoother and easier? (p. 44)

7.5.7 Sign Language Interpreting

The professional practices of sign language interpreters have not received adequate research attention either. As might be expected, the available studies in this field are survey-based; the lack of discourse analysis studies seems to result from the difficulty of analyzing sign language discourse features. Napier et al. (2017) reported an international survey study on the difficulties sign language interpreters experience when working remotely via video link. The respondent interpreters reported that improvements to these services should address 'ineffective video interpreting policies, poor public awareness and lack of training' (p. 1). Mendoza (2012) surveyed sign language interpreters' perceptions of the ethical issues in their profession. According to Mendoza:

Many signed language interpreter organizations have ethical codes that their members must follow. The World Association of Sign Language Interpreters (2008) lists several signed language interpreters' ethical codes. Finnish, Australian, Kenyan, Irish, Canadian, and Philippine sign language interpreters' codes of ethics all include themes of confidentiality, business practices, appropriate compensation, interpreting accuracy, respect for consumers, discretion in accepting jobs, and impartiality. (p. 60)

Mendoza’s study explored how expert and novice sign language interpreters in the USA make ethical decisions in work situations and the type(s) of knowledge they use

144

7 Researching Professional Translator/Interpreter Experiences …

for making them. She collected her data using a questionnaire and interviews, along with analyzing the documents used in sign language interpreting in the USA. The questionnaire was completed by 393 participants (225 novice interpreters and 168 expert ones) and covered the following ethical areas: confidentiality, impartiality (i.e., neutrality), professional conduct (i.e., having the required skills and discretion when performing work tasks), and commitment to the guidelines of business practices. As for the interviews, these were conducted with three novice and three expert interpreters, and they addressed the interpreters' perceptions, explanations and practices, and the processes they used in ethical situations. The interviews were guided by nine questions, five of which are given below:

1. Describe a recent interpreting situation where you felt you had to make a decision that involved ethical issues related to confidentiality, impartiality, professionalism, and/or business practices.
2. What triggered the acknowledgment that this was an ethical dilemma?
3. What made the situation ethically challenging?
4. How did you feel about this ethical issue?
5. Please describe the process you went through in resolving the dilemma. (p. 72)

Research on sign language interpreting varies from one educational stage to another. Unlike the pre-university stages, higher education institutions have attracted little research on sign language interpreting. Powell (2013) tried to fill this contextual research gap by investigating the experiences of sign language interpreters in post-secondary education in New Zealand. She used the case study method, focusing on two interpreters and collecting data from them over four weeks using a mixed-methods design.
Her data sources included: a review of New Zealand's sign language interpreting documents; a questionnaire on the interpreters' demographic data and their perceptions of important career and work relationship issues; direct observations of the two interpreters' techniques and strategies while working in a team situation; and in-depth interviews with them about interpreting in the post-secondary education context, the issues noted in the documents reviewed, their questionnaire responses, and the observations. The themes that emerged from Powell's data relate to the following main issues: '(1) the uniqueness of post-secondary level educational interpreting, (2) the value of reflection on practice, (3) the strength of commitment to sign language interpreting, (4) the nature of sign language interpreters' professional identities and, (5) the usefulness of professional development' (p. 301). The study also revealed some work-related concerns in sign language interpreting in New Zealand, including: the interpreters' ability to cope with the speed of information transfer, the status of the profession, the lack of supervision and planning policies, and working in less than ideal circumstances in some places.

7.6 Conclusion

As has been noted above, professional translator/interpreter experiences and roles research has explored important issues related to technology use in translation and interpreting, the correlates of translator/interpreter competence, and the practices and roles of translators and interpreters in various workplaces. Table 7.1 summarizes these research areas and the issues investigated in each. Overall, more research has been reported on interpreter practices and roles than on translator ones. What is important about the professional translator/interpreter experience and role research conducted so far is that it can stimulate other research topics which remain to be explored in future studies. In other words, such research has built part of the professional translator/interpreter experience profile, and more studies are needed to complete it. This is particularly clear in the studies researching translator/interpreter technology use and profiling interpreter practices and roles. Meanwhile, more research effort is needed to investigate the correlates of translating/interpreting competence and to profile professional translator practices, two areas that have not yet received adequate attention.

Table 7.1 Overview of the research areas and issues in professional translator/interpreter experiences and roles studies (research areas and the main issues researched so far)

Translator/interpreter use of technology:
• Translator use of CAT tools
• Translator use of machine translation software
• Translator use of resourcing tools and platforms
• Interpreters' use of technology

Correlates of translator/interpreter competence:
• Sociodemographic characteristics
• Psychological dimensions
• Job satisfaction

Profiling translator practices and roles:
• Translation practices in a particular country/region
• Translation practices in a particular field

Profiling interpreter practices and roles:
• Conference interpreting
• Healthcare interpreting
• Court interpreting
• Police investigative interview interpreting
• War-related condition interpreting
• Telephone interpreting
• Sign language interpreting

Future studies surveying translation and interpreting practices need to research these practices from both country-specific and cross-cultural angles. As noted above, country-specific surveys have been very limited (e.g., Kafi et al. 2018), and there have been only a few cross-cultural research attempts in the four areas reviewed. There is therefore a need to address these contextual and comparative research gaps. On the one hand, country-specific surveys could reveal important information about professional translator/interpreter workplace dynamics and training needs in different countries. On the other hand, more cross-cultural surveys could provide better comparative perspectives showing how translation and interpreting are practiced in various parts and regions of the world. Finally, the lack of research on audiovisual translation practices is clearly noted. Since this field is growing steadily, we need to know more about the dynamics of creating the different types of audiovisual translated products and the roles assigned to the team members taking part in this process. The insights gained from such research can have important implications for training students attending audiovisual translation courses or programmes.

References

Ali, H., A. Alhassan, and I. Burma. 2019. An investigation into the interpreters' challenges in conflict zones: The case of Darfur region in Sudan. Arab World English Journal 3 (3): 37–50. https://doi.org/10.24093/awejtls/vol3no3.3.
Alonso, E. 2015. Analysing the use and perception of Wikipedia in the professional context of translation. The Journal of Specialised Translation 23: 89–116.
Arumí Ribas, M. 2017. The fuzzy boundary between the roles of interpreter and mediator in the public services in Catalonia: Analysis of interviews and interpreter-mediated interactions in the health and educational context. Across Languages and Cultures 18 (2): 195–218. https://doi.org/10.1556/084.2017.18.2.2.
Athorp, C., and B.T. Downing. 1996. Modes of doctor–patient communication: How interpreter roles influence discourse. Paper presented at the 1996 Annual Conference of the American Association for Applied Linguistics, Chicago, March 1996.
Atkinson, D.P. 2012. Freelance translator success and psychological skill: A study of translator competence with perspectives from work psychology. Doctoral dissertation, University of Auckland, Auckland.
Baker, M. 2006. Translation and conflict: A narrative account. London: Routledge.
Berk-Seligson, S. 2009. Coerced confessions: The discourse of bilingual police interrogations. Berlin: Walter de Gruyter.
Bolden, C.B. 2000. Toward understanding practices of medical interpreting: Interpreters' involvement in history taking. Discourse Studies 2 (4): 387–419.
Bontempo, K., and J. Napier. 2011. Evaluating emotional stability as a predictor of interpreter competence and aptitude for interpreting. Interpreting 13 (1): 85–105. https://doi.org/10.1075/intp.13.1.06bon.
Braun, S. 2018. Video-mediated interpreting in legal settings in England: Interpreters' perceptions in their sociopolitical context. Translation and Interpreting Studies 13 (3): 393–420. https://doi.org/10.1075/tis.00022.bra.
Bundgaard, K., and T.P. Christensen. 2019. Is the concordance feature the new black? A workplace study of translators' interaction with translation resources while post-editing TM and MT matches. The Journal of Specialised Translation 31: 13–37.
Chang, C., M.M. Wu, and T.G. Kuo. 2018. Conference interpreting and knowledge acquisition: How professional interpreters tackle unfamiliar topics. Interpreting 20 (2): 204–231. https://doi.org/10.1075/intp.00010.cha.
Christensen, T.P., and A. Schjoldager. 2017. Computer-aided translation tools—The uptake and use by Danish translation service providers. The Journal of Specialised Translation 25: 89–105.
Davidson, B. 2000. The interpreter as institutional gatekeeper: The social-linguistic role of interpreters in the Spanish–English medical discourse. Journal of Sociolinguistics 4 (3): 379–405.
Defrancq, B., and S. Verliefde. 2017. Interpreter-mediated "paternalistic" interaction in a judge-centered courtroom: A case study from a Belgian correctional court. Interpreting 19 (2): 209–231. https://doi.org/10.1075/intp.19.2.03def.
Diamond, L.C., M. Moreno, C. Soto, and R. Otero-Sabogal. 2012. Bilingual dual-role staff interpreters in the health care setting: Factors associated with passing a language competency test. International Journal of Interpreter Education 4 (1): 5–20.
Ehrensberger-Dow, M. 2014. Challenges of translation process research at the workplace. MonTI: 355–383. https://doi.org/10.6035/MonTI.2014.ne1.12.
Fernández, E.E., and M. Ouellet. 2018. From the phone to the classroom: Categories of problems for telephone interpreting training. The Interpreters' Newsletter 23: 19–44.
Fulford, H., and J. Granell-Zafra. 2005. Translation and technology: A study of UK freelance translators. The Journal of Specialised Translation 4: 2–17.
Gaspari, F., H. Almaghout, and S. Doherty. 2015. A survey of machine translation competences: Insights for translation technology educators and practitioners. Perspectives 23 (3): 333–358. https://doi.org/10.1080/0907676X.2014.979842.
Gentile, P. 2018. Through women's eyes: Conference interpreters' self-perceived status in a gendered perspective. Hermes—Journal of Language and Communication in Business 58: 19–42.
Han, C. 2016. A survey to profile conference interpreting practice in China. Interpreting 18 (2): 259–272. https://doi.org/10.1075/intp.18.2.05han.
Heinisch, B., and K. Iacono. 2019. Attitudes of professional translators and translation students towards order management and translator platforms. The Journal of Specialised Translation 32: 61–89.
Howes, L.M. 2019. Community interpreters' experiences of police investigative interviews: How might interpreters' insights contribute to enhanced procedural justice? Policing and Society 29 (8): 887–905. https://doi.org/10.1080/10439463.2018.1447572.
Jiang, H. 2013. The interpreter's glossary in simultaneous interpreting: A survey. Interpreting 15 (1): 74–93. https://doi.org/10.1075/intp.15.1.04jia.
Kafi, M., M. Khoshsaligheh, and M.R. Hashemi. 2018. Translation profession in Iran: Current challenges and future prospects. The Translator 24 (1): 89–103. https://doi.org/10.1080/13556509.2017.1297693.
Kredens, K. 2016. Conflict or convergence? Interpreters' and police officers' perceptions of the role of the public service interpreter. Language and Law 3 (2): 65–77.
Lagoudaki, E. 2006. Translation memories survey 2006. Translating and the Computer 28: 1–29. http://mt-archive.info/Aslib-2006-Lagoudaki.pdf.
Leanza, Y. 2005. Roles of community interpreters in pediatrics as seen by interpreters, physicians and researchers. Interpreting 7 (2): 167–192. https://doi.org/10.1075/intp.7.2.03lea.
Li, D. 2006. Translators as well as thinkers: Teaching of journalistic translation in Hong Kong. Meta: Translators' Journal 51 (3): 611–619.
Marin-Lacarta, M., and M. Vargas-Urpi. 2018. Translators revising translators: A fruitful alliance. Perspectives: Studies in Translation Theory and Practice 27 (3): 404–418. https://doi.org/10.1080/0907676X.2018.1533569.
Martikainen, L., P. Karkkola, and M. Kuittinen. 2018. Encountering change: Job satisfaction of sign language interpreters in Finland. International Journal of Interpreter Education 10 (2): 43–57.
Mellinger, C.D., and T.A. Hanson. 2018. Interpreter traits and the relationship with technology and visibility. Translation and Interpreting Studies 13 (3): 366–392. https://doi.org/10.1075/tis.00021.mel.
Mendoza, E. 2012. Thinking through ethics: The processes of ethical decision making by novice and expert American Sign Language interpreters. International Journal of Interpreter Education 4 (1): 58–72.
Moorkens, J., A. Toral, S. Castilho, and A. Way. 2018. Translators' perceptions of literary post-editing using statistical and neural machine translation. Translation Spaces 7 (2): 240–262.
Mulayim, S., M. Lai, and C. Norma. 2015. Police investigative interviews and interpreting: Context, challenges, and strategies. Boca Raton, FL: CRC Press.
Muñoz Miquel, A. 2018. Differences between linguists and subject-matter experts in the medical translation practice: An empirical descriptive study with professional translators. Target: International Journal of Translation Studies 31 (1): 24–52. https://doi.org/10.1075/target.14130.mun.
Nakane, I. 2007. Problems in communicating the suspect's rights in interpreted police interviews. Applied Linguistics 28 (1): 87–112. https://doi.org/10.1093/applin/aml050.
Nakane, I. 2009. The myth of an 'invisible mediator': An Australian case study of English–Japanese police interpreting. PORTAL Journal of Multidisciplinary International Studies 6 (1): 1–16. https://doi.org/10.5130/portal.v6i1.825.
Nakane, I. 2014. Interpreter-mediated police interviews: A discourse-pragmatic approach. London: Palgrave Macmillan.
Napier, J., R. Skinner, and G.H. Turner. 2017. "It's good for them but not so for me": Inside the sign language interpreting call centre. Translation & Interpreting 9 (2): 1–23.
Ng, E. 2016. Interpreter intervention and participant roles in witness examination. International Journal of Interpreter Education 8 (1): 23–39.
Nisbeth Brøgger, M. 2017. When translation competence is not enough: A focus group study of medical translators. Meta 62 (2): 396–414. https://doi.org/10.7202/1041030ar.
Ortega Herráez, J.M., and A.I. Foulquié Rubio. 2008. Interpreting in police settings in Spain: Service providers' and interpreters' perspectives. In Crossing borders in community interpreting: Definitions and dilemmas, ed. C. Valero Garcés and A. Martin, 123–146. Amsterdam: John Benjamins.
Ortega Herráez, J.M., M.I. Abril, and A. Martin. 2009. A comparative study of interpreters' self-perception of role in different settings. In Quality in interpreting: A shared responsibility, ed. S. Hale, U. Ozolins, and L. Stern, 149–167. Amsterdam: John Benjamins.
Penn, C., and J. Watermeyer. 2014. Features of cultural brokerage in interpreted child psychiatry interactions: A case of paradoxical practice. The Interpreter and Translator Trainer 8 (3): 354–373. https://doi.org/10.1080/1750399X.2014.968994.
Pöllabauer, S. 2004. Interpreting in asylum hearings: Issues of role, responsibility and power. Interpreting 6 (2): 143–180. https://doi.org/10.1075/intp.6.2.03pol.
Powell, D. 2013. A case study of two sign language interpreters working in post-secondary education in New Zealand. International Journal of Teaching and Learning in Higher Education 25 (3): 297–304.
Rosendo, L.R., and M.B. Muñoz. 2017. Towards a typology of interpreters in war-related scenarios in the Middle East. Translation Spaces 6 (2): 182–208. https://doi.org/10.1075/ts.6.2.01rui.
Rossi, C., and J. Chevrot. 2019. Uses and perceptions of machine translation at the European Commission. The Journal of Specialised Translation 31: 177–200.
Sakamoto, A. 2017. Professional translators' theorising patterns in comparison with classroom discourse on translation: The case of Japanese/English translators in the UK. Meta 62 (2): 271–288. https://doi.org/10.7202/1041024ar.
Sakamoto, A., and M. Foedisch. 2017. "No news is good news?": The role of feedback in the virtual-team-style translation production network. Translation Spaces 6 (2): 333–352. https://doi.org/10.1075/ts.6.2.08sak.
Schwenke, T., J. Ashby, and P. Gnilka. 2014. Sign language interpreters and burnout: The effects of perfectionism, perceived stress, and coping resources. Interpreting 16 (2): 209–232. https://doi.org/10.1075/intp.16.2.04sch.
Seeber, K.G., L. Keller, R. Amos, and S. Hengl. 2019. Expectations vs. experience: Attitudes towards video remote conference interpreting. Interpreting 21 (2): 270–304. https://doi.org/10.1075/intp.00030.see.
Swabey, L., T. Agan, C. Moreland, and A.M. Olson. 2016. Understanding the work of designated healthcare interpreters. International Journal of Interpreter Education 8 (1): 40–56.
Vandal-Sirois, H. 2016. Advertising translators as agents of multicultural marketing: A case-study-based approach. Perspectives: Studies in Translation Theory and Practice 24 (4): 543–556. https://doi.org/10.1080/0907676x.2015.1119863.
Vargas-Urpi, M. 2016. Problems and strategies in public service interpreting as perceived by a sample of Chinese–Catalan/Spanish interpreters. Perspectives 24 (4): 666–678. https://doi.org/10.1080/0907676X.2015.1069861.
Vargas-Urpi, M. 2019. Sight translation in public service interpreting: A dyadic or triadic exchange? The Interpreter and Translator Trainer 13 (1): 1–17. https://doi.org/10.1080/1750399X.2018.1503834.
Vranjes, J., H. Bot, K. Feyaerts, and G. Brône. 2019. Affiliation in interpreter-mediated therapeutic talk: On the relationship between gaze and head nods. Interpreting 21 (2): 220–244. https://doi.org/10.1075/intp.00028.vra.
Wang, J. 2017. 'Telephone interpreting should be used only as a last resort.' Interpreters' perceptions of the suitability, remuneration and quality of telephone interpreting. Perspectives: Studies in Translation Theory and Practice 26 (1): 100–116. https://doi.org/10.1080/0907676X.2017.1321025.
Warnicke, C., and C. Plejert. 2016. The positioning and bimodal mediation of the interpreter in a Video Relay Interpreting (VRI) service setting. Interpreting 18 (2): 198–230. https://doi.org/10.1075/intp.18.2.03war.
Wilson-Stronks, A., and E. Galvez. 2007. Hospitals, language, and culture: A snapshot of the nation. Oakbrook Terrace, IL: The Joint Commission.
Wong, J. 2000. Delayed next turn repair initiation in native/non-native speaker English conversation. Applied Linguistics 21 (2): 244–267.
World Association of Sign Language Interpreters. 2008. Code of ethics. www.wasli.org/CodeofEthics.htm.
Zhan, C., and L. Zeng. 2017. Chinese medical interpreters' visibility through text ownership: An empirical study on interpreted dialogues at a hospital in Guangzhou. Interpreting 19 (1): 97–117. https://doi.org/10.1075/intp.19.1.05zha.

Chapter 8

Advancing Translator and Interpreter Education Research

Abstract In this final chapter, the author summarizes the main subareas of the six translator and interpreter education research areas discussed in the book and diagnoses the current status of translator and interpreter education research. Following this, the author provides three main suggestions for advancing translator and interpreter education research: addressing thematic and contextual research gaps, developing research practices and adopting methodological borrowing, and establishing specialized research journals and centers. The issues related to these three suggestions are discussed.

Keywords Translation research · Interpreting research · Translator and interpreter education · Translator and interpreter education research · Translator training research · Methodological borrowing

8.1 The Current Status of Translator and Interpreter Education Research

In this book, the six main types of translator and interpreter education research have been discussed. In the previous six chapters, the author has provided exemplary studies representing each of these research types, referring to their methodological features and, in some cases, showing how the data was collected. Table 8.1 provides a summary of all the translator and interpreter education research areas and subareas discussed in this book. From the review given in the book, we now have a clear picture of the progress made in translator and interpreter education research and its areas. As has been seen, translator and interpreter education research has grown in the past 15 years in particular, a growth reflected in the increasing number of published studies and books addressing these research areas. The progress made so far, however, varies from one area to another. For example, the translation/interpreting product area has not received considerable attention yet. The same applies to some research subareas in translation/interpreting assessment (e.g., test validation, and performance rubric and psychological scale development) and learning and teaching practices (e.g., trainer education and classroom practices). The translation process and professional translator/interpreter practice areas are no exception either.

© Springer Nature Singapore Pte Ltd. 2020
M. M. M. Abdel Latif, Translator and Interpreter Education Research, New Frontiers in Translation Studies, https://doi.org/10.1007/978-981-15-8550-0_8

Table 8.1 Overview of translator and interpreter education research areas and subareas (main research areas and the subareas researched so far)

Translator/interpreter training experimentation:
• Technology-based training
• Process-based training
• Corpus-based training
• Profession-oriented training
• Project-based learning training
• Research-oriented training
• Miscellaneous training types
• Prescriptive training

Translation/interpreting learning and teaching practices:
• Country-specific translator and interpreter education policy
• Translation and interpreting programme evaluation
• Translation and interpreting trainees' needs analysis
• Translation and interpreting trainee performance variables
• Translation and interpreting classroom practices
• Translation and interpreting trainer education

Translation/interpreting assessment:
• Country-specific translation and interpreting assessment practices
• Translation/interpreting test validation
• Source text difficulty level
• Performance rubric development
• Rating practices and testing conditions
• Translation/interpreting motivational scale development
• User evaluation/reception

Translation and interpreting processes:
• Translation process (whole process, using resources, revision and monitoring)
• Interpreting process (profiling strategies and researching a particular strategy type)

Translation and interpreting products:
• Translation and interpreting quality
• Linguistic and pragmatic features
• Prosodic features in interpreting

Professional translator/interpreter experiences and roles:
• Translator/interpreter use of technology
• Correlates of translator/interpreter competence
• Translator practices and roles
• Interpreter practices and roles

Overall, the current status of translator and interpreter education research with its six areas can best be described as 'developing' or 'maturing'. Many research gaps remain to be explored. Meanwhile, methodological shortcomings are noted in much


of the relevant research conducted so far. Thus, there is a need for improving the research practices in the target field. In the next three sections, some suggestions for advancing translator and interpreter education research are provided.

8.2 The Need for Addressing Research Gaps

It was noted in the previous chapters that many gaps are yet to be addressed in the majority of the subareas within the main translator and interpreter education research areas (see Table 8.1). As mentioned in the conclusion sections of some chapters, a key step in filling these research gaps is to borrow from other fields. For example, researchers interested in translation/interpreting assessment can experiment with the more developed research ideas found in language testing research, a broader field. Likewise, researchers interested in translation/interpreting learning and teaching practices could find a wider range of mature research topics in the language education, TESOL, and applied linguistics fields. There are also some language- and genre-specific gaps that need to be addressed in future research. Specifically, some languages and genres have received little attention in the translator and interpreter education research published in English. With regard to languages, most published research has covered Chinese, English, and Western European languages, whereas Arabic, Russian, Indian languages, and the languages of the African continent have been less researched. Similarly, the less-researched genres include literary and scientific translation/interpreting; legal translation/interpreting and medical interpreting have received much more attention than the other genres. Addressing the noted gaps also requires balancing translation and interpreting studies within the translator and interpreter education research areas and subareas. It is noted, for instance, that interpreting has been more researched in the assessment and professional experience and role areas, whereas translation has received more attention in the process and learning/teaching practice areas. Thus, due attention should be given to addressing these field-related gaps.

8.3 The Need for Using More Rigorous Research Designs

In many of the translator and interpreter education research areas and subareas reviewed in this book, methodological shortcomings have been noted at the data collection and analysis levels. In a large number of training experimentation studies, for instance, trainees' performance was assessed in a rather non-standardized way. The research attempts made in some learning/teaching practice, assessment, and professional experience subareas also lack standardization. Therefore, there is a need for using more rigorous designs in future related research.

Once again, this can be accomplished through methodological borrowing from related fields. In the translation and interpreting process research area, there is a notable case of methodological borrowing from writing process and oral communication research, respectively. With such borrowing, no methodological gaps seem to exist between these pairs of close fields (translation process versus writing process research, and interpreting process versus oral communication research). As a result, the translation and interpreting process area seems, methodologically speaking, much more developed and mature than the other translator and interpreter education research areas. Similar methodological borrowing attempts accordingly need to be made in the other research areas.

8.4 The Need for Establishing Specialized Research Journals and Centers

Advancing translator and interpreter education research also requires establishing research journals and centers specialized in some of its areas. For example, a newly established journal such as Translation, Cognition & Behavior could contribute significantly to further developments in translation and interpreting process research. Research in the other areas could also be fostered by launching specialized journals covering, for instance, one of the following scopes: translation/interpreting assessment, translation/interpreting classroom practices, community interpreting, or professional translator practices. Likewise, translator and interpreter education research would benefit greatly from establishing centers specialized in these areas. Such specialized research journals and centers would bring about important developments in translator and interpreter education research and help disseminate a more mature research culture.

Glossary of Translator and Interpreter Education Research

Back-translation research instrument validation A method commonly used for validating translated research instruments. It involves three steps: translating the target research instrument from a source text, the reverse translation of the translated instrument into the source language, and comparing the reverse-translated text with the source one to make sure they do not differ.

Bidirectional translators Those who can translate from their L1 into L2 and from L2 into L1.

Classroom practices research The studies dealing with what actually takes place in translation and interpreting classes.

Cognitive translatology The study of the cognitive processes involved in translating a text from the source language into the target one.

Computer-keystroke logging A data source used in translation research. It involves observing and analyzing online translation processes by recording computer screen activities.

Data triangulation Using more than one data source to study the same variable(s).

Discourse-based interpreting process modeling Using the discourse analysis approach in analyzing interpreting process data so as to model the strategies used by the interpreter in particular situations or fields.

Employability needs analysis The needs analysis type aiming at assessing how well a translator/interpreter education programme prepares students for the labor market. It is normally based on employers' and professionals' views.

Eye-tracking A procedure for recording research participants' eye movements and fixations so as to gain insights into their cognitive processes when looking at particular content.

Interpreting accreditation tests The tests assessing candidates' interpreting competence and their ability to join the labor market as interpreters. These tests are normally conducted by official organizations.

Interpreting strategies The processes and actions interpreters use to communicate the source text message effectively or to cope with a problem while performing interpreting tasks.

Learner performance variables research The studies concerned with exploring the predictors or correlates of translation and interpreting students' performance.

Needs analysis The process of identifying translation and interpreting students' needs so as to consider them while developing their training programme.

Prosodic features in interpreter performance Prosodic features include pauses, intonation, speech rate and segmentation, accentuation and stress, and fundamental frequency.

Public service (community) interpreting A term describing the interpreting work done in hospitals, courtrooms, government offices, and educational institutions.

Retrospective interviews When using retrospective interviews in translation or interpreting process studies, researchers ask participants about the cognitive strategies used in the task they have performed. To stimulate participants' retrospective accounts, the researcher normally uses a type of performance data (e.g., the translated text, audio-recorded interpreting data, or a computer screen recording).

Retrospective translation process questionnaires A questionnaire type which includes a number of items tapping the strategies used in the different stages of performing a translation task.

Source text difficulty research The studies assessing the difficulty or complexity level of the text to be translated or interpreted.

Text ownership A term describing a case in which an interpreter adds personal ideas or knowledge in a turn. This discourse feature is regarded as an aspect of interpreter visibility.

Trainer education research The studies addressing translation and interpreting trainers' preparation, needs, and pedagogical beliefs and practices.

Translation/interpreting aptitude/admission tests The tests assessing or predicting applicants' ability to complete a particular translator/interpreter education programme. Normally, passing this type of test is a condition for admitting the applicant to the target programme.

Translation/interpreting assessment research The studies addressing translation and interpreting assessment issues, including: translation and interpreting test/scale development and validation, quality assessment and user expectations and evaluation, inter-rater reliability, source text difficulty, and trainer assessment literacy.

Translation for language learning training A translation training type which aims at fostering students' language learning or development.

Translation/interpreting assessment rubrics The rubrics developed for rating translator or interpreter performance. These rubrics depend on scoring guides, and they can be either holistic or analytic.

Translation/interpreting learning and teaching practices research The research evaluating translation and interpreting learning and teaching practices. The evaluated issues are not limited to curriculum or training programme delivery but also include other dimensions such as learning or teaching experiences.

Translation/interpreting motivational scales The psychological scales developed for assessing translator or interpreter motivational beliefs and perceptions. The translation/interpreting motivational scales developed so far are only concerned with translator or interpreter self-ability beliefs (i.e., their self-efficacy and self-concept).

Translation/translator platforms The websites and web-based systems used by translators to facilitate or customize their workflow (translation platforms), and to enable them to network with each other, follow the latest field updates or reach work clients (translator platforms).

Translation/interpreting process research The studies dealing with the mental processes and cognitive problems involved in translation and interpreting.

Translation/interpreting product research The studies profiling and analyzing the linguistic and discoursal features and/or errors in texts rendered from one language into another.

Translation/interpreting quality research The research concerned with identifying the accuracy of translated or interpreted texts and the potential errors in them.

Translation/interpreting professional experiences research The studies addressing issues related to professional translator and interpreter experiences and perceptions, including translator and interpreter job roles and tasks, work habits, and what facilitates or hinders their work.

Translation/interpreting test validation The process of developing a particular translation/interpreting test and ensuring it assesses what it claims to measure.

Translation/interpreting training effectiveness or experimentation research The studies experimenting with or prescribing particular pedagogical techniques or syllabi.

Translator/interpreter action research training studies The studies in which the teacher systematically observes and assesses the impact of translation or interpreting training during several stages of the research.
Translator/interpreter corpus-based training The training aiming at exposing trainees to a particular type of texts and getting them to observe and discuss particular features in them as an approach to improving their translation or interpreting performance. Translator/interpreter education policy research The studies concerned with describing translator and interpreter education policies and the status quo of training policies in a particular country or context. Translator/interpreter education programme evaluation research The studies evaluating translator and interpreter education programmes. They can focus on evaluating a number of programmes, one programme, or a particular dimension in the programme. Translator/interpreter education research The research relevant to the process of understanding the translation/interpreting trainees’ and practitioners’ performance, difficulties, needs, and experiences.

158

Glossary of Translator and Interpreter Education Research

Translator/interpreter experiential situated learning training The training type that involves trainees in direct or real practical experiences in translation and interpreting workplaces. Translator/interpreter pre-post-training assessment studies The translation and interpreting studies testing trainees’ performance before and after the instructional experiment. Translator/interpreter prescriptive training A research-driven framework or model proposed for training translators or interpreters. In other words, it aims at telling trainers how to train translators and interpreters. Translator/interpreter process-based training The training studies focusing primarily on developing translator/interpreter performance through raising their awareness of effective translation/interpreting processes or improving some aspects in such processes. Translator/interpreter post-training assessment studies The studies translation and interpreting studies relying on assessing trainees’ performance after the training only. Translator/interpreter profession-oriented training The training that aims at raising trainees’ awareness of their future career conditions and requirements. Translator/interpreter professional awareness-raising training The training type aiming at empowering translation and interpreting trainees and helping them to be aware of and experience their future workplace requirements. Translator/interpreter technology-based training The training that aims at helping trainees make use of a technological environment or digital tool to improve their translation or interpreting competences or complete related tasks efficiently. Translator/interpreter simulated situated learning training The training involving trainees in simulated translation and interpreting work experiences. Translator/interpreter project-based learning training A structural approach to translation and interpreting students’ training. 
It is implemented over a long time and depends on engaging students in pursuing solutions to problems and communicating and reporting findings to others. Translator/interpreter research-oriented training A training type aiming at developing translation and interpreting trainees’ research skills and knowledge. Translator/interpreter technology use research The studies concerned with investigating translator/interpreter use of computer-assisted translation (CAT) and other technological tools and applications, and their perceptions of working in technology-mediated environments. Unidirectional translators Those who can translate from their L2 into L1 only. User evaluation/reception research The studies investigating user perceptions of the translation/interpreting services. These studies address issues such as user evaluation of translator/interpreter performance services, or their preferences, expectations, and cognitive processing of translated and interpreted products.
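Several of the assessment-related entries above (assessment rubrics, assessment research, test validation) turn on quantifying agreement between raters, i.e. inter-rater reliability. The sketch below is not from the book: it is a minimal Python illustration of Cohen's kappa, a standard chance-corrected agreement statistic for two raters, applied to hypothetical band scores assigned to ten translations.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters
    who scored the same set of translations with categorical bands."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: proportion of translations scored identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement, from each rater's marginal distribution.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical band scores ("A"-"D") from two raters for ten translations.
a = ["A", "B", "B", "C", "A", "D", "B", "C", "C", "A"]
b = ["A", "B", "C", "C", "A", "D", "B", "B", "C", "A"]
print(round(cohens_kappa(a, b), 3))  # prints 0.722
```

A kappa of 1 indicates perfect agreement and 0 indicates agreement no better than chance; studies with more than two raters typically report Fleiss' kappa or Krippendorff's alpha instead.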

Author Index

A
Abdel Latif, Muhammad M. M., 1, 2, 5, 13, 39, 61, 73, 85, 87, 88, 111, 125, 151
Abuín González, 101
Ahrens, 97, 119, 120
Angelelli, 9, 63, 65, 66, 68, 70
Arhire, 117
Arumí Ribas, 87, 99, 100, 133, 136
Atkinson, 130

B
Bachman, 63
Baker, 1, 141
Barik, 7, 69, 86, 112, 120
Bartłomiejczyk, 28, 75, 98
Berk-Seligson, 113, 139
Bernardini, 87
Bowker, 29, 76
Brindley, 46
Brislin, 7, 66
Bundgaard, 92, 94, 127
Burn, 14, 113

C
Campbell, 67, 116
Chen, 16, 63, 101–103
Chesterman, 3
Christensen, 74, 92, 94, 127
Christoffels, 97
Colina, 66, 69, 71
Crezee, 20, 21, 113

D
Dam, 101, 118
Davidson, 49, 134
Defrancq, 102, 118, 119, 139
de Lima Fonseca, 95
Dong, 19, 88, 97, 100, 101, 117, 118
Dragsted, 90, 91

E
Ehrensberger-Dow, 89, 125, 126
Englund Dimitrova, 88

F
Fernández, 18, 76, 92, 142
Ferreira, 8, 96

G
Galán-Mañas, 14–16, 23, 25, 30
Gerver, 7
Gile, 2, 3, 7, 74, 97, 98, 101, 115

H
Hale, 43, 44, 66, 67, 75, 114, 116
Hansen, 8, 16, 87
Haro-Soler, 14, 27, 50
Hatim, 14
Hirci, 92
Hlavac, 62–64
Holmes, 3, 4
Hubscher-Davidson, 49
Hurtado Albir, 15, 16, 30, 85, 90
Hvelplund, 88, 92, 93

K
Kalina, 87, 97, 98

© Springer Nature Singapore Pte Ltd. 2020 M. M. M. Abdel Latif, Translator and Interpreter Education Research, New Frontiers in Translation Studies, https://doi.org/10.1007/978-981-15-8550-0


Kelly, 1, 2
Kiely, 42
Kiraly, 2, 14, 25, 27, 73
Ko, 14, 16
Kohn, 98
Kopczyński, 69, 112
Krawutschke, 2
Krings, 7, 86
Kruger, 96, 118

L
Lee, 17, 52, 70–73, 113
Li, 2, 8, 9, 19, 24–27, 45–48, 63, 77, 86, 97, 103, 131, 132

M
Maddux, 14, 19
Martellini, 120
Mellinger, 96, 97, 105, 129
Moghaddas, 26, 47
Moser, 7, 97
Mulayim, 72, 139
Munby, 45

N
Nakane, 139–141
Napier, 65, 130
Nord, 112

O
Ortega Herráez, 134, 139, 140

P
Pacte Group, 90, 91
Paneth, 7
Pinto, 47
Pöchhacker, 2
Popovič, 8
Pym, 1, 5, 27, 47

R
Rossi, 128
Russo, 28, 74

S
Sakamoto, 125, 129
Sales, 47
Sawyer, 2, 61, 64
Schaeffer, 93, 94, 96
Schmit, 8, 30
Schwieter, 8, 96
Shlesinger, 24

T
Talaván, 29
Tebble, 30
Tennent, 2
Timarová, 50, 62, 64, 65
Tsagari, 2

V
Van Besien, 103, 104
Vandepitte, 3, 27
Vargas-Urpi, 3, 133, 134

W
Waddington, 69
Wang, 4, 9, 50, 94, 142
Wu, 44, 50, 54, 67, 135

Y
Yan, 4, 5, 9, 40, 50

Z
Zhang, 4, 25, 54, 71
Zheng, 77

Subject Index

A
Accreditation tests, 63, 72, 155
Action research, 14, 15, 20, 27, 28, 52, 157
Anticipation, 85, 98, 100, 101, 104, 105
Aptitude/admission tests, 62, 64, 65, 79, 156
Audiovisual translation, 16, 24, 29, 44
Authentic data, 126
Authenticity, 63

B
Back-translation, 64, 66, 79, 155
Bidirectional translators, 91, 155

C
CAT tools, 24, 53, 127, 128, 145
Classroom practices, 39, 40, 51–55, 152, 155
Cognitive translatology, 86, 105, 155
Community interpreting, 3, 41, 62, 63, 97, 134, 154
Computer-assisted translation, 17, 127, 158
Computer-keystroke logging, 86–88, 95, 155
Corpora, 6, 20, 22, 29, 31, 112, 117, 128
Corpus-based training, 13, 16, 20–22, 31, 152, 157
Court interpreting, 74, 114, 134, 138, 139, 145
Cross-cultural survey, 126, 146
Curriculum, 3–5, 8, 39–47, 49, 53, 54, 61, 156

D
Data sources, 6, 9, 13–15, 21, 24, 25, 28, 29, 31, 39, 40, 43, 45–48, 51, 53, 62, 74, 76, 86, 87, 93, 96, 101, 105, 126, 135, 144, 155
Data triangulation, 14, 40, 89, 105, 155
Delayed post-assessment, 15
Discourse-based interpreting process modelling, 155
Dubbing, 76

E
Education policy research, 157
Employability needs analysis, 48, 55, 155
Errors, 5–7, 17, 18, 20, 22, 51, 69, 70, 79, 94, 102, 103, 105, 111–117, 120, 121, 141, 157
Experiences and roles, 1, 6, 125, 145, 152
Experiential situated learning, 23, 31, 158
Experimentation research, 5, 13, 14, 32, 157
Explicitation, 100, 101, 103, 105, 111, 118, 121
Eye-tracking, 6, 76, 77, 86–96, 104, 136, 137, 155

F
Feedback provision, 32, 39, 51, 52

H
Healthcare interpreting, 4, 21, 134, 136, 137, 145
Historical developments, 1, 7, 42

I
Interpreter process, 97, 99, 158
Interpreting in war-related conditions, 141
Interpreting processing, 101, 104
Interpreting quality, 21, 70–72, 75, 111–114, 117, 121, 152, 157
Interpreting strategies, 19, 20, 85, 87, 97–102, 105, 155
Inter-rater reliability, 5, 70, 71, 79, 156

J
Journals, 2, 4, 8, 9, 15, 19, 20, 26, 27, 32, 42, 72, 78, 151, 154

L
Language and information resources, 91
Learner performance, 4, 5, 40, 49, 51, 156

M
Machine-translated texts, 94, 95
Machine translation, 17, 29, 31, 52, 53, 55, 76, 79, 94, 95, 127, 128, 145
Macro approach, 40, 43, 46, 85, 90, 97
Micro approach, 44, 47, 55, 97
Mixed-method, 14, 19, 40, 43, 44, 49, 54, 135
Motivational scales, 61, 62, 73, 78, 79, 152, 157

N
Needs analysis, 4, 39, 40, 45–49, 55, 152, 155, 156
Note-taking, 19, 31, 45, 85, 87, 88, 99–102, 105

O
Omission, 7, 21, 22, 70, 103, 112–117, 134

P
Pedagogical beliefs, 53–55, 156
Peer evaluation, 72
Personality variables, 49
Police investigative interview interpreting, 113, 139, 145
Post-training assessment studies, 14, 158
Pre-post training assessment, 14, 158
Prescriptive training, 29, 30, 152, 158
Process-based training, 13, 16, 17, 30, 31, 152, 158
Professional awareness-raising training, 158
Professional experiences, 5, 6, 24, 44, 153, 157
Profession-oriented training, 5, 13, 16, 22–24, 31, 152, 158
Programme evaluation, 8, 39, 40, 42, 43, 45, 55, 152, 157
Project-based learning training, 13, 16, 25, 31, 152, 158
Prosodic features, 20, 111, 112, 119–121, 152, 156
Public service interpreting, 134, 136

R
Raters, 51, 70–72, 74, 79
Readability, 67, 68, 76, 79
Reflective essays, 50, 67, 86, 96, 97
Research gaps, 54, 99, 144, 146, 151–153
Research-oriented training, 13, 16, 26–28, 31, 152, 158
Retrospective interviews, 77, 86–89, 94, 100–102, 156
Retrospective questionnaire, 87, 92, 99

S
Screen recording, 18, 87, 91–93, 156
Self-assessment, 15, 16, 28, 31, 52, 73, 79
Self-repair, 71, 85, 101–103, 105
Sign language interpreting, 50, 51, 53, 65, 66, 70, 74, 77, 79, 115, 129, 134, 143–145
Situated learning training, 23–25, 31, 158
Source text difficulty, 5, 66–68, 79, 111, 116, 121, 152, 156
Substitution, 22, 98, 103, 112, 116, 117
Subtitling, 25, 29, 76, 77, 79
Surveying, 40, 44, 48, 61, 62, 78, 126, 140, 142, 143, 146

T
Teaching practices, 1, 5, 6, 8, 9, 28, 39, 40, 54, 55, 152, 153, 156
Technology-based training, 13, 16, 17, 31, 152, 158
Technology use research, 6, 158
Telephone interpreting, 114, 142, 143, 145
Test validation, 63, 64, 79, 151, 152, 157
Text ownership, 138, 156
Think-aloud method, 18, 86, 87
Trainer education, 39, 40, 53–55, 152, 156
Translation for language learning, 29, 31, 156
Translation platforms, 128, 129, 157
Translation process problems, 95
Translation revision, 18, 19, 31, 85, 93, 105
Translator/interpreter competence, 125, 126, 130, 145, 152
Translator platforms, 128, 129, 157
Translator practices and roles, 131, 145, 152
Translator process, 87
Typology, 1, 3, 137

U
Unidirectional translators, 89, 91, 158
User evaluation, 61, 62, 74–76, 78, 79, 112, 152, 158
User reception, 79