The Second International Symposium on Signed Language Interpretation and Translation Research: Selected Papers (ISBN 1944838511, 9781944838515)

English · Pages: 202 [203] · Year: 2020



The Second International Symposium on Signed Language Interpretation and Translation Research

Melanie Metzger and Earl Fleetwood, Editors

Volume 1   From Topic Boundaries to Omission: New Research on Interpretation
Volume 2   Attitudes, Innuendo, and Regulators
Volume 3   Translation, Sociolinguistic, and Consumer Issues in Interpreting
Volume 4   Interpreting in Legal Settings
Volume 5   Prosodic Markers and Utterance Boundaries in American Sign Language Interpretation
Volume 6   Toward a Deaf Translation Norm
Volume 7   Interpreting in Multilingual, Multicultural Contexts
Volume 8   Video Relay Service Interpreters
Volume 9   Signed Language Interpreting in Brazil
Volume 10  More Than Meets the Eye
Volume 11  Deaf Interpreters at Work
Volume 12  Investigations in Healthcare Interpreting
Volume 13  Signed Language Interpretation and Translation Research
Volume 14  Linguistic Coping Strategies in Sign Language Interpreting
Volume 15  Signed Language Interpreting in the Workplace
Volume 16  Here or There
Volume 17  Professional Autonomy in Video Relay Service Interpreting
Volume 18  The Second International Symposium on Signed Language Interpretation and Translation Research

The Second International Symposium on Signed Language Interpretation and Translation Research: Selected Papers

Danielle I. J. Hunt and Emily Shaw, Editors

Gallaudet University Press

Washington, DC

Studies in Interpretation
A Series Edited by Melanie Metzger and Earl Fleetwood

Gallaudet University Press
Washington, DC 20002
http://gupress.gallaudet.edu

© 2020 by Gallaudet University
All rights reserved. Published 2020
Printed in the United States of America

ISBN (casebound) 978-1-944838-51-5
ISBN (ebook) 978-1-944838-52-2
ISSN 1545-7613

This paper meets the requirements of ANSI/NISO Z39.48-1992 (Permanence of Paper).

Cover design by Jordan Wannemacher.

Contents

Preface
  Danielle I. J. Hunt and Emily Shaw  vii

Chapter 1  Introducing Research to Sign Language Interpreter Students: From Horror to Passion?
  Annemiek Hammer, Jan Nijen Twilhaar, and Beppie van den Bogaerde  3

Chapter 2  Interpreting in Ghana
  Elisa Maroney, Daniel Fobi, Brenda Puhlman, and Carolina Mmbro Buadee  20

Chapter 3  The Role of French Deaf Translators, Case Study: The Paris Attacks, November 13, 2015
  Aurélia Nana Gassa Gonga  36

Chapter 4  Use of Haptic Signals in Interaction With Deaf-Blind Persons
  Eli Raanes  58

Chapter 5  Overlapping Circles or Rather an Onion: The Position of Flemish Sign Language Interpreters Vis-à-Vis the Flemish Deaf Community
  Eline Devoldere and Myriam Vermeerbergen  80

Chapter 6  Striking a Cognitive Balance: Processing Time in Auslan-to-English Simultaneous Interpreting
  Jihong Wang  108

Chapter 7  Examining the Acoustic Prosodic Features of American Sign Language-to-English Interpreting
  Sanyukta Jaiswal, Eric Klein, Brenda Nicodemus, and Brenda Seal  132

Chapter 8  Reframing the Role of the Interpreter in a Technological Environment
  Erica Alley  147

Chapter 9  Deaf Employees’ Perspectives on Signed Language Interpreting in the Workplace
  Paul B. Harrelson  164

Contributors  181
Index  183

Preface

Danielle I. J. Hunt and Emily Shaw

Over a beautiful spring weekend in 2017, more than 260 people from countries such as Ghana, Norway, Belgium, Sweden, Brazil, Panama, Haiti, and the United States converged at Gallaudet University to discuss current research in translation and interpreting studies. The second Signed Language Interpretation and Translation Research Symposium, hosted by the Department of Interpretation and Translation’s (DoIT) Center for the Advancement of Interpretation and Translation Research (CAITR), brought together researchers, students, educators, and practitioners alike. The mission of CAITR is to cultivate activities that advance knowledge about signed language interpreting and translation research and its effect on communication for Deaf individuals. The purpose of the symposium was to promote the exchange of scholarship on these topics as well as to provide a platform for interdisciplinary research across various fields, including linguistics, communication, sociology, psychology, anthropology, and education. The symposium not only showcased researchers who examine interpretation and translation from different theoretical frameworks, but also provided a rare opportunity for individuals from different cultural backgrounds to network and exchange ideas.

This event was the second of its kind (the first symposium was held in 2014) but stood out compared to its predecessor. Triple the number of proposals were submitted for this event, and attendees were not disappointed. After 3 days, registrants departed having seen 36 presentations and 32 posters on topics such as prosody, workplace interpreting, trust, social and pragmatic considerations, and linguistic flexibility, to name a few.

Keynote speakers included Dr. Robert Adam of the United Kingdom, Dr. Beppie van den Bogaerde of the Netherlands, and Dr. Xiaoyan Xiao of China. On the first day, Dr. van den Bogaerde, an English-Dutch interpreter and linguist, presented “Introducing Research to Sign Language Interpreter Students: From Horror to Passion?,” which is featured as the first chapter in this volume of selected works. Dr. Xiaoyan Xiao presented “Sign Language on Chinese TV: Awareness and Access, but Still Missing the Mark” on the second day, where she discussed the difficulties of TV news interpreting in China. To begin the final day of the symposium, Dr. Adam summarized events from the Deaf Interpreter Summit held before the second symposium and then presented research related to Deaf practitioners in his presentation “Mind the Gap: What Is Missing for Deaf Interpreters and Translators?”

A call for papers was announced at the conference requesting that presenters submit manuscripts based on their talks. Several presenters submitted, and nine of these papers were selected and are presented here.

In Chapter 1, “Introducing Research to Sign Language Interpreter Students: From Horror to Passion?,” Annemiek Hammer, Jan Nijen Twilhaar, and Beppie van den Bogaerde focus on one interpreter education program, at Hogeschool Utrecht, University of Applied Sciences in the Netherlands, and its effort to infuse research into the undergraduate level of interpreting education. The chapter provides critical information about the foundation of this program, the content of its research curriculum, and directions for future applications of research-based interpreter education. Several ideas for next steps are provided, including training interpreter educators in the research process so they can better instruct students in it.

The next chapter, “Interpreting in Ghana,” highlights current practices among interpreters as they navigate the relatively new interpreting profession in that country. Elisa Maroney, Daniel Fobi, Brenda Puhlman, and Carolina Mmbro Buadee collected surveys, conducted interviews, and documented observations of 13 individuals engaged in interpreting work to determine what interpreters are currently doing as well as how they entered the profession.
Several efforts are underway to promote the education of more interpreters, as there is an increasing demand for skilled professionals to serve deaf people in Ghana.

In the third chapter, “The Role of French Deaf Translators,” Aurélia Nana Gassa Gonga provides a fascinating account of the current state of Deaf translators, with special focus placed on their work in France after the Paris attacks of November 13, 2015. The events that occurred on this date were newsworthy; many television channels interrupted their programming to report on them. However, these programs were not closed captioned and did not include sign language interpreters on screen. Born of necessity, Deaf translators created a live Facebook page to report on the events directly, and then also to interpret and translate news items via hearing interpreters and Deaf translators. In addition to analyzing the contents of the Facebook page, interviews with two of the Deaf translators helped to uncover the strategies they used to translate the content.

In Chapter 4, “Use of Haptic Signals in Interaction with Deaf-Blind Persons,” Eli Raanes describes the form and interactive function of haptic signals from two interpretations involving deaf-blind consumers. The interpreted interactions were analyzed for their content, and then, during postinteraction interviews, the deaf-blind participants revealed strong recollections of the setup of the room and the contributions of the respective participants. This chapter underscores the interactive benefits of haptic signals and sheds light on a crucial specialty in interpreting.

To address a perennial issue concerning the composition of deaf communities, Eline Devoldere and Myriam Vermeerbergen, in Chapter 5, “Overlapping Circles or Rather an Onion: The Position of Flemish Sign Language Interpreters Vis-à-Vis the Flemish Deaf Community,” explore where hearing sign language interpreters fit. Given recent shifts in the sociocultural and political landscape in Flanders, sign language interpreters are beginning to work more frequently in a wider variety of settings, and Deaf people have played a central role in selecting interpreters. Interviews and surveys of Deaf Flemish people were used to explore the notion of membership in the Deaf community and to see whether hearing interpreters were included in that group.

In Chapter 6, “Striking a Cognitive Balance: Processing Time in Auslan-to-English Simultaneous Interpreting,” Jihong Wang looks at 30 professional Auslan–English interpreters’ use of time lag when interpreting a formal Auslan speech into English at a national conference. She focused on two types of Auslan sentences: those containing numbers near or at the end and those ending with negation.
Quantitative results were discussed using representative examples to illustrate how time lag was closely related to effective interpreting strategies, and qualitative results supported these findings. As expected, interpreters’ time lag varied from person to person and from place to place, but excessively long or short time lags proved to be problematic. This chapter provides much-needed empirical evidence for observations that have often been expressed only through anecdote.

One goal of interpreting is effectively producing a message in English that portrays the emotional affect of an American Sign Language (ASL) text. In Chapter 7, “Examining the Acoustic Prosodic Features of American Sign Language–to–English Interpreting,” Sanyukta Jaiswal, Eric Klein, Brenda Nicodemus, and Brenda Seal look at transmitting congruent vocal prosody while interpreting by focusing on emotionally flat and emotionally dynamic narratives. For the first time, evidence of objective measures of vocal equivalence of affect and emotion represented visually in ASL is operationalized, measured, and analyzed using fundamental frequency, which represents pitch. Analyses of the data revealed the expected variability in the means, standard deviations, and ranges of fundamental frequency required for emotionally flat and emotionally rich ASL in most, but not all, of the 32 interpreting samples.

For years, many interpreters have been working in video relay service (VRS) settings. In Chapter 8, “Reframing the Role of the Interpreter in a Technological Environment,” Erica Alley uses elements of grounded theory to examine the work of VRS interpreters in relation to the federal and corporate constraints that govern their actions, to discover how these constraints affect the VRS workplace. In-depth interviews were conducted with 20 ASL-English interpreters experienced in working in the VRS setting, and the data were analyzed for patterns using the constant comparative method.

Deaf consumers’ viewpoints about interpreting are invaluable. In the final chapter’s qualitative descriptive study, “Deaf Employees’ Perspectives on Signed Language Interpreting in the Workplace,” Paul Harrelson examines the experiences of Deaf people in the workforce, specifically focusing on Deaf white-collar workers who use ASL as their dominant language and who use signed language interpreter–mediated communication in the workplace. Two focus groups with a total of eight Deaf individuals were conducted in Washington, DC, with midcareer professionals working in the federal government.
Two overarching findings were that the Deaf workers expressed generally high levels of satisfaction with interpreter-mediated communication in the workplace and that Deaf workers are actively engaged in interpreting processes at every level to maximize satisfaction.

Each chapter included in this volume contributes to an international view of translation and interpreting studies. We believe that the research presented here captures the breadth of topics and depth of knowledge on display at the symposium, and although it is just a sample of what was available, this collection is a testament to the progress our field is making. There is a persistent need for advancing the profession of interpretation and translation in Deaf communities. By considering the work interpreters and translators produce every day and critically examining the impact of this work on Deaf people, we can continue to push forward meaningful change and promote the equal rights of all Deaf and deaf-blind people around the world.

View Signed Chapter Summaries

Visit the Gallaudet University Press YouTube Channel to view signed summaries of the chapters at www.youtube.com/GallaudetUniversityPress. Under Playlists, click “The Second International Symposium on Signed Language Interpretation and Translation Research: Chapter Summaries.” Ebook readers can click this link for instant access: http://ow.ly/svxX50yE1kg


The Second International Symposium on Signed Language Interpretation and Translation Research

Chapter 1

Introducing Research to Sign Language Interpreter Students: From Horror to Passion?

Annemiek Hammer, Jan Nijen Twilhaar, and Beppie van den Bogaerde

The earliest research on sign language (SL) interpreting dates to the mid-1970s (Roy & Napier, 2015). The focus was first on the interpreting process itself and its many facets, both in a linguistic and a communicative sense, and later on the role of the interpreter and the ethical and social implications of interpreted interactions. The results of early research did not, as a matter of course, reach the practices of interpreter education; the translation from theory to practice was not commonly made. This was partly due to the fact that interpreter educators were usually practitioners themselves, often without a research background.

Many interpreters, educators, and researchers have stressed the importance of professionalizing educators in SL interpreting programs and developing evidence-based curricula (e.g., European Forum of Sign Language Interpreters [efsli], 2013a, 2013b; Hale & Napier, 2013; Janzen, 2005; Monikowski, 2017; Wadensjö, Dimitrova, & Nilsson, 2007; Winston, 2013). It is hard for working SL interpreters (SLIs) to obtain a PhD degree, and yet this academic level is needed to train future interpreters with research skills. Monikowski (2013) described how difficult it is to balance a career as a practitioner and a researcher. She sought an explanation for the lack of embedded research in teaching programs in the strict tenure-track system in academia and also indicated that there is “not much research to support our pedagogical approaches, our curricula or our course development” (Monikowski, 2013, p. 9). Sometimes research that exists gets lost or is not used, as Winston described (2013, p. 178). Even if we have research on a particular issue, we still need knowledge translation from theory to practice, defined by Winston (2013) as “the capturing of practitioner wisdom and knowledge to apply methodologically sound research approaches, thus translating it into evidence-based practice” (p. 179).

Today, fortunately, we see a steady increase in interpreting research by practitioner researchers (or researcher practitioners). These researchers usually do not have a targeted degree in interpreting or translation but come from different disciplines, for instance, linguistics, psychology, sociology, anthropology, and ethnology. This rich background forms a perfect basis from which to start to embed research in our curricula. One of the biggest challenges now is to incite enthusiasm for research and a critical attitude in our students. By developing intriguing courses and by challenging our students to continuously ask questions regarding why things are as they are, why we do things a certain way, and how we can improve our practice, we can continue to raise the quality of interpreter training. Therefore, we need to educate our interpreters in research skills. So how do we make them experts in practice-based research (Andriessen & Greve, 2014)?

We are aware that there is a lot we do not know about teaching SL to hearing students in interpreting programs, a lot we do not know about the most effective interpreting techniques and skills, and that we have not yet studied enough how to actually pace SL teaching or interpreting education (but see McKee, Rosen, & McKee, 2014; Napier & Leeson, 2016; Napier, McKee, & Goswell, 2010). There is a wealth of practical knowledge and experience in the field, but there are only a few descriptions of how to actually shape SLI education (efsli, 2013a, 2013b). To our knowledge, no studies exist on how and to what extent research should be implemented into the curriculum. It should be implemented in such a way that, after graduation from an interpreter program, young professionals can conduct research on their own and have the ability to improve their professional functioning and the field of practice as a whole.
In this chapter, we wish to explore and describe the implementation of a research curriculum within the SL interpreting program of Hogeschool Utrecht, University of Applied Sciences in the Netherlands. We developed a 4-year learning program directed toward developing research skills at the undergraduate level. At present, our educators feel an urgent need to embed research in their SLI programs with two goals: first, to firmly base their teaching on the available research, and second, to teach future interpreters how to continuously improve their practice through research. The purpose of this chapter is to present our program, given this perceived urgency and the poor documentation of learning programs that focus on research skills.

The outline of this chapter is as follows. We will start with a brief overview of how research became adopted in undergraduate education. Then, we will outline the setup of our research curriculum, including levels of proficiency and the content of the curriculum. We will end this chapter with an evaluation of our research curriculum and future developments.

Evolving Research in Professional Education

The need for SLI programs to become more firmly based on objective evidence and to deliver reflective professionals follows a broad trend in higher education. This development started some 20 years ago and is rooted in evidence-based medicine, which became well known during the 1990s. Evidence-based medicine was translated to evidence-based practice (EBP) in subsequent years (i.e., the ability of the professional to incorporate research evidence in professional decision making; Sackett, Rosenberg, Gray, Hynes, & Richardson, 1996). EBP greatly influenced the professions that involve interpersonal relations, such as social work, psychiatry, psychology, and education, where objective evidence needs to be interpreted in the face of individual values. It has received a great deal of attention in the literature.

The movement toward evidence-based education started around the same time. Evidence-based education is aimed at designing educational programs and teaching methods that are shown to be effective through systematic research (Davies, 1999). It shares with EBP the premise that decisions (in education) should be based on research, rather than on the intuitions of individual educators. Numerous academic papers can be found on the subject (e.g., Hammersley, 2007; McMillan & Schumacher, 2010). Both movements call for educators to use research to inform their teaching practice and for professionals to use research to inform their professional decisions. As such, the focus on research in the education of professionals already has some history.

At the level of educational policy, top-down movements forced higher vocational education to adopt research in its curricula. The Netherlands has a binary system of higher education, which distinguishes between professional education (universities of applied sciences) and scientific education (research universities). As a direct result of the 1999 European Union Bologna Declaration, most European universities adopted the bachelor/master system (Bologna Declaration, 1999). One outcome of the Bologna Declaration was that vocational/professional programs, which were traditionally practice focused in nature, had to implement research-related subjects into their curricula. It also meant that a professional master's degree (and in some cases PhD programs) had to be developed. The implications of the Bologna Declaration are taken further in the Dublin Descriptors (“Shared ‘Dublin’ Descriptors,” 2004), where research is explicitly mentioned for the first time:

The word “research” is used to cover a wide variety of activities, with the context often related to a field of study; the term is used here to represent a careful study or investigation based on a systematic understanding and critical awareness of knowledge. The word is used in an inclusive way to accommodate the range of activities that support original and innovative work in the whole range of academic, professional and technological fields, including the humanities, and traditional, performing, and other creative arts. It is not used in any limited or restricted sense, or relating solely to a traditional “scientific method.” (p. 4)

It was only in 2009 that research became obligatory for undergraduate students at universities of applied sciences1 in the Netherlands (Brancheprotocol Kwaliteitszorg Onderzoek [BKO], 2009). Obligatory means that research was admitted as one of the tasks of the university of applied sciences by law (“Wet op het hoger onderwijs,” 1992).

[Research] at these universities are rooted in professional context. Research questions are formulated by professional practice in profit and non-profit sectors. Research outcomes aim at knowledge, insights and products that contribute to the solution of the problems observed in daily practice and/or to the development of daily practice. (translated from BKO, 2009, p. 4)

This definition goes beyond the professionals’ ability to use research in daily practice; it indicates that professionals have to perform research themselves. The reason for this is that societal issues are increasingly complex, and professionals are key to finding innovative solutions to face these complexities. Research is fundamental to innovation. So, while EBP embarks professionals on a lifelong learning journey and a continuous search for new evidence in their field, educational policy requires professionals to add to the body of evidence in their field as well.

To come to an agreement on the level of research ability in professional education, Andriessen and Greve (2014) start by defining research as the act of methodologically answering predefined questions to develop new knowledge (p. 4). New knowledge can be new to the student or the client, new to a particular discipline, or new to the world, and these levels are the first dimension that can be used to demarcate the type of research that we expect from professional undergraduate students (Andriessen & Greve, 2014, p. 4). A second dimension is the extent to which the results can be generalized to other situations or people. Both dimensions are illustrated in Figure 1.1. We will come back to this point later in this chapter.

To educate professionals who are able both to perform their profession and to do research poses serious challenges at the level of curriculum design. First, a gap is observed between research and educators in traditional vocational higher education without a research background. Many of these educators are unaware of the state of the art in their subject, which limits the role of research in shaping curriculum design and content. Particularly in traditional vocational higher education, most educators were drawn from the professional field, sometimes without specific training in research. To encourage a more academic culture, educators in the Netherlands are obliged to obtain a master’s degree if they do not hold such a degree yet. In addition, professors and those with PhDs are appointed at the higher educational institutes to develop research programs, to set up courses directed at research skills, and to assist educators in implementing research skills in their courses. These efforts have resulted in a general appreciation of research skills by educators in higher vocational education, where the use of research skills was formerly disputed by them.

[Figure 1.1. Three levels of professional research in higher professional education (based on Butter, 2013). The figure relates two dimensions of research complexity in higher professional education: to whom the knowledge is new (the student, the client, the discipline, the world) and external validity (not generalizable beyond the situations or people studied; generalizable to a small, well-defined group; generalizable to a wide group), mapping the professional bachelor, professional master, and PhD levels onto these dimensions.]

Nevertheless, academic culture is only slowly evolving, probably due to the lack of PhDs in the field of SL interpreting in the Netherlands (see also van den Bogaerde, 2013). The associate professors at our institute are responsible for the professionalization of our educators, with the support of the full professor of Deaf studies.

A second challenge is that evidence is lacking on how to approach research skills in curriculum design. There are guidelines formulated by the Ministry of Education, Culture, and Science, such as that teaching research skills should be incorporated into the regular program as much as possible or that research tasks should be included in real-life assignments (Andriessen & Greve, 2014), but systematic observations of outcomes involving the students’ progression of research abilities are not available. All in all, higher vocational institutions in the Netherlands have formulated ambitious goals regarding research skills for professionals yet struggle with how to implement research in bachelor’s programs.

Against this backdrop of evolving research in professional education, we started to embed research more firmly in our SLI curriculum beginning in 2010 (Nijen Twilhaar, 2010), based on a research curriculum used elsewhere in our university (Raamontwerp, 2006). Like other SLI educators at that time (e.g., Monikowski, 2013), we had to start with limited research showing how to tackle this task. We argued that, if one wants to measure outcomes over time in the development of student research skills, one needs to define the levels of research skills students must attain. Thus, we defined levels of research abilities for our bachelor’s program. In addition, we redefined professional competencies to include research abilities as an elementary skill for future SLIs.

Evolving Research in the SLI Program

Revisiting Professional Competencies

We devised interpreter competence schemas with descriptors that are linked to relevant courses in the program. Our 4-year bachelor’s program for SLIs is based on the concepts of competency-based learning and training. According to this educational model, learning is aimed at the achievement of professional knowledge and skills at a predefined level of proficiency. As such, both the curriculum and assessment are organized around the students’ professional outcomes and their progress along a series of milestones to achieve these outcomes (Eraut, 1994; Lizzio & Wilson, 2004). These have helped us streamline the various professional skills needed in interpreter education, as concerns language, linguistics, interpreting, and interpersonal skills (IETB, 2013).

Based on national and international professional standards (e.g., Nederlandse Beroepsvereniging Tolken Gebarentaal, efsli), we formulated seven competencies for SLIs: (a) interpersonal competencies, (b) organizational competencies, (c) competent to collaborate with colleagues, (d) competent to collaborate with clients and their environment (e.g., Deaf community), (e) competent in reflection and development, (f) competent in interpreting techniques and skills, and (g) vocational competencies. Of interest here is the competence scheme “competent in reflection and development,” which describes yearly goals regarding research abilities.

To set up our yearly goals, we used Dreyfus’s model (2004) of adult skill acquisition (Figure 1.2). In this model, the proficiency levels go from novice (level 1, corresponding to the first year), to advanced beginner (level 2, corresponding to the second and third years), to competent (level 3, corresponding to the fourth year). Within each level, the essential learning outcomes and behavioral indicators (i.e., operational descriptions of these outcomes) are listed. The competence scheme is presented later in Figure 1.3. As can be seen from Figure 1.3, research abilities are ordered from simple (i.e., focus on raising awareness of the need for research and developing a critical mind) to more complex (i.e., focus on how to apply research to be innovative in professional life). With this ordering, we follow Bloom’s taxonomy of cognitive learning (Anderson & Krathwohl, 2001). In addition, research abilities are ordered in relation to a student’s

Figure 1.2.  Dreyfus's model of skill acquisition (2004). [Figure: five proficiency levels — Novice, Advanced Beginner, Competent, Proficient, Expert — with performance developing from decomposed to holistic and from analytical to intuitive.]

COMPETENCY IN REFLECTION AND DEVELOPMENT

The interpreters in Nederlandse Gebarentaal (NGT; Sign Language of the Netherlands) are responsible for their further development and professionalization. They regularly reflect upon their professional attitudes and professional proficiency and endeavor to keep their professional practice up to date and to improve it (evidence-based practice). They name the norms, values, and opinions that form the core of their interpreting work. They are well able to identify their strong and weak points and work on their further development in a planned way. They make use of working groups in association and in-service training to promote their own development.

Level 1: The students recognize the importance of doing research themselves and of developing their views and actions. They study and formulate these, among other things, by a strength/weakness analysis, evaluation, reflection, and feedback.

Indicators: the student . . .
- describes his strong and weak points.
- reflects on own behavior and takes feedback from others into consideration.
- knows how to indicate which aspects of own competencies can be improved.
- explores and describes developments of the profession.
- becomes aware of the influence of norms and values in the relation between people.

Level 2: The students evaluate their work and views, assisted by colleagues and supervisors. They connect theory and practice. The students recognize the importance of self-reflection on their professional skills.

Indicators: the student . . .
- follows developments in his or her professional field.
- employs various ways to further develop his or her professional skills (literature, workshops, seminars, etc.).
- is open to other visions and ideas and knows some alternatives.
- describes own quality and restrictions related to actual situations.
- consciously employs own norms and values with regard to interpreting.

Level 3: The students study their work independently and systematically, taking into consideration feedback of colleagues and supervisors. They are aware of recent developments in their field of practice and adapt their actions accordingly.

Indicators: the student . . .
- reflects independently on own behavior and takes feedback from others into consideration.
- works on own development in a planned manner.
- is flexible, adapts to changing circumstances, and makes decisions whether to use these or not.
- is open to other visions and ideas.
- formulates own vision on interpreting and the values and norms to which he or she adheres.
- is aware of the influence of norms and values in relationships between people in an interpreting setting.

Figure 1.3.  Reflection and development competencies overview. (© IGTD, Study Guide Sign Language Interpreter 2017–2018.)

autonomy: from a low degree of autonomy to a high degree of autonomy. In this respect, we follow the Research Skill Development Network as proposed by Willison and O'Regan (2007). This network illustrates how students can participate in each step of the research cycle; for instance, students collecting required information or data using a prescribed methodology (low degree of autonomy) to students collecting information or data by choosing or devising appropriate methodology by themselves (high degree of autonomy) (for an overview of the Research Skill Development Network, see https://www.adelaide.edu.au/).

Operationalization of a Research Curriculum in Nederlandse Gebarentaal (NGT; Sign Language of the Netherlands) Interpreter Education

The competence scheme in Figure 1.3 provides milestones with respect to research abilities but was, of course, in need of further refinement. As such, we developed our research curriculum based on four aspects that are relevant to the professional attitude of the SLI. By taking this approach, we underline the nature of practice-based research (i.e., research is always connected to the student's profession). As the research curriculum is woven into different courses throughout the 4-year program, the students' research skills are immediately applicable to their future domain of practice. After we have described the goals of the professional attitude in the bachelor's SLI students, the research curriculum per year will be discussed.

During the 4-year program, the SLI student needs to develop four professional attitudes, in particular in the research part of the courses. These professional attitudes can be considered as goals per year of the research curriculum (Raamontwerp, 2006):

1. An inquisitive (or critical) basic attitude (year 1)
2. A critical professional attitude (year 2)
3. A critical/reflective professional attitude (year 3)
4. An innovative professional attitude (year 4)

Table 1.1 shows which subjects contain research aspects per year. The study load (number of study hours for the students) per course is indicated in the European Credit Transfer System (ECTS) first, followed by the credits for the research part. The ECTS system resulted from the Bologna Declaration to support easy comparison of study load per course across Europe. In our institute, one ECTS stands for 28 hours of study (including class, homework, and practice) for the student. Four-year bachelor's programs in the professional universities consist of 240 ECTS in total.

Table 1.1. Embedded Research Curriculum in SLI NGT Program (ECTS Course/ECTS Research).

Year  Designated research course in SLI NGT program
1     Social Awareness (5/1)
2     Deaf Studies 2 (10/2)
3     Interpreting Skills 6 (5/2)
4     Thesis (15/10)

The research curriculum starts with the course titled "Social Awareness" (Nijen Twilhaar, 2011a). In this course, interpreting students of NGT are challenged to become explicitly aware of the society in which they participate: How does this society function, what roles are played by culture and history, and in what way does this awareness affect their own behavior? The students are encouraged to develop a critical attitude by discussing social questions relevant to their practice (Carr & Kemmis, 2003), for instance, how policies of government or institutions are formulated with regard to people with a disability (United Nations, 2006). Critical thinking is connected to basic principles of research because critical thinking is inextricably linked to the accurate analysis of claims by others and the consultation of good sources for sound reasoning. Through the digital part of the course, titled "Information Skills," students learn how to independently consult sources, how to use references in papers, and how to write up a reference list according to the American Psychological Association guidelines. In short, the first-year students develop an inquisitive basic (or critical) attitude, which is manifested in knowledge about the importance of research and demonstrated in a first small-scale study.

The research curriculum continues in the second year in the course titled "Deaf Studies 2" (Nijen Twilhaar, 2011b), which addresses the emergence and form of our multicultural society (i.e., Deaf culture and, specifically, the subgroup of deaf-blind people). Students learn to identify argumentation patterns in the prescribed literature, which covers a diversity of themes.
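As a back-of-the-envelope illustration of the study-load figures given earlier in this section (240 ECTS for the full program, 28 hours per ECTS at this institute), the total workload implied for a student works out as follows; this is a simple consistency check, not a figure stated in the chapter itself:

```latex
% Total study load implied by the ECTS figures above:
% 240 ECTS at 28 hours each, spread over a 4-year program.
\[
240\ \text{ECTS} \times 28\ \tfrac{\text{hours}}{\text{ECTS}} = 6720\ \text{hours},
\qquad
\frac{6720\ \text{hours}}{4\ \text{years}} = 1680\ \tfrac{\text{hours}}{\text{year}}.
\]
```

That is, roughly 1,680 hours of class, homework, and practice per academic year.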
Based on these patterns, they can formulate their own arguments to support their point of view. The development of a critical professional attitude is stimulated in this way. In addition, students learn to critically reflect on topics in their professional practice by performing a small study. Research skills are extended in year 2 by learning how to make and execute a research plan. To this end, students are taught about the different types of research (e.g., qualitative versus quantitative methods). They also acquire skills in data interpretation in a brief course on basic statistics.

In the third year, the students are assigned to perform a literature review (Hammer & Nijen Twilhaar, 2012). This is part of the goal that students develop a reflective professional attitude. Students are supported partly by being provided with the professional literature and partly by searching the literature themselves in the course titled "Interpreting Skills 6." Classes focused on doing literature searches are interchanged with classes on information science, where theory is directly linked to practice via (social) media techniques. Concretely, this means that the draft of their literature review is discussed: Why this topic, and how will the student funnel the information? The draft is revised in the university's library, where students learn about quality sources and consulting search systems. The gathered literature is discussed in class again, and the initial steps toward a work (writing) plan are set. During the final class, in the library, the students can ask the librarians about literature searches and the use of sources. Specific attention is paid to academic writing and referencing. The classes on research are closely connected to the classes on professional content in the "Interpreting Skills 6" course. Based on their literature review, the students have to give a short presentation about a relevant theme in "Deaf Studies 2" (e.g., the role of SL in the era of cochlear implants).
The research curriculum is completed in the fourth year by an independent study resulting in an end product, usually a thesis (Hammer & Nijen Twilhaar, 2013). In this phase, the students can apply their acquired knowledge and research skills to do innovative research on a relevant topic in the field of interpreting. Innovative should be interpreted here in the sense of "new to the student or the client" (Andriessen & Greve, 2014, p. 4), as discussed in the beginning of this chapter. Students receive substantive support during the writing process of the end product in a part of the course where different aspects of interpreting are discussed. Students can choose one of these aspects as a topic for research but are also free to show their own interests, creativity, and insights in the form of an individual research question. A research plan is developed under the supervision of a teacher, and after approval of the design by one of the professors, the student can start the study. As much as possible, students work in thematic groups under the supervision of one teacher. The students are offered several classes on setting up research before they start on the research plan, with a focus on finding sufficient and appropriate topics.

Good organization of the supervision of the end product is, of course, a prerequisite. This aspect is twofold: not only do the students need appropriate supervision, but the supervising teachers also need further training to optimally support and supervise the bachelor students in planning and executing their research. Some of the research for the end products is tied in with the research program of the professorship in Deaf studies, in which the associate professors also participate.2

Current State and Future Avenues

The ultimate aims of embedding research into the professional curriculum are to empower the newly graduated SLIs to continuously reflect on and improve their own functioning in a systematic way and to provide them with the tools to improve their professional field. The first students to graduate in the new curriculum did so in 2014. The quality of our students' research (i.e., theses) was evaluated by the examination board in 2017 (Examination Board IGTD, 2017). In the Netherlands, the examination board is an independent committee that consists of teachers of the institute and an external member (usually an expert on assessments). One of its tasks is to assure the quality of assessments, in particular those assessments that are part of the graduation phase. The evaluation of the examination board of the Institute of Sign Language & Deaf Studies (performed by the external member to avoid conflicts of interest) was positive with respect to the level of research that students showed in their theses. The external board member judged that the research skills of our undergraduate students were in accordance with international levels for bachelor's education. In addition, research reports were properly associated with formulated learning outcomes and levels of proficiency (see earlier in chapter). This evaluation reinforces our curriculum setup with respect to research skills.

Research by associate professors indicated that, although our students show appropriate research skills (i.e., they can execute the empirical cycle), these skills are insufficient to deal with the complexities of their professional practice (Hammer, 2018). The students generally lack the ability to implement their results in professional practice. In other words, they know how to conduct research (e.g., setting up methodology, collecting and analyzing data) but experience difficulties in putting their newly generated knowledge into practice. This is shown by the poor recommendations with which they conclude their theses; the recommendations generally do not target the problem for which the research was initially performed. Interestingly enough, using research to innovate professional practice is exactly the aim of research for our undergraduate students (see earlier in chapter). Essentially, we have not yet achieved the innovative professional. This is widely recognized across bachelor's education in the Netherlands and in need of further investigation (Dutch Ministry of Education, Culture, and Science, 2015).

We argue that the academic culture is evolving too slowly at our university. We performed a SWOT (strengths, weaknesses, opportunities, and threats) analysis within our team of teachers who supervise student research (Hammer & Nijen Twilhaar, 2017). This analysis indicated that these teachers feel incompetent to perform research themselves or to supervise students in their research projects, despite the fact that the whole team obtained master's degrees. It is acknowledged that if we want students to have an innovative and critical attitude, teachers should also have an innovative and critical attitude. However, such an attitude is insufficiently manifested (Beishuizen, Spelten, & van der Rijst, 2012). If we want to improve research quality in our bachelor's education, we need to train our educators. We received a grant to develop a training program for educators to implement research skills in their courses (Hammer, 2018).
Professionalization of teachers' research skills positively contributes to academic culture (Furtak, Seidel, Iverson, & Biggs, 2012). For primary education, it has been shown that teachers who approach their teaching methodologically (i.e., showing their research skills) are also the ones who excite curiosity in students (Uiterwijk-Luijk, 2017). Improving academic culture is likely to enhance innovation skills in students as teachers create more opportunities for students to show them how theory, obtained through research, can be used in practice (Willison & O'Regan, 2007). The professionalization of the teachers in our program is steadily progressing: two teachers are currently pursuing PhD studies, one on NGT as L2 pedagogy and the other on labor participation by bilingual Deaf people. In addition, more teachers are willing to set up small studies in order to improve our teaching and curriculum. Furthermore, teachers who supervise student research receive on-the-job training from the associate professors of our institute (the first and second authors of this chapter). Griffioen (2013) indicated that on-the-job professionalization was most effective for teachers to become more skilled in research. As such, professionalization is targeted at individual teachers. The board of the institute invests in research by giving teachers time to conduct research or to be trained in supervising research. We hypothesize that this investment will lead in due course to a more academic culture and, as a consequence, to SLIs who are able to innovate their practice by doing research. Our future studies are intended to monitor the process of academic acculturation at our institute and its effect on the quality of students' research.

Notes

1. Formerly also known as polytechs, professional or vocational academies, or Fachhochschule.
2. We are currently investigating how research aspects can be embedded in all courses; that is, not only in the designated research courses (see Table 1.1).

References

Anderson, L. W., & Krathwohl, D. R. (Eds.). (2001). A taxonomy for learning, teaching, and assessing: A revision of Bloom's taxonomy of educational objectives. Boston, MA: Allyn and Bacon.
Andriessen, D., & Greve, D. (2014). Incorporating research in professional bachelor programmes. Retrieved from https://www2.mmu.ac.uk/media/mmuacuk/content/documents/carpe/2013-conference/papers/quality-assurance-in-higher-education/Daan-Andriessen.pdf
Beishuizen, Y., Spelten, E., & van der Rijst, R. (2012). Professionaliteit van docenten: Academische houding in het hbo [Teachers' professionalism: Academic attitude in professional universities]. Tijdschrift voor Hoger Onderwijs, 30, 245–258.
Bologna Declaration. (1999). The Bologna process: Setting up the European higher education area. Retrieved from http://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=URISERV:c11088&from=EN


Brancheprotocol Kwaliteitszorg Onderzoek (BKO). (2009). [Branch protocol quality assurance of research]. Retrieved from http://www.vereniginghogescholen.nl/system/knowledge_base/attachments/files/000/000/201/original/Brancheprotocol_Kwaliteitszorg_Onderzoek_2009_%E2%80%93_2015.pdf?1439892902
Butter, R. (2013). Role of research in higher prof. educ. on BA, MA & PhD levels is demarcated by the scope of the required relevance & rigour claims. #mmucarpe [Twitter post]. Retrieved from https://twitter.com/renepbutter/status/398695835288223744
Carr, W., & Kemmis, S. (2003). Becoming critical: Education knowledge and action research. London, United Kingdom: Routledge.
Davies, P. (1999). What is evidence-based education? British Journal of Educational Studies, 47, 108–121.
Dreyfus, S. E. (2004). The five-stage model of adult skill acquisition. Bulletin of Science Technology & Society, 24(3), 177–181. doi:10.1177/0270467604264992
Dutch Ministry of Education, Culture, and Science. (2015). Strategic agenda higher education and research 2015–2025. De waarde(n) van weten [The value(s) of knowledge]. Retrieved from https://www.government.nl/documents/reports/2015/07/01/the-value-of-knowledge
Eraut, M. (1994). Developing professional knowledge and competence. London, United Kingdom: Routledge Falmer.
European Forum of Sign Language Interpreters. (2013a). Assessment guidelines for sign language interpreting training programmes. Brussels, Belgium: Author.
European Forum of Sign Language Interpreters. (2013b). Learning outcomes for graduates of a three year interpreting training programme. Brussels, Belgium: Author.
Examination Board IGTD. (2017). Eindproducten bacheloropleidingen Leraar/Tolk NGT [Theses bachelor programs teacher/interpreter sign language of the Netherlands]. Internal report IGTD. Utrecht, the Netherlands: HU University of Applied Sciences Utrecht.
Furtak, E. M., Seidel, T., Iverson, H., & Biggs, D. C. (2012). Experimental and quasi-experimental studies of inquiry-based science teaching: A meta-analysis. Review of Educational Research, 82, 300–329.
Griffioen, D. M. E. (2013). Research in higher professional education: A staff perspective (Doctoral dissertation). Retrieved from https://dare.uva.nl/search?identifier=d713745c-4429-4fc3-8453-41a5ce80f319
Hale, S., & Napier, J. (2013). Research methods in interpreting: A practical resource. London, United Kingdom: Bloomsbury.
Hammer, A. (2018). Leren innoveren: Een digitale leeromgeving voor onderzoek voor studenten en docenten in het hbo [Learning to innovate: A digital learning environment for research for students and educators in professional higher education]. Comenius Senior Fellow grant no. 405.18865.453. The Hague, the Netherlands: Ministry of Education, Culture, and Science.
Hammer, A., & Nijen Twilhaar, J. (2012). De onderzoekscomponent van Taalkunde 3 en Tolkvaardigheden 6: Literatuuronderzoek [The research component of Linguistics 3 and Interpreting Skills 6: Literature review]. Internal report IGTD. Utrecht, the Netherlands: HU University of Applied Sciences Utrecht.
Hammer, A., & Nijen Twilhaar, J. (2013). De onderzoekscomponent van het vierde jaar: De Afronding [The research component of the 4th year: Graduation phase]. Internal report IGTD. Utrecht, the Netherlands: HU University of Applied Sciences Utrecht.
Hammer, A., & Nijen Twilhaar, J. (2017). Herijken van de Afronding [Reassessment of the graduation phase]. Internal report IGTD. Utrecht, the Netherlands: HU University of Applied Sciences Utrecht.
Hammersley, M. (2007). Educational research and evidence-based practice. London, United Kingdom: Sage.
IETB. (2013). Competencies interpreter NGT. Internal report IGTD. Utrecht, the Netherlands: HU University of Applied Sciences Utrecht.
Janzen, T. (2005). Topics in signed language interpreting. Amsterdam, the Netherlands: John Benjamins.
Lizzio, A., & Wilson, K. (2004). Action learning in higher education: An investigation of its potential to develop professional capability. Studies in Higher Education, 29, 469–488.
McKee, D., Rosen, R. S., & McKee, R. (2014). Teaching and learning signed languages. London, United Kingdom: Palgrave Macmillan UK.
McMillan, J., & Schumacher, S. (2010). Research in education: Evidence-based inquiry (7th ed., MyEducationLab Series). Hoboken, NJ: Pearson.
Monikowski, C. T. (2013). The academic's dilemma: A balanced and integrated career. In E. A. Winston & C. Monikowski (Eds.), Evolving paradigms in interpreter education (pp. 1–27). Washington, DC: Gallaudet University Press.
Monikowski, C. T. (2017). Conversations with interpreter educators. Washington, DC: Gallaudet University Press.
Napier, J., & Leeson, L. (2016). Sign language in action. New York, NY: Palgrave Macmillan.
Napier, J., McKee, R., & Goswell, D. (2010). Sign language interpreting. Sydney, Australia: The Federation Press.
Nijen Twilhaar, J. (2010). Een onderzoekslijn in de bachelor van het Instituut voor Gebaren, Taal and Dovenstudies [A research curriculum in the bachelor of the Institute for Sign, Language & Deaf Studies]. Internal report IGTD. Utrecht, the Netherlands: HU University of Applied Sciences Utrecht.


Nijen Twilhaar, J. (2011a). De onderzoekscomponent van sociaal bewustzijn [The research component of social awareness]. Internal report IGTD. Utrecht, the Netherlands: HU University of Applied Sciences Utrecht.
Nijen Twilhaar, J. (2011b). De onderzoekscomponent van Dovenstudies 2 [The research component of Deaf Studies 2]. Internal report IGTD. Utrecht, the Netherlands: HU University of Applied Sciences Utrecht.
Raamontwerp, F. G. (2006). General design Faculty of Health Care, version 2.4. Utrecht, the Netherlands: HU University of Applied Sciences Utrecht.
Roy, C. B., & Napier, J. (Eds.). (2015). The sign language interpreting studies reader. Amsterdam, the Netherlands: John Benjamins.
Sackett, D. L., Rosenberg, W. M. C., Gray, J. A. M., Haynes, R. B., & Richardson, W. S. (1996). Evidence based medicine: What it is and what it isn't. BMJ, 312, 71–72.
Shared "Dublin" descriptors for short cycle, first cycle, second cycle and third cycle awards. (2004, October 18). Retrieved from https://www.uni-due.de/imperia/md/content/bologna/dublin_descriptors.pdf
Study Guide Sign Language Interpreter. (2017–2018). Internal report IGTD. Utrecht, the Netherlands: HU University of Applied Sciences Utrecht.
Uiterwijk-Luijk, E. (2017). Inquiry-based leading and learning (Doctoral dissertation, University of Amsterdam). Ridderkerk, the Netherlands: Ridderprint.
United Nations. (2006). UN Convention on the Rights of Persons with Disabilities. Retrieved from http://www.un.org/disabilities/documents/convention/convoptprot-e.pdf
van den Bogaerde, B. (2013). Changing our attitude and position: Commentary to C. Monikowski's "The academic's dilemma." In E. A. Winston & C. Monikowski (Eds.), Evolving paradigms in interpreter education (pp. 28–32). Washington, DC: Gallaudet University Press.
Wadensjö, C., Dimitrova, B. E., & Nilsson, A.-L. (Eds.). (2007). The critical link 4: Professionalisation of interpreting in the community. Amsterdam, the Netherlands: John Benjamins.
Wet op het hoger onderwijs en wetenschappelijk onderzoek, artikel 1.3 [Law on higher education and scientific research]. (1992). Retrieved from http://maxius.nl/wet-op-het-hoger-onderwijs-en-wetenschappelijk-onderzoek/artikel1.3
Willison, J., & O'Regan, K. (2007). Commonly known, commonly not known, totally unknown: A framework for students becoming researchers. Higher Education Research & Development, 2, 393–409.
Winston, E. A. (2013). Infusing evidence into interpreter education: An idea whose time has come. In E. A. Winston & C. Monikowski (Eds.), Evolving paradigms in interpreter education (pp. 164–187). Washington, DC: Gallaudet University Press.


Chapter 2

Interpreting in Ghana

Elisa Maroney, Daniel Fobi, Brenda Puhlman, and Carolina Mmbro Buadee

Ghana is a small country in West Africa, covering approximately 92,000 square miles. According to the Data Productions Unit, Ghana Statistical Service (2016), the population is approximately 28 million. Approximately 211,700 Ghanaian people have speech and hearing "impairments" (Ghana Census, 2010, as cited by Hadjah, 2016, p. 3). Numerous indigenous languages are spoken in Ghana, and at least three signed languages are used: Ghanaian Sign Language (GSL), Adamorobe Sign Language, and Nanabin Sign Language (Hadjah, 2016).

Historically, GSL is related to American Sign Language (ASL). According to Hairston and Smith (1983), in 1957, Andrew Jackson Foster, the first black student to graduate from Gallaudet College, traveled to West Africa on a mission trip. He established the first school for the deaf in West Africa in Osu, Ghana, where he lived for about 1 year, teaching at the school he established. He presumably used ASL, which then permeated Ghanaian deaf education (see Ilabor, 2010). As GSL has developed and spread, it has become the most commonly used form of signed language in Ghana (see Hadjah, 2016, for more about the current status of GSL).

According to Oppong and Fobi (2019), Ghana has approximately 15 schools for the deaf, and inclusive education was implemented officially in 2015. There are seven public tertiary institutions. The University of Education, Winneba (UEW), a university with about 45,600 students, has the largest population of deaf/Deaf students at about 50. In 2015, the university employed three full-time interpreters and five part-time interpreters. The university has anywhere from 2 to 11 interns during any given semester. They also have one to two recent graduates fulfilling their National Service duty as a signed language interpreter. National Service is a 1-year mandatory service for recent graduates of tertiary institutions (i.e., universities, polytechnics, colleges of education, and nursing) in Ghana. This service provides graduates with practical exposure on the job, in both public and private sectors, as part of their civic responsibilities. They are usually paid a nontaxable allowance at the end of every month; the amount is based on what is approved by the Ministry of Finance. Currently, they receive 559.04 Ghanaian cedis (or $125.91 US dollars) per month.

Little to no organized interpreting services are available in inclusive schools (Fobi & Oppong, 2018; Oppong & Fobi, 2019), and no tertiary-level interpreter education is available (Oppong & Fobi, 2019). In the present study, current practices of interpreters working in Ghana and the pathways to becoming employed as a professional interpreter were investigated using three methods: survey, interview, and observation. Data were collected from interpreters, signers who work as interpreters, National Service interpreters, and volunteer interpreters to get an overall sense of the state of interpreting and interpreter education in Ghana. Interpreter observations and interviews took place using the Demand-Control Schema Observation-Supervision framework (Dean & Pollard, 2013). We are reporting on data collected from 13 Ghanaian interpreters. In addition, three interpreters were interviewed in English using a semistructured interview format. The interviews were recorded and transcribed.

Limitations

The sample size is small, with only 13 respondents. Due to unreliable internet service, survey data were collected face to face, limiting collection to those interpreters who were able to meet us in person in the Central Region. The Ghanaian interpreters primarily work in postsecondary settings. All interpreters available at the time of the study happened to identify as Christians, which is not surprising, because about 71.2% of Ghana's population is Christian (Ghana Statistical Service, 2012). Another possible reason for the preponderance of Christian interpreters in the study is that, in Ghana, most of the interpreters seem to be trained by either the Church of Christ or Jehovah's Witnesses.


Literature Review

In Ghana, Oppong, Fobi, and Fobi (2016) explored deaf students' perceptions of the quality of signed language interpreting services rendered in a public tertiary institution. The study focused on students who are deaf and who use signed language interpreting services. A descriptive survey design was adopted to elicit respondents' views about the quality of the signed language interpreting services they received. A 15-item questionnaire using a four-point Likert scale was the instrument used to gather data for the study. Out of a target population of 34 respondents, 23 participated in the study. The study revealed that the quality of signed language interpreting services rendered to students who are deaf at the institution was not satisfactory, demonstrating the need for action to improve interpreting services. In addition, dissatisfaction among consumers of interpreting services arose because the interpreters had not undergone the requisite training. The study recommended that the institution take steps to ensure that interpreting as a general program of study is introduced and implemented in the curriculum to train qualified interpreters for the deaf/Deaf population. In addition, the institution should employ and retain experienced signed language interpreters and provide them with the support needed to carry out their duties.

Adu (2016) conducted a study on the social and academic experiences of students who are deaf at the UEW. Fourteen students who were deaf were purposively selected from a population of 36 students. Data were gathered through a semistructured interview guide. Findings of the study indicated that although some deaf students saw social gatherings as avenues for mingling with their hearing colleagues and as a strategy to learn their ways of doing things, other students did not see social gatherings in this way.
In addition, some deaf students indicated that they felt isolated in the midst of hearing students because of the communication divide. The deaf/Deaf participants revealed that the benefits they derived from signed language interpreters and notetakers were enormous, even though the interpreters were sometimes absent from their lectures. When the interpreters were not present to provide services, the deaf students engaged the services of students who could sign to interpret during the lecture. The study revealed that deaf/Deaf participants were often given prior notice before their assessment dates and were assessed on subject content covered in interpreted lectures. Findings of the study also revealed that, at the tertiary level, students’ preferences varied in regard to simultaneous and consecutive interpreting. Eight of the participants indicated that they preferred consecutive interpreting, whereas five indicated that they preferred simultaneous interpreting. Only one participant revealed that his or her choice of consecutive or simultaneous interpreting depended on the subject matter under discussion and the teaching method of the lecturer. Results of the study revealed that, when postlingual Deaf students attended lectures, they were compelled by lecturers to use their voices instead of signed language. In addition, participants indicated that some lecturers and hearing students sometimes mocked them because of their inability to hear and speak. The study recommended that, as the number of students who are deaf increases, the university should employ the same number of persons or, at least, enough persons who can assist in signed language interpreting and notetaking. In addition, the university should sensitize and educate lecturers, staff, and students on a regular basis as to how to include Deaf students in academic settings.

Mantey (2011) explored the experiences of pupils with postlingual hearing loss at the University Practice South Inclusive School, Winneba, Ghana. Mantey employed a qualitative methodology in which a case study design was used, with interviews and observation, to collect data about students in upper primary (or elementary) classes. A sample of five pupils with postlingual deafness was involved. Findings from the study revealed that the pupils with postlingual hearing loss did not have access to facilities that enhanced their success at the inclusive school.
There were no positive interactions between the pupils with postlingual hearing loss and their hearing peers as a result of the communication gap. The study further revealed, however, that teachers interacted with, and demonstrated positive attitudes toward, pupils with postlingual hearing loss. Mantey recommended that teachers create opportunities in the classroom that encourage frequent peer interaction and general social skills development. Teachers and pupils should learn to communicate effectively with pupils who use signed language.

Methodology

The goal of the research presented in this chapter is to describe current practices of interpreters working in Ghana and the state of interpreter education, or pathways to becoming employed as a professional interpreter. This study was conducted using three methods: a survey, an interview, and observation. Data collection took place over the course of 8 months in 2016. The research project is ongoing. In this chapter, results from 13 surveys and personal accounts (interviews) are reported.

Survey

Trine (2013) developed survey and interview questions to investigate interpreting practices and interpreter education in Jordan. The questions were modified for use in a developing country and with a broader population of participants. Data were collected from 13 Ghanaian interpreters to get an overall sense of the state of interpreting and interpreter education in Ghana. This research was primarily qualitative, although the survey did yield some quantitative data.

Interpreter Interview

In the second phase of the current study, four interpreters were interviewed using a semistructured interview format. Portions of two of the interviews are provided later in this chapter in the form of narrative by authors Buadee and Fobi.

Data Analysis

Data from the survey were analyzed. The demographics of the participants are presented, followed by the responses to open-ended questions. We used grounded theory (Strauss & Corbin, 1998) to analyze the responses to the open-ended questions collected in the surveys. Using grounded theory as a foundation allowed theory to emerge from the data. We looked at the responses from practitioners, identified patterns, and coded themes that primarily aligned with Demand-Control Schema (Dean & Pollard, 2001, 2013).

Findings

In this section, we describe the demographic data, including who the participants were and the settings where they worked. In addition, we present the themes that emerged from the surveys. Rather than reporting on the interviews, one of the coauthors, Buadee, shares her firsthand experience as a volunteer interpreter, and coauthor Fobi describes his perspective as a signed language interpreter and the coordinator of interpreting services at the UEW.

Demographics

The interpreters ranged in age from 23 to 35 years old, with an average age of 29 years. Nine interpreters were male, and four were female. All reported that they were Christian. All reported that they were multilingual, with signed language as one of their working languages. For example, it was not uncommon for a participant to use a number of spoken languages, including English, Ga, Twi, and Fanti, as well as GSL. They tended to work between GSL and a spoken language (English and/or an indigenous spoken language). In regard to education, one interpreter had a master’s degree, two had bachelor’s degrees, three had diplomas (indicating completion of higher education in technical, vocational, and liberal arts disciplines that do not result in bachelor’s degrees), and the remaining seven were students in tertiary institutions. All but three of the Ghanaian interpreters were affiliated with the university. Only one was employed as an interpreter full time. Two were beginning their 1-year compulsory National Service as interpreters. Seven were beginning their internship as interpreters. Three were community interpreters who were unaffiliated with the university.

Settings

The settings where interpreters work are the same as in any other part of the world. Those settings include church/religious, education (lecture halls), community, social, conference, court, political campaign, and police station settings. The respondents indicated that, in their current practice as signed language interpreters, their employment may be full time or part time.
The only interpreters who were paid as professionals for their work were at the tertiary level. Others performed interpreting services as National Service interpreters, student interns, and volunteers, none of whom were professionally trained. Although the sample size is small and the data collection was limited to the Central Region, these preliminary results indicate that interpreters working in Ghana are confronted by numerous demands to which they apply a number of control options. The responses to the open-ended questions were coded for themes as demands or control options, and then specific demands and control options were identified. The Demand-Control Schema analysis is informed by the work of Dean and Pollard (2001, 2013). Table 2.1 shows the themes we identified in the coding process. The table provides examples of demands faced by Ghanaian interpreters and is accompanied by a narrative description of the findings.

The demands faced by the Ghanaian interpreters included large classes with large numbers of deaf/Deaf students. For example, a class might have over 100 students, of whom 17 are deaf/Deaf or hard of hearing and rely upon the services of the signed language interpreter. The classes were also lengthy. One participant responded by saying:

Especially when you are tired, and you still have to sit through a 3-hour lecture you feel when you are not giving your best because you are tired, your hands are aching and it becomes difficult to lift your arms, that is the challenging part that I face.

There was a general lack of understanding of the interpreter role by hearing lecturers and by the Deaf and hearing students. Lecturers often made changes to the regularly scheduled courses without consulting with interpreters, so students would be left without services. Interpreters reported that aside from the occasional visit from interpreter educators from outside of Ghana, little professional development was available. Interpreters reported that full-time positions were scarce and payment for work was rare and minimal.

Table 2.1. Themes: Demands.

Demands Faced by Interpreters
Large classes
Large numbers of Deaf students in classes
Long classes (3 hours) and long days (9 hours)
Lack of understanding of interpreter role
Unexpected changes in schedule
Lack of training and professional development
Lack of full-time interpreters
Lack of remuneration for services

Table 2.2 provides examples of control options employed by Ghanaian interpreters, including teaching and tutoring in and outside of class time and responding to student questions directly, rather than interpreting the question to the lecturer. Many of the interpreters had undergraduate degrees and experience interpreting the same class with the same lecturer. They were so familiar with the content area that they would stop interpreting and give direct instruction to the Deaf and hard of hearing students in the class. One interpreter responded by writing:

I stop being an interpreter and become a teacher. The advantage I have is, I have learnt this, I have interpreted the course for about four years so and I have worked with two or three lecturers who taught the same course.

Interpreters also obtained handouts from the lecturers and provided them to the students, and they located videos on the Internet or created their own videos to demonstrate what the lecturer was teaching in a class. They asked hearing students to take notes and to explain concepts to students so that the interpreter could continue interpreting. They asked other Deaf students to assist in clarifying information. When lecturers changed schedules with little or no notice, the interpreters prioritized assignments, making decisions about which assignment was more important to interpret. In the next section, Buadee discusses in more depth the demands the interpreters faced and the control options they used.

Table 2.2. Themes: Control Options.

Controls Used by Interpreters
Teach Deaf students after class
Hearing students help to explain concepts to Deaf students
Obtain handouts from lecturers
Ask hearing students to take notes
Prioritize assignments
Ask Deaf students to clarify
Tutor outside of class time
Teach outside of class time
Create videos of practical applications
Find YouTube videos with demonstrations

A Day in the Life of a GSL Interpreter: Carolina Buadee

As an interpreter, Buadee has come to realize that the people for whom she has interpreted appreciate her interpreting work. In addition, those who approved her volunteer services and scheduled her to interpret appreciate her commitment. To volunteer as an interpreter, Buadee had to have completed her first degree and to have a desire to work for the department as an interpreter. She was not on any payroll but was, nonetheless, working for the department. She loves interpreting work and wants to interpret at the university as a full-time, paid, professional interpreter. The university was not employing people at that time. She decided that whether or not the university was employing interpreters, she would still interpret for the university in the hope that one day it would employ her as a full-time interpreter.

One of the challenges that she faced was that the hours she interpreted were too many for her to handle in 1 day. As she experienced firsthand, the number of interpreters at the university was inadequate; therefore, the interpreters tended to work more hours than they should have. There are not many interpreters in Ghana, so the interpreters are often required to work alone for long periods of time. Any special demands or individual concerns that might need to be addressed with the Deaf students must be managed by the interpreters on their own because there are no assistants, notetakers, or teams of interpreters. The interpreter calls on other students to assist by writing some words and notes. The interpreters also interpret while, at the same time, attempting to take notes. They go the extra mile by assisting the students outside of class, doing things such as explaining the notes that were taken in class. In addition, at times, there are words, especially content-specific jargon in areas such as information and communication technology or art education, that the Deaf students have not seen before.
Deaf students need to understand the jargon, so she might tell a hearing student sitting near the Deaf student to write the word for her. When the lecture is finished, she can take the notes and give them to the Deaf students. These may be words that the lecturer has used for which there is no sign or the sign is unknown. Sometimes while in the class, Buadee and the students create a sign. Buadee may also teach content-related vocabulary when needed. While interpreting in a class, if the students do not understand a word or concept, the interpreter will take the time needed to explain the concept to the student. Other students who come to the university are hard of hearing but have not yet learned any signed language, so the interpreters teach them GSL in the evenings.

The number of students Buadee interprets for varies. For example, one class may have 17 students enrolled who are deaf, whereas other classes may have as few as three students who are deaf. In total, some classes may have as many as 100 students, with a handful of those students being deaf. Buadee has also interpreted in classes as large as 200 students, where some people must stand outside of the classroom, with about 17 Deaf students in the front of the lecture hall. If the class is that large, she addresses the individual needs of Deaf students by using some of the other students, both hearing and deaf, as assistants. For example, if she is interpreting and one of the Deaf students indicates to her that he or she does not understand a particular word, Buadee will signal a nearby student who does understand to explain that word to the other student. Buadee is unable to take the time to explain the word herself because she feels the need to move on while the lecturer continues presenting. Buadee states that some of the lecturers are willing to meet and provide their lecture notes so that the interpreter can give them to the Deaf students, because notetakers are not provided. After the lecture is finished, the Deaf students for whom she interprets ask for the notes of their colleagues who are hearing. Sometimes, they find it difficult to read the handwriting, so taking the notes directly from the lecturer is more effective. Some lecturers are willing to share their notes, but others will say their notes are not complete and are unwilling to share them with the Deaf students. Some of the lecturers are quite supportive of the interpreting services in the classroom.
Buadee has had instances where lecturers have asked her to teach them signed language after class. Buadee loves the work of interpreting in Ghana and thinks more people need to become interpreters so that they may work together in interpreting teams. There are benefits to having two interpreters working together; for example, when interpreting for a class, if one interpreter is feeling fatigued, he or she may take a break and then come back, while the other interpreter continues to interpret without an interruption in meaning transfer.


The Story from the Coordinator’s Desk: Daniel Fobi

Fobi has served as both a signed language interpreter and the signed language interpreter coordinator at the UEW. The UEW has, over the past 11 years, been enrolling Deaf students in different academic programs, and the university provides signed language interpreting services to them. Over the course of 2 academic years, there was a dramatic increase in Deaf students from 28 to 42. The Deaf students were taking their major courses in three departments (information and communication technology, graphic design, and special education) and their second subject area in six departments (mathematics; social studies; health, physical education, recreation, and sports; art education; home economics; and graphic design). The UEW had 10 interpreters, which included three full-time interpreters, two National Service persons, and five final-year internship students.

Working as a signed language interpreter at the UEW provides quite a challenging experience. Because the workload of interpreters had increased, over the course of a single day, an interpreter could interpret continuously for over nine credits (or 9 clock hours). To contain the situation, the team of interpreters gathered the timetables, or class schedules, of all the Deaf students (both major and minor). The timetables were arranged according to the number of working days (Monday to Friday) in order to ascertain the total number of courses in a day. The interpreters then divided the courses to be interpreted on a daily basis. At this point, an interpreter could have up to a maximum of 9 hours (credits) and a minimum of 6 hours in a day to interpret. Sometimes the interpreting was done continuously without a break. Over the course of a week, an interpreter could interpret for 33 to 39 hours without a team.

The situation worsened in the second semester, when the final-year, 400-level students completed their internships and returned to the university to take courses in the classroom. Although the hearing signed language students voluntarily supported the team of interpreters when they were free, the total number of hours for the full-time and National Service interpreters increased in the second semester. During this time, there was a government embargo on employment, so the university was not allowed to recruit new staff (full-time) interpreters to support the prevailing need. In the 2015–2016 academic year, the number of Deaf students increased again to 51; however, the number of interpreters remained at 10 (three full-time interpreters, two volunteers, two National Service persons, and three final-year internship students) in the first semester and increased to 18 in the second semester (three full-time interpreters, five volunteers, two National Service persons, and eight final-year internship students). Fortunately for the university, a visiting professor from Western Oregon University (WOU) came to support the interpreters by organizing free seminars to guide them and to introduce them to team interpreting. In this academic year, the efficiency of the interpreters improved, even though their numbers were still small. In addition, the interpreters used the old strategy from the previous academic year of sharing their timetables, and the average number of hours interpreted in a week was 30. Simultaneously, the three full-time interpreters were providing occasional guidance to orient the new interpreters who came on board. Every week, the UEW interpreters met the visiting professor to talk about their work in the previous week.

In the 2016–2017 academic year, the number of Deaf students decreased to 49; however, the total number of courses remained almost the same as the previous semester. Fortunately, in the first semester, there was an increase in the total number of interpreters because the visiting professor and her team had championed the cause in advocating for more interpreters. The university employed five part-time interpreters, bringing the total number of interpreters to 19 (three full-time interpreters, five part-time interpreters, one volunteer, one National Service person, and nine final-year internship students). In the same academic year, the visiting professor and her team organized a 2-week intensive workshop for the interpreting team, including teaching meaning transfer, team interpreting, Demand-Control Schema, and self-care.
This training gave a new look to the interpreting work at the UEW in the first semester of the 2016–2017 academic year, as most of the skills learned at the workshop were put into practice. Interpreting became less stressful, and every interpreting session was done as a team (at least two interpreters were involved, with at least one having over 3 years of interpreting experience in the university). Team members switched every 20 minutes and also fed their partners signs whenever the active interpreter had difficulty in signing concepts. Interpreters met from time to time to discuss their work, and whenever there was a challenge, the interpreters, together with their interpreter coordinator and the head of the special education department, met and found an amicable solution. Fobi’s experience at the UEW over the past 9 years as a signed language interpreter reveals that the first semester of the 2016–2017 academic year offered the best interpreting services provided to the Deaf students.

The second semester of the 2016–2017 academic year came with its own challenges because all nine final-year students who had helped to ease the interpreting workload returned to their respective classrooms as students. In addition, the 400-level Deaf students who went on internships during the fall semester returned to campus to take classes. Thus, the number of interpreters decreased while the number of Deaf students increased, a big blow to the team. Although the student interns (now final-year students in the classroom) offered to support the interpreting work, they could not exceed a maximum of 3 hours a week because they were students and needed to concentrate on their studies. In the second semester of the 2016–2017 academic year, two community-based rehabilitation and disability studies (CBRDS) interns joined the team to support the interpreting work because they needed enough time and practice to become proficient in signed language interpreting. We paired the two new interpreters with two experienced interpreters, so that the interns could study under the supervision of the experienced interpreters. Although we were able to contain the prevailing situation by providing interpreters for the Deaf students, the routine of team interpreting from the first semester came to a halt. The workloads for interpreters increased, and work was stressful. The interpreter coordinator and the interpreters met from time to time to talk about their work and discuss how the best practices of interpreting could be offered to their consumers. The coordinator worked hard with his team to recruit more interpreters from current 300-level students for the next semester.
They enlisted approximately twenty 300-level students, who were then trained so that they could support the team in subsequent semesters. We also hoped that the government embargo would be lifted so that more interpreters could be employed to support our work. The visiting professor from WOU returned with her team to Ghana in August 2017 to offer the interpreters more training in order to improve their practices for working with the Deaf students. The UEW has plans to establish undergraduate and graduate programs in interpreting.


Discussion

Most interpreters reported volunteering as a part of their interpreting service. This volunteer work happens in a number of ways. Some interpreters volunteer to “get their foot in the door,” believing that by becoming recognized as professional interpreters they may become paid employees in the future. They recognize the need, and they see other interpreters who are paid for their work. Anticipating the possible opportunity to work as an interpreter in the future, they demonstrate a commitment to the Deaf consumers and the hiring institution by volunteering in a formal way. They are scheduled in the same way the staff interpreters are scheduled but usually with a lower number of hours per week.

Other interpreters find themselves arriving at an event, such as an assembly at a tertiary institution or a church service, where Deaf people are present but interpreters are not provided, and they offer to provide access. On other occasions, interpreters will arrive at their own appointment at the hospital and find a Deaf person attempting to access medical services. The interpreters may offer to provide access if the Deaf person is open to their assistance. On still other occasions, Deaf people may ask interpreter friends to interpret for them.

One of the ways that tertiary institutions meet the growing need for interpreters is by filling positions with student interns. The number of interns varies from year to year and semester to semester. Those students who are on the education of the hearing impaired (EHI) track of the special education degree program do the bulk of the interpreting internships during the fall semester. Those students on the CBRDS track, a 2-year diploma program, do their interpreting internships during the spring semester. There are fewer students on the CBRDS track. The challenge here is that the number of Deaf students does not decrease in the spring.
In fact, the number may be higher because those Deaf students who were participating in their own internships off campus return for classes.

Future Directions

Development has begun on preservice interpreter education programs at both the undergraduate and graduate levels at the UEW. In-service opportunities have been provided by the Supreme Sign Institute, by Jehovah’s Witnesses, and through a partnership between WOU and the University of Education, Winneba. Sustainability is a challenge. Interpreter education programming, whether preservice or in-service, suffers from a lack of interpreter educators in Ghana.

Future Research

Directions for future research include data collection outside of the University of Education, Winneba, and the Central Region of Ghana. We would also like to collect the perspectives of the Ghanaian Deaf community, as well as garner the experiences of the hearing and Deaf consumers who use interpreting services. This study represents a glimpse into signed language interpreting in Ghana. Efforts continue to increase the quantity and quality of signed language interpreters and, ultimately, to improve the interpreting services for Deaf Ghanaians.

References

Adu, J. (2016). Social and academic experiences of students who are deaf at the University of Education, Winneba (Unpublished master’s thesis). University of Education, Winneba, Ghana.

Data Productions Unit, Ghana Statistical Service. (2016). Populations projections summary, 2010–2016. Retrieved from http://www.statsghana.gov.gh/docfiles/2010phc/Projected%20population%20by%20sex%202010%20-%202016.pdf

Dean, R. K., & Pollard, R. Q. (2001). Application of Demand-Control Theory to sign language interpreting: Implications for stress and interpreter training. Journal of Deaf Studies and Deaf Education, 6(1), 1–14.

Dean, R. K., & Pollard, R. Q. (2013). The Demand-Control Schema: Interpreting as a practice profession. North Charleston, SC: CreateSpace Independent Publishing Platform.

Fobi, D., & Oppong, A. M. (2019). Communication approaches for educating deaf and hard of hearing (DHH) children in Ghana: Historical and contemporary issues. Deafness & Education International, 21(4), 195–209. doi:10.1080/14643154.2018.1481594

Ghana Statistical Service. (2012). 2010 population & housing census: Summary report of final results. Accra, Ghana: Ghana Statistical Service.

Hadjah, T. M. (2016). Number marking in Ghanaian Sign Language (Unpublished master’s thesis). Department of Linguistics, University of Ghana, Legon, Ghana.

Hairston, E., & Smith, L. (1983). Black and deaf in America: Are we that different. Dallas, TX: T.J. Publishers, Inc.

Ilabor, E. (2010). Dr. Andrew Jackson Foster: The father of Deaf education in Africa. Ibadan, Nigeria: Optimistic Press.

Mantey, K. A. (2011). Experiences of pupils with post lingual hearing impairment at the Unipra South Inclusive School, Winneba (Unpublished master’s dissertation). University of Education, Winneba, Ghana.

Oppong, A. M., & Fobi, D. (2019). Deaf education in Ghana. In H. Knoors, M. Marschark, & M. Brons (Eds.), Deaf education beyond the Western world: Context, challenges and prospects for Agenda 2030. Oxford, United Kingdom: Oxford University Press.

Oppong, A. M., Fobi, D., & Fobi, J. (2016). Deaf students’ perceptions about quality of sign language interpreting services. International Journal of Educational Leadership, 7(1), 63–72.

Strauss, A. L., & Corbin, J. M. (1998). Basics of qualitative research: Techniques and procedures for developing grounded theory. Thousand Oaks, CA: Sage Publications.

Trine, E. (2013). مترجمة: A case study of an Arabic/Jordanian sign language (liu) interpreter in Jordan (Master’s thesis). Western Oregon University, Monmouth, Oregon. Retrieved from https://digitalcommons.wou.edu/theses/10

Interpreting in Ghana  :  35

Chapter 3

The Role of French Deaf Translators, Case Study: The Paris Attacks, November 13, 2015

Aurélia Nana Gassa Gonga

Deaf interpreters are becoming more known and visible in the signed language interpreters’ community (Forestal, 2005; Mette Sommer, 2016; Stone, Walker, & Parsons, 2012). However, the recognition of their status as full professionals is still in progress in many countries. For example, in the United States, deaf interpreters have been able to gain training as interpreters since the beginning of the professionalization of signed language interpreters (Brück & Schaumberger, 2014). In Finland, deaf interpreters have been able to join the (hearing) sign language interpreter training program since 2001 (Mindess, 2016). However, in Denmark, deaf interpreters still have to fight for the right to be trained and then to be recognized as interpreters (Mindess, 2016). Furthermore, deaf interpreters in some countries can be part of their main national association of sign language interpreters (e.g., United Kingdom, Serbia, Portugal), whereas in other countries, they cannot (German-speaking and French-speaking regions of Switzerland).1 Deaf interpreters in these latter areas may have separate associations or none at all. In France, only deaf translators (not to be confused with deaf interpreters) have been able to join the French Association of Sign Language Interpreters and Translators (AFILS)2 since 2009.

French Context

Throughout history, France has served as a model for the development, integration, and education of the deaf community. Many eminent deaf people, such as Jean Massieu and Ferdinand Berthier, have influenced the deaf community abroad (see Cantin & Cantin, 2017). For example, Laurent Clerc, a French deaf teacher, inspired Thomas Hopkins Gallaudet in America to educate American deaf children and to found the first American deaf school in 1816 (Gicquel, 2011). In 1880, the participants in the Congress of Milan advocated the oral method (as opposed to the sign method), refusing to acknowledge the positive impact and benefits of a signed language (in this chapter, unspecified signed language will be referred to as SL). After this congress, the French government decided to follow its recommendations, and French Sign Language (LSF) was forbidden in all deaf schools for more than a century, from 1880 to 1991 (Encrevé, 2008, 2012). Although LSF was not fully recognized in 1991, that year marked the beginning of the movement toward full recognition, thanks to the Fabius amendment, which states that deaf citizens must have the choice between oral and sign methods.

In the 1970s, after a group of French deaf people visited Gallaudet University, many started to realize that better living conditions for the deaf community were possible.3 It was the beginning of what French sociologists call “the deaf awakening”4 (Kerbouc’h, 2012). The French deaf community reconnected with their history and started to fight for the recognition of their own language by the French government (Encrevé, 2004; L’Huillier, 2014). Meanwhile, hearing people, especially Codas,5 who were used to informally interpreting from and into LSF, realized that they could and should be paid for their practice of interpretation, even though they would have to change their professional conduct and be trained for it (Quipourt & Gache, 2003). For example, this included adopting a new posture whereby the interpreter was no longer personally involved in the communication; previously, an interpreter might interpret while adding his or her own opinion.
This awareness of the need to be professional (moving to a more objective interpretation instead of offering opinions) was concomitant with the French deaf community’s desire to work with trustworthy professionals who would follow an ethical code appropriate to interpreters, based on faithfulness, neutrality, and professional confidentiality. All these acts of political and civic engagement led to the creation of the first academic degree in LSF interpretation in 1993 (Bernard, Encrevé, & Jeggli, 2007; Encrevé, 2014) at Paris 3 University.6 Following the academic framework for spoken language interpreters, sign language interpreter training is at the master’s degree level. The French government finally recognized LSF as an official language of France in February 2005.

The Impact of the Recognition of LSF on the Deaf Community

Passed on February 11, 2005, the Law for Equal Rights and Opportunities, Participation, and Citizenship of Persons with Disabilities7 has changed many aspects of daily life for deaf people. It has influenced their access to education, with the right to choose between the oral method and the sign method. It has also influenced their access to employment (every company with more than 20 employees has an obligation to employ disabled persons as at least 6% of its workforce). Hiring an interpreter for a meeting is now fully or partially funded by the Association pour la Gestion du Fond d’Insertion Professionnelle des Personnes Handicapées (AGEFIPH).8 In daily life, the law has increased accessibility in general, as a result of the development of remote interpretation for public services and private calls and the development of sign language interpretation on TV news.9 In addition, each deaf person receives a benefit from the government10 to cover personal compensation needs in the private sphere. For instance, deaf people can use the benefit to book a sign language interpreter for a personal appointment with a lawyer, a child’s teacher, and so on. This also applies to the university environment, where deaf people have better access to higher academic degrees thanks to the availability of sign language interpreters. There are still limits, though; deaf university students only receive 200 hours of sign language interpretation per year, which does not cover a full year of classes. Therefore, even though the law has been in effect for more than 10 years, the fight for its full application is not yet over. There are fewer than 10 bilingual schools for deaf children in France, and one of them is about to close due to a supposed lack of need. In addition, even today, it is not unusual to hear stories about deaf employees who attend meetings without any kind of accessibility.
Finally, deaf people with high levels of education and employment are still relatively rare, although their number is on the rise.

(Hearing) French Professional Sign Language Interpreters

Before 2005, the field of LSF interpretation was composed of only hearing interpreters. The sole association of LSF interpreters in France was established in 1978. It has changed its name several times throughout the years, reflecting the evolution of its view of the profession. Initially, it was called the Association Nationale Française d’Interprètes pour Déficients Auditifs (French National Association of Interpreters for the Hearing Impaired) and was responsible for evaluating the ability of hearing interpreters to communicate with deaf persons. Many hearing people took the exam, in addition to one hard of hearing person (Bernard, Encrevé, & Jeggli, 2007). It was the very first step toward the training and professionalization of sign language interpreters. Later on, “hearing impaired” was removed from the name of the association and replaced by “sign language interpretation,” putting the focus on the linguistic competencies of the interpreters instead of conveying an image of interpreters as “helpers for disabled people.” The association chose to position itself within the field of communication instead of disability or social work. Furthermore, during the early days of the association, deaf people who were not interpreters themselves could be part of the association, but that changed in 1995.11 Since then, under the influence of the deaf community calling for professional sign language interpreters who have graduated with a master’s degree from a sign language interpreter program, only sign language interpreters with master’s degrees could join the association, which had changed its name to the French Association of Sign Language Interpreters.12 In 2009, the name changed one last time, in order to include the recently professionalized deaf translators who have graduated from the master’s level program in sign language translation. Its current name is the French Association of Sign Language Interpreters and Translators;13 however, the acronym, AFILS, remains the same.
Currently, AFILS recognizes the five master’s programs that provide academic training14 for sign language interpreters and translators in France. Any sign language interpreter or translator who has graduated from one of these programs can join the association without passing any additional test; the training and the diploma are sufficient. However, in France, there are interfaces de communication (in English: communication interfaces) who work as sign language interpreters even though they have no training as interpreters and are not fully fluent in LSF. They are cheaper than professional sign language interpreters who hold a degree, and some deaf and hearing customers tend to work with them, purposefully or not. One of the continuing challenges for AFILS is to encourage people to use the high-quality interpretation services offered by trained professionals.

Les Interprètes Sourds (Deaf Interpreters)

In France, some people who are deaf work part time as interpreters (deaf interpreters; in French: interprètes sourds). No one is known to work as a full-time interpreter, and this job is often done in addition to another main job. However, even though many of these interpreters have attended workshops on interpretation (most organized by the European Forum of Sign Language Interpreters [efsli]),15 they are not yet recognized by AFILS. AFILS has not wanted to recognize these interpreters because they have not completed a full academic training program (Jacquy, 2008). In 2011, deaf interpreters founded their own separate association, the Association Sourds Interprètes.16 They work in international conference settings using International Sign and LSF.

Deaf Intermediators

Along with the professionalization of hearing sign language interpreters and the ongoing professionalization of French deaf interpreters, deaf intermediators are also becoming professionalized. They should not be confused with deaf mediators. Deaf mediators usually work in cultural settings (e.g., museums), whereas intermediators work in health, social, or legal settings.17 The profession of intermediator emerged in France and is now inspiring many countries in Latin America and in francophone Africa. Deaf intermediators have been professionalizing since the early 2000s, in response to a need identified in health settings (Dagron, 2008; Dodier, 2013). More recently, their area of work has been extended to social and court settings. They work alongside (hearing) sign language interpreters. Some deaf persons may not understand hearing interpreters, or be understood by them, because of a lack of LSF acquisition, a foreign origin, an intellectual disability, or other conditions that may affect their language use. Therefore, the task of intermediators is to reformulate the sign language interpreters’ LSF to adapt it to the level of knowledge and cultural or intellectual comprehension of the deaf people who need it. To some extent, they give a deaf “accent” to the SL interpretation. In addition, their presence tends to reassure deaf consumers; their common background as deaf people lends a sort of guarantee to what is said. Unlike deaf sign language interpreters, who stick to linguistic coping strategies (see Napier, 2002), interpretation strategies (see Gile, 2005), and the AFILS ethical code, deaf intermediators can use any kind of strategy to communicate the message. They can use images, drawings, nonstandard or home signs, common cultural gestures understood by both hearing and deaf people, pantomime, and so on. Moreover, deaf intermediators can express their own opinion on the communication in order to advise, comment, or add information to make the communication flow better. They are impartial in that they do not give the advantage to one party or the other, but they are not neutral, unlike deaf sign language interpreters, who never express their opinion on the communication they interpret, no matter what. In many situations, (hearing) sign language interpreters like to work with deaf intermediators because they can remain in their role as linguistic professionals working with languages. They do not have to take into account the possible deficiencies occurring in the background of the deaf person.18 Therefore, according to AFILS policy, deaf intermediators cannot join the association. AFILS does not consider them to be part of the interpretation and translation field, but instead considers them cultural and social brokers. Moreover, intermediators do not share the same ethical code as interpreters and translators. In summary, to be part of AFILS, you must have graduated from a master’s program in sign language interpretation, regardless of your audiological status.

Deaf Translators

In France, training for deaf translators took place for the first time in 2005. To date, the University of Toulouse–Jean Jaurès19 is still the only institution that offers such academic training. Deaf translators are trained alongside hearing sign language interpreters. They share some classes but also have separate classes that are more specific to interpretation or translation. Sign language interpreters focus on interpreting from and into oral (face-to-face) languages, and deaf translators focus on translating written text into SL, specifically recorded SL, which can be considered one of the solutions to the question of a written form of SLs (Gache, 2005; Garcia, 2005, 2010). Deaf translators have much less training in translation from SL to written text. This training was first established in collaboration with a then-recent bilingual website (recorded LSF and written French), Websourd (closed in July 2015). The future deaf translators were part of a training-and-work program, sharing their time between the university (theory) and the company Websourd (practice). Websourd was the initiative of Jacques Sangla, a deaf activist for the recognition of LSF, who set up the bilingual information website. As part of the website, he looked for people to translate the written news into LSF. He was in contact with two well-known hearing sign language interpreters and Codas, Patrick Gache and Alain Bacci. These two interpreters were accustomed to doing translation tasks; however, they did not feel it was their place. Influenced by translation studies (e.g., Séleskovitch & Lederer, 2014), they thought every translator should translate into his or her own mother tongue. This was motivated by linguistic reasons but also, most of all, cultural reasons. Indeed, even though LSF was the mother tongue of these two Coda sign language interpreters, they were neither deaf nor culturally deaf.20 Having LSF as a mother tongue, they decided, was not sufficient. Deafness itself would add a sociolinguistic value that would define this sort of translation for deaf persons. This is the principle of “deaf-same”: “I am deaf, you are deaf, and so we are the same” (Friedner & Kusters, 2015, p. x). Therefore, they started to look for deaf people with LSF as their native, or at least their natural, language in order to train them to translate.

Deaf Interpreters in English-Speaking Literature

In the mainstream English literature, there seems to be a harmonization of what deaf interpreters are (Adam, Aro, Druetta, Dunne, & Klintberg, 2014; Boudreault, 2005; Collins & Walker, 2006; Stone, Walker, & Parsons, 2012). According to this literature, deaf interpreters are known (and trained, when training is available) to perform the following three tasks:

• Interlingual interpretation: from one SL to another SL. For example, this type of interpretation often occurs in international conference settings.
• Intralingual interpretation associated with cultural and social brokering: reformulation inside a single SL. For example, this type of interpretation often occurs in appointment settings (“liaison” interpretation).
• Translation: from a written text into SL (recorded or not); for example, conference translating from a text prompt to SL.

Deaf interpreters can also practice deaf-blind interpretation, but this can be included in interlingual or intralingual interpretation. Indeed, it can involve reformulating a discourse from LSF to tactile LSF (intralingual) or interpreting another discourse from American Sign Language (ASL) to tactile LSF (interlingual). This formalization codifies the realities of daily life and how deaf people were (are?) accustomed to acting as language brokers (Adam, Carty, & Stone, 2011; Boudreault, 2005; Forestal, 2005). This is the main path to the professionalization of deaf interpreters in the United States, Canada, and most European countries, except for France. As discussed earlier, in France, the three previously listed tasks correspond to three different professions. The first task corresponds to the role of deaf interpreters. The second corresponds to the role of intermediators. And the third task corresponds to the role of deaf translators. These deaf translators, as perceived in France, are the subject of our research. On the one hand, the international context seems to codify the reality of the terrain, whereas on the other hand, France follows the model of spoken language interpreters and translators and tries to separate the different translation activities of the broader role of deaf interpreter. Apart from business considerations (e.g., Is there enough work to be specialized?), the risk of one “super-deaf interpreter” (encompassing three translation activities) might be a lack of specialization and a corresponding decrease in expertise. Furthermore, the ethics are not the same across all three activities. One is neutral and does not modify the form of the discourse to be translated (French deaf translator or interpreter), whereas another is part of the interaction and can, or may even have to, modify and reformulate the discourse (French intermediator). In addition, although translation to and from a written form of language has a permanent aspect, interpretation from and to the oral form of languages is an ephemeral task. That point influences translation strategies (Rathmann, 2011).
However, eminent deaf interpreters advocate for a profession of deaf interpreters that is not colonized by spoken language perceptions, which necessarily means those held by hearing people. The word colonize is strong and reflects the deep divisions in the current debate in the field of deaf translation.

Case Study: The Paris Attacks, November 13, 2015

Before the professionalization of French deaf translators, hearing sign language interpreters were accustomed to carrying out this kind of translation task from written French to LSF, recorded or not (see Gache, 2005). With their professionalization, the translation work is now performed mostly by deaf translators, whereas hearing sign language interpreters focus on scenarios where speech is involved.21 However, some (hearing) sign language interpreters continue to translate into LSF, and deaf translators can take exception to that, as one of them expressed during an interview we conducted in 2016:

Why? It’s such a pity. Deaf people can do the translation, so why not let them do so? I don’t think we should be in competition with sign language interpreters, as we do not do the same work. Deaf people can do it. . . . Do hearing people have to do everything? I don’t think so.22 (Translated from LSF)

How Was the Deaf Community Updated?

In the context of this recent professionalization, we want to explore the understanding of deaf translation in relation to the events of the Paris attacks on November 13, 2015. From the moment the attacks happened, at approximately 10 p.m., many channels chose to interrupt their usual programs with the breaking news. Therefore, everyone was able to be informed about what was happening on a minute-by-minute basis; everyone, that is, except the signing deaf community. Indeed, no LSF interpretation was provided anywhere. Even the speech of the French president, François Hollande, was not interpreted into LSF, although it would have been easy to find an interpreter who was available and willing to serve. A deaf woman, Noémie Churlet,23 posted a video in LSF on her personal and private Facebook account to inform her deaf Facebook friends about the events late in the evening of that Friday, November 13. This encouraged two deaf reporters, Laurène Loctin and Pauline Stroesser, to create a public Facebook page24 on Saturday, November 14, at 12:54 a.m. They posted a video in which they briefly explained what was going on in France. Later, they posted on their page an explanation for why they decided to create the Facebook page:

“That’s it for tonight. Thanks to Noémie Churlet for posting the first video and indirectly motivating us to create this page. . . . Let’s stay united and informed! See you tomorrow.”25 (Saturday, November 14, 2015, at 1:46 a.m.)

Figure 3.1.  Facebook page entitled “The Paris Attacks, LSF live.”

The Facebook page was titled “Fusillade à Paris en direct LSF”26 (see Figure 3.1). Initially, deaf reporters posted video coverage explaining the situation in LSF. Deaf translators and hearing sign language interpreters quickly joined the team. All of them worked for free. In addition to breaking news, there was an interpretation (via hearing interpreters) and translation (via deaf translators) of relevant political texts (further discussed later in this chapter). As we focus on the role of the deaf translators involved, a fundamental question arises regarding the working conditions of deaf translators and their translation strategies: Is deaf translation a civic engagement, a professional service, or both?

Methodological Protocol

To answer this question, we built a methodological protocol in two steps. The first step was the analysis of the material from the Facebook page. We looked at the form and content of the posted translated videos. Then, we examined the comments from the audience beneath those videos. The second step of our protocol involved semistructured interviews, organized in 2016. We met the two deaf translators involved in the project to learn more about their involvement and their position and to ask questions about their translation strategies. The two deaf translators (a man and a woman) are actively involved in AFILS or the National Federation of the Deaf in France.27 They were both trained at the sole French university training deaf translators, in Toulouse. Therefore, they approach translating in the same way. They use a tool called “schematization”28 to give structure to the text and to help them translate it into a three-dimensional modality. They draw the text using geometric forms, symbols, arrows, and so on. This schematization tool helps them memorize the LSF translation of the text and supports the recording stage. Step by step, they memorize a part of the schematization (corresponding to a part of the text) and record it, so as to be maximally idiomatic in LSF and detached from the text. Once one part is recorded, they go on to the next part, and so on. After reviewing the Facebook material, we met the two translators to ask them questions related to our own observations. We wanted to know who made decisions and to understand their translation choices.

Observations from the Facebook Page

Additional Political Texts Translated

Reviewing the material on the Facebook page, we noticed that not only was breaking news translated into LSF, but other additional political texts were translated as well. For instance, one text was titled “The Different Islamic Groups?”29 (posted on November 24, 2015). Another was titled “The Situation in Syria”30 (posted on November 26, 2015). We asked the deaf translators who made the decisions regarding the translation of news stories and why, and their answer was clear: the deaf reporters made the decisions. For example, the deaf reporters decided to offer a translation of a text from Elsa Pochon entitled “The Situation in Syria” to give more general political context to the deaf community. The Paris attacks happened, and no LSF interpretation was provided in the mainstream media. How many other news items had not been interpreted over the past days, months, or years? The deaf reporters, as insiders of the deaf community, were well placed to understand this issue. They knew where context might be missing, and they naturally wanted to fill the gap. According to the interviews we conducted, the deaf reporters knew that to adequately inform the deaf community about the Paris attacks and avoid any misunderstandings or stereotypes, they needed to provide additional contextual elements in the form of other news stories that were topically related to the attacks. To accomplish this, they did not stick to the current headlines of the news, but looked for past articles to translate in order to give more context to the deaf community. To be sure, deaf translators, as insiders of the deaf community, also knew this gap existed, but they had not taken on the responsibility of choosing the material to be translated.

Figure 3.2.  The headline style for each translation video.

The Form of the Translation Video

The deaf reporters wanted to be as transparent as possible to avoid any confusion about the fact that these posts were translations. Therefore, at the beginning of each video, text appears citing the title of the article, the author of the article, and the name of the deaf translator. Each translation video has the same style of presentation. The first screen shows the headline of the video (“headline of the text, written by xxx, translated by xxx”; see Figure 3.2), and then the signed translation starts. During the video, different subheadlines (delimiting a paragraph in the written form) are written and not signed. At the end of a part of the text, the deaf translator stops, and the next subheadline appears on a full screen (see Figure 3.3). Then, the translator continues signing.

Reduced Use of Fingerspelling

Figure 3.3.  One example of a subheadline of a translation video.

Figure 3.4.  An example of a deaf translator pointing at the name Bachar El-Assad, instead of fingerspelling it.

There are two ways to name persons in a SL. You can use a sign name if the deaf community has “baptized” the person and given him or her a name sign, or you can fingerspell the person’s name. For instance, the former French president, François Hollande, has a sign name that consists of two bent index fingers positioned on the side of the mouth, referring to his mole. However, for the current president of Syria, Bashar al-Assad, no sign name existed in LSF or was known by the deaf translator when he was translating. Therefore, the translator decided to pause and point to the name of that person, which was written on the screen. He chose not to use fingerspelling (see Figure 3.4). The same strategy occurred for the names of religious groups, such as the Alawite group and the Sunni group (see Figures 3.5 and 3.6). According to the interviews conducted with the translators, the main reason for the choice to refer to the written text instead of fingerspelling

Figure 3.5.  An example of a deaf translator pointing at the name Alaouite, instead of fingerspelling it.

Figure 3.6.  An example of a deaf translator pointing at the name Sunnite, instead of fingerspelling it.

was the difficulty it was assumed deaf people would have in understanding fingerspelling through a screen, especially when the name in question was an unknown one. Rather than a pedagogical objective being the primary issue, it was more a matter of the comfort of reception of the translation by the deaf audience. Moreover, as one of the translators said during the interviews, this strategy has the added benefit of helping deaf people make a quicker link between what they might have seen written before and what they saw at the moment, because deaf people were more likely to have seen these names written before, rather than fingerspelled.

Extra Use of Images

The same options exist for naming countries. You can use a sign name to refer to a country, or you can fingerspell it. In the videos, even if the country had a sign name and the deaf translator signed it, the flag of the country, with its name written below, appeared during the signing (see Figure 3.7 for the example of Algeria: sign name and flag). Alternatively, if the translator did not know the sign name of a country, he or she fingerspelled it, and the flag of the country with its written name below appeared as well (see Figure 3.8 for the example of Mauritania: fingerspelling and flag). This choice was also part of the translation strategy, as one of the translators mentioned during the interviews. As deaf translators, they are simultaneously stakeholders (translators) and consumers (target audience) because they are part of the deaf community. In other words, they are translating for their own people, in their own country, and for themselves, because they do not have access to the news in their natural language, LSF. Therefore, they understand and address the capacities and limitations of their own community with respect to technology, a screen in this case. In this situation, as members of a visual community, the translators were aware of the need to see visual representations of the elements in the stories.

Figure 3.7.  An example of a deaf translator signing Algeria while the flag and the written name of this country appear on the screen.

Figure 3.8.  An example of a deaf translator fingerspelling Mauritania while the flag and the written name of this country appear on the screen.

Comments from Facebook Followers

We went through the comments on the posted videos and noticed that many deaf people were confusing a deaf person signing as a translator with a deaf person they knew as an insider of their own community. For example, several people commented, “You’ve made a mistake,” directly addressing the deaf translator as if the translator were signing as him- or herself, expressing his or her own views. The administrators of the page did not answer, but the targeted deaf translator took the time to respond and explain his or her role. Many commenters apologized and then understood the translator’s role. Some of the commenters did not know about the profession of deaf translator; to them, if a deaf person was signing, it had to be on his or her own behalf. Translation/interpretation is often seen as a “hearing thing” because deaf people are faced in daily life with so many hearing interpreters. Because deaf people cannot hear, the most direct way to interpret between speech and sign (which is what most interpreters are asked to do) is to hire someone who can both hear and sign. In addition, in France, it is still uncommon to see deaf people working in an intellectual profession.

Conclusion and Prospects

To conclude, we would like to open up the debate and look at the prospects for the future of deaf translation in France and abroad. There is no doubt that deaf translation is part of the future and that the French government needs to be more aware of this. To begin, we outline three main roles of deaf translation. First, we note a role in news accessibility, but not merely passing on the news: deaf translation provides accessibility in a way that seems to match the expectations of the deaf community in terms of understanding (i.e., in a comfortable, appropriate manner). Deaf translators, as insiders of the deaf community, know when there is a need to incorporate visual references or to minimize fingerspelling. Second, although deaf translators are now clearly positioned as professionals, their translation strategies, driven by their focus on translation, can have a pedagogical aspect, inspiring teachers working with deaf people. The translation provides a great example of communicating a message in a visual way, alternating between SL, fingerspelling, written text, and images. Third, deaf translation, especially in the media, tends to democratize SL, as many hearing people and other media31 relayed the Facebook page on the Paris attacks and its initiative. This makes SL and the deaf community more visible. Finally, deaf translation tends to be better known in the hearing sign language interpreters’ community than in the deaf community, as many deaf people still associate translation with hearing people. An increase in the work of deaf translators would raise awareness in the deaf community as well. More than ever, deaf translation is a sociolinguistic task that takes into account the background and visual competencies of the target audience (i.e., the deaf community). Even if deaf translators are sometimes paid for their work, voluntary solicitations still happen because the French government does not always provide for full accessibility. The full recognition of French deaf translation is still a work in progress, and despite its professionalization, it is occurring through civic engagement.

Acknowledgments

I extend gratitude to all the French deaf translators who gladly took part in this research. Your work as translators is outstanding and valuable. I express deep appreciation to Tashi Bradford, Onno Crasborn, and Stephen Santiago for their fine support.

Notes

1. See the results of the survey of the Erasmus+ project, “Developing Deaf Interpreting in Europe” (Mette Sommer, 2016), retrieved from http://deafinterpreters.eu (accessed January 2019).
2. In French: “Association Française des Interprètes en Langue des Signes” (www.afils.fr; accessed January 2019).
3. However, they were somewhat puzzled by the Signed English often used at Gallaudet University at that time.
4. In French: “Le réveil sourd.”
5. Coda stands for (hearing) child of deaf adults.
6. In fact, in 1988, a private association named SERAC established the first sign language interpreter program. Later, that association became affiliated with Paris 8 University. See Encrevé (2014) for more information about the history of sign language interpreter training in France.
7. In French: “Loi pour l’égalité des droits et des chances, la participation et la citoyenneté des personnes handicapées.”
8. In English: Association for the Management of the Fund for the Professional Insertion of Disabled Persons. AGEFIPH is the association in charge of collecting the fines of companies that do not meet the 6% quota of disabled employees. It manages these funds to fully or partially reimburse companies that have expenses related to the employment of disabled persons. For example, for deaf persons, these can be costs related to installing a light alarm or a device for remote interpretation, providing sign language interpreters for meetings, and so on.
9. To read more about the impact of the 2005 law, see the following article: “Aspects Essentiels de la Loi du 11 Février 2005, Dite Loi pour L’égalité des Droits et des Chances, la Participation et la Citoyenneté des Personnes Handicapées” (2006).
10. This benefit is the Prestation de Compensation du Handicap (PCH; in English: Disability Compensation Benefit). Its amount depends on the evaluated loss of autonomy of the disabled person. In general, for deaf people, this benefit is around 350 euros.
11. Previously, two deaf persons were presidents of this association; see http://www.afils.fr/historique/ (accessed January 2019).
12. In French: Association Française des Interprètes en Langue des Signes.
13. In French: Association Française des Interprètes et Traducteurs en Langue des Signes.
14. University of Paris 8, University of Paris 3, CETIM (Centre de Traduction, d’Interprétation et de Médiation Linguistique) in Toulouse, University of Lille 3, and University of Rouen.


15. See http://efsli.org/ (accessed January 2019).
16. The contact e-mail for information on the Association Sourds Interprètes is [email protected].
17. In France, there is academic training for intermediators but not for mediators. Mediators work in a cultural environment with the aim of providing access to culture. They organize cultural activities in LSF and adapt or create informational materials for the deaf community (e.g., videos in LSF). They interact directly with deaf people and speak on their own behalf. In France, the main employer of mediators is the Science and Industry Museum.
18. As a nuance, LSF interpreters are aware of deaf culture and trained to take it into account when they interpret. However, when a deaf person does not understand LSF, it is not their role to fill this gap; deaf intermediators take on this role.
19. Better known by the acronym CETIM, for Centre de Traduction, d'Interprétation et de Médiation Linguistique (in English: Center for Translation, Interpretation and Linguistic Mediation).
20. To read further about deaf culture and the principle of "deaf-same," see Ladd (2003) and Friedner and Kusters (2015).
21. Except for the TV news. Some consider TV news to be oral (interpretation), and some consider it written (translation). In France, only hearing sign language interpreters interpret the regular news. However, in Belgium and the United Kingdom, some deaf people perform this task (see De Meulder & Heyerick, 2013; Stone, 2005).
22. In French: "Pourquoi donc? C'est vraiment dommage. Les sourds peuvent traduire et pourquoi ne pas leur laisser ce travail? Je ne pense pas que nous devrions être en concurrence avec les interprètes en langue des signes comme nous ne faisons pas le même travail. Les sourds peuvent le faire. . . . Est-ce que les entendants doivent tout faire? Je ne pense pas."
23. Noémie Churlet is a deaf actress and the founder of a magazine dedicated to deaf culture and art, named Art'Pi. She also founded a new independent deaf media website, Media'Pi (http://www.media-pi.org; accessed January 2019).
24. See https://www.facebook.com/Fusillade-à-Paris-en-direct-LSF315769835260290/ (accessed January 2019; 8,303 followers).
25. Original version: "C'est fini pour cette nuit. Merci à Noémie Churlet pour avoir édité la première vidéo et nous avoir ainsi indirectement motivée pour créer cette page. . . . Restons soudés et informés! A demain."
26. In English: "The Paris Attacks, LSF live."
27. In French: Fédération Nationale des Sourds de France; see http://www.fnsf.org/ (accessed January 2019).


28. However, experienced deaf translators tend to use the schematization tool less often than inexperienced ones.
29. In French: Les Différents Groupes Islamistes.
30. In French: "Point sur la Syrie."
31. "Wow! An article on the Facebook page in the Huffington Post! Thanks," posted on Monday, November 16, 2015. In French: "Waouw!! Un article sur la page Facebook dans le Huffington Post! Merci."

References

Adam, R., Aro, M., Druetta, J.-C., Dunne, S., & Klintberg, J. (2014). Deaf interpreters: An introduction. In R. Adam, C. Stone, S. D. Collins, & M. Metzger (Eds.), Deaf interpreters at work: International insights (pp. 1–18). Washington, DC: Gallaudet University Press.
Adam, R., Carty, B., & Stone, C. (2011). Ghostwriting: Deaf translators within the Deaf community. Babel: Revue Internationale de la Traduction, 57(4), 375–393.
Aspects essentiels de la loi du 11 février 2005, dite loi pour l'égalité des droits et des chances, la participation et la citoyenneté des personnes handicapées. (2006). Reliance, 22(4), 81–85.
Bernard, A., Encrevé, F., & Jeggli, F. (2007). L'interprétation en langue des signes. Paris, France: Presses Universitaires de France.
Boudreault, P. (2005). Deaf interpreters. In T. Janzen (Ed.), Topics in signed language interpreting: Theory and practice (pp. 323–355). Philadelphia, PA: Benjamins.
Brück, P., & Schaumberger, E. (2014). Deaf interpreters in Europe: A glimpse into the cradle of an emerging profession. The Interpreters' Newsletter, 19, 87–107.
Cantin, A., & Cantin, Y. (2017). Dictionnaire biographique des grands sourds en France: Les silencieux de France (1450–1920). Paris, France: Archives et Culture.
Collins, J., & Walker, J. (2006). What is a deaf interpreter? In R. Locker McKee (Ed.), Proceedings of the Inaugural Conference of the World Association of Sign Language Interpreters, Worcester, South Africa, October 31–November 2, 2005 (pp. 79–89). Coleford, United Kingdom: Douglas McLean Publishing.
Dagron, J. (2008). Les silencieux. Paris, France: Presse Pluriel.
De Meulder, M., & Heyerick, I. (2013). (Deaf) interpreters on television: Challenging power and responsibility. In L. Meurant, A. Sinte, M. Van Herreveghe, & M. Vermeerbergen (Eds.), Sign language research, uses and practices: Crossing views on theoretical and applied sign language linguistics. Berlin, Germany: De Gruyter Mouton.

Dodier, C. (2013). Le rôle des professionnels sourds. In C. Quérel (Ed.), Surdité et santé mentale: Communiquer au coeur du soin (pp. 167–174). Cachan, France: Lavoisier.
Encrevé, F. (2004). L'évolution de l'interprétation en langue des signes française du milieu du XVIIIème siècle à nos jours. Surdités, 5–6, 121–135.
Encrevé, F. (2008). Réflexions sur le congrès de Milan et ses conséquences sur la langue des signes française à la fin du XIXème siècle. Le Mouvement Social, 2(223), 83–98.
Encrevé, F. (2012). Les sourds dans la société française au XIXème siècle: Idée de progrès et langue des signes. Grane, France: Créaphis.
Encrevé, F. (2014). Les spécificités historiques des formations d'interprètes LSF/français en France. Double Sens, Revue de l'AFILS, 2, 7–18.
Forestal, E. (2005). The emerging professionals: Deaf interpreters and their views and experiences on training. In M. Marschark, R. Peterson, & E. A. Winston (Eds.), Sign language interpreting and interpreter education: Directions for research and practice (pp. 235–258). New York, NY: Oxford University Press.
Friedner, M., & Kusters, A. (2015). It's a small world: International deaf spaces and encounters. Washington, DC: Gallaudet University Press.
Gache, P. (2005). La traduction français écrit–langue des signes–vidéo (Unpublished master's thesis). Lille 3 University, Lille, France.
Garcia, B. (2005). Rapport du projet «LSF: Quelles conditions pour quelles formes graphiques?». Paris, France: Ministère de la Culture et de la Communication.
Garcia, B. (2010). Sourds, surdité, langue des signes et épistémologie des sciences du langage: Problématique de la scripturisation et modélisation des bas niveaux en langue des signes française (LSF) (Unpublished higher degree research thesis). Paris 8 University, Paris, France.
Gicquel, P. (2011). Il était une fois les sourds français. Paris, France: Books on Demand Edition.
Gile, D. (2005). La traduction, la comprendre, l'apprendre. Paris, France: Presses Universitaires de France.
Jacquy, F. (2008). Interview de Sandra Recollon, présidente de Sourds Interprètes. Journal de l'AFILS, 66, 8–9.
Kerbouc'h, S. (2012). Mouvement sourd (1970–2006). Paris, France: L'Harmattan.
Ladd, P. (2003). Understanding deaf culture: In search of deafhood. Bristol, United Kingdom: Multilingual Matters.
L'Huillier, M.-T. (2014). Les représentations des sourds vis-à-vis des interprètes hier et aujourd'hui. Double Sens, Revue de l'AFILS, 2, 61–72.
Mette Sommer, L. (Ed.). (2016). Deaf interpreters in Europe: A comprehensive European survey of the situation of deaf interpreters today. Copenhagen, Denmark: Gobierno de Dinamarca.


Mindess, A. (2016). Deaf interpreters in Denmark and Finland: An illuminating contrast. Street Leverage. Retrieved from http://www.streetleverage.com/2016/01/deaf-interpreters-in-denmark-and-finland-an-illuminating-contrast
Napier, J. (2002). Sign language interpreting: Linguistic coping strategies. Coleford, United Kingdom: Forest Books.
Quipourt, C., & Gache, P. (2003). Interpréter en langue des signes: Un acte militant? Langue Française, 137, 105–113.
Rathmann, C. (2011, September 17–18). From text into sign in different discourse modes: Information reception, processing and production. Paper presented at the efsli conference, Vietri sul Mare, Italy.
Séleskovitch, D., & Lederer, M. (2014). Interpréter pour traduire (5th ed.). Paris, France: Les Belles Lettres.
Stone, C. (2005). Deaf translators on television: Reconstructing the notion of interpreter. In N. Meer, S. Weaver, J. Friel, & K. Lister (Eds.), Connections 4 (pp. 65–79). Bristol, United Kingdom: Bristol University.
Stone, C., Walker, J., & Parsons, P. (2012). Professional recognition for deaf interpreters in the UK. In J. Dickinson & C. Stone (Eds.), Developing the interpreter, developing the profession: Proceedings of the ASLI conference 2010 (pp. 55–63). Coleford, United Kingdom: Douglas McLean Publishing.


Chapter 4

Use of Haptic Signals in Interaction With Deaf-Blind Persons

Eli Raanes

A system of haptic signals, based on conventionally understood motions, has been developed in the Scandinavian countries since the late 1990s as a tactile approach to communication. Today, haptic signals have become part of the communicative repertoire in deaf-blind communities. Haptic signals are produced on a deaf-blind person's body to provide contextualizing information about the environment where the interaction is taking place. The signals also convey information about interlocutors' and other participants' actions and nonverbal expressions. Basic signals include those for turn taking, emotional expressions, and minimal response. Such information is essential for maintaining and taking part in dialogues and interactions (Linell, 2009). A vital function of haptic signals in an interpreter-mediated setting is to make concurrent information available to deaf-blind persons in order to frame and empower their interaction (Lahtinen, Palmer, & Lahtinen, 2010; Raanes & Berge, 2017).

In this study, we focus on the use of haptic signals in interpreter-mediated meetings. The analyzed material provides insight into how haptic signals support the interlocutors' interaction and how the interpreters' use of such signals facilitates the ongoing communication. The system's background will be discussed, along with certain principles underlying the signals' form and function. Referring to concrete examples, our research questions address how haptic signals are used in interpreted meetings, how the signals are organized, and how interactional space is reconfigured through embodied haptic signals. The results of this study demonstrate in particular how interpreters deliberately use different kinds of haptic signals to alternate between mediating spoken utterances and describing the context. Our findings indicate that interpreters' actions are based on a context-specific, moment-by-moment evaluation of the participant framework in which all the participants, including the interpreters, operate.

Interpreting for Deaf-Blind People

Dual sensory loss affects the way people obtain information about what is being said and done around them. Interpreters for deaf-blind people provide access both to spoken/signed utterances in a communicative setting and to information about the surroundings and what is going on (Berge & Raanes, 2013). The World Association of Sign Language Interpreters defines interpreting for deaf-blind people as "the provision via an intermediary of visual and/or auditory information, which occurs through offering three, fully integrated elements: the interpreting of spoken or signed language, environmental description, and physical guiding" (2013, p. 2). Haptic signals are today incorporated as techniques in all three of these elements of interpreting for deaf-blind people. Haptic signals are taught in several interpreter courses on interpreting for deaf-blind persons and in rehabilitation programs for deaf-blind people (Berge & Raanes, 2013; Erlenkamp et al., 2011).

Access to Environmental Description

Vision enables people to acquire a vast amount of information. When sighted people enter a room, a single gaze provides a great deal of information about the environment and the people there. For those trying to describe this overwhelming input of information, the question is where to start and how to form an environmental description. Interpreting for deaf-blind people requires knowledge of techniques for doing so, based on an understanding of what is essential for the actual situation and the purposes of the attending participants. It is crucially important to know when to provide an overview of the situation and when more extended detail is required.

The task of conveying environmental information differs from that of interpreting spoken or signed utterances. When interpreting a spoken/signed text, the interpreter works with what other people are communicating. Giving access to the environment entails that the interpreter is the one choosing what information to convey and how to convey it. This demands the interpreters' involvement in the situation: formulating their own statements, finding the right terms, and organizing the information—a demanding task based on their professional skills and a reflection on the choices at hand. A restrictive factor when describing the environment may be the limited time available for producing and expressing the needed information, whether an interpretation or an environmental description. Taking part in interaction requires a good deal of knowledge about others' responses and what is going on in the communicative setting: without access to who is talking to whom, it is hard to know how an interpreted utterance may be understood and responded to; without knowing how others react to your own message, it is hard to continue to take part in a discussion. Environmental descriptions are therefore essential for deaf-blind people's participation in interaction with others.

Haptic Signals

Deaf-blind people have been involved in developing haptic signals as a method for environmental description (Lahtinen, Palmer, & Ojalac, 2012; Næss, 2006; Nielsen, 2012), and deaf-blind instructors have been central in expanding and developing the process both in the Nordic countries and elsewhere. In the United States, the focus on tactile sign language and environmental description for deaf-blind people has been called "the tactile movement" (Edwards, 2014), a general approach to helping deaf-blind people become involved and independent in their physical orientation toward activities. The specific method of using haptic signals has been developed since the late 1990s.

Communication through touch is a natural way to interact with deaf-blind persons, and the sense of touch is the foundation of national tactile sign languages as well as of systems of haptic signals (Mesch, 2000; Mesch, Raanes, & Ferrara, 2015; Raanes, 2006). Techniques of communicating through tactile information, similar to the use of haptic signals, have been part of deaf-blind communication for as long as there has been a tradition of meeting the deaf-blind communities' needs. For instance, in 1864, an early Nordic journal for the deaf and mute community published a series that included instructions on how to communicate with the deaf-blind, using, for example, tactile signs for "yes" and "no" in the same manner as described in today's handbooks of haptic signals (Bjørge, Rehder, & Øverås, 2015, pp. 132–133; Keller, 1864, pp. 61–67). Today, the use of haptic signals as part of the communication repertoire of deaf-blind people is increasing internationally, with the deaf-blind community taking an active part in this process.

An Interactional Approach to Communication

Dialogical approaches to interpreting studies focus on the activities that take place between interlocutors. Studying language in a communicative setting involves more than the language being used; it also includes the organization and structure of communicative activities. Since the main goal of haptic signals is to enhance the interaction between interlocutors, interaction analyses are included in this study in order to reveal structures in sequences of what is said and done in communicative processes (Berge & Thomassen, 2016; Linell, 2009).

Method

The basic techniques used in ethnographic videography are useful for following the sequence of a given interaction (Knoblauch, Schnettler, & Raab, 2006). Material from two case studies has been used for this study, both culled from authentic, formal interactions in organizational or educational work and both in the form of video recordings, along with transcripts of interviews with deaf-blind participants and interpreters in the filmed situations. Dataset A is an educational situation in which a deaf-blind instructor teaches students in a university class on topics related to communication for deaf-blind persons. In this situation, haptic signals are performed by the instructor's interpreter, both when demonstrating the system and as a technique for conveying environmental information during the assignment. Dataset B is an interpreter-mediated meeting in an organization for deaf-blind people, where five deaf-blind board members are gathered and their discussions require interpreters. Several interpreters take part in the situation, using a variety of communication methods, with one interpreter for every deaf-blind person and with two interpreters for those participants who prefer tactile sign language, in order to maintain the translation process.

Our data describe and illustrate sequences of interaction in order to demonstrate step by step the organization of actions between the interlocutors during the interpreter-mediated conversation. In the examples, the original conversations, conducted in Norwegian and Norwegian Sign Language, have been translated into English. The transcripts focus on the haptic signals, following both the production of the signals and how the signals refer to an environmental or interactional response or reaction.

Analyzed Functions of Signals

In our data from a formal meeting, haptic signals were related to four basic communication functions: (1) description of the environment, (2) description of other persons and their actions, (3) strategies for establishing attention and a common arena, and (4) strategies for mediating feedback signals.

Description of the Environment

Knowing who is present is one of the basic elements of access to communication. Information about those present and about who is addressing whom is essential in order to be able to interact with others. In datasets A and B, there are repeated examples of how the interpreters use various haptic signals to provide such information, both at the beginning of and during a meeting. A common technique is to use the deaf-blind person's back to illustrate the room ahead of the person. If the interpreter sketches a square on the deaf-blind person's back, this may be a representation of a room. In Figure 4.1 (from dataset A), we see how the interpreter uses both of her hands and a steady movement to present the form of the room, starting by drawing the lines in the sketch on the upper part of the back of the deaf-blind woman, Lea, and ending at the lower part. When this signal is given, the interpreter continues by using two fingers to make a mark (at the dot in Figure 4.1) indicating where Lea is located in the room.

Figure 4.1. Sketching the room and Lea's own position.

Sketching the room and clarifying Lea's place in it provide an important starting point for further orientation. In our two datasets, interpreters frequently referred to other people and their actions by pointing on the deaf-blind person's back and by using the sides of a sketched room to refer to others' locations. This is an effective tool for referring to space and locating what is going on in the deaf-blind person's environment. Figure 4.1 demonstrates such an important starting point for representing information about who is present and for showing the direction of where others are in relation to the deaf-blind person. In most situations, the signal for the space as a room indicates the room in front of the person, and the standard position for the signal marking the deaf-blind person's own position is at the lower middle of the person's back. Once this basic information is established, it may be developed further and elaborated on through additional haptic signals, including additional spoken and signed utterances.

In dataset B, a meeting takes place with a group of five board members seated at a table. The interpreters use a square sketch similar to that in Figure 4.1 to illustrate the table at which the board is seated. The board chair is seated at the end of the table, so from her position, she has the whole table in front of her. Her position and that of the others around the table are made clear by her interpreter making a signal similar to that in Figure 4.1, with the board chair's position marked on the middle and lower parts of her back. In this example, the haptic signal—the square form on the deaf-blind person's back—refers not to the room but to the table. This demonstrates that the same haptic signals are contingent upon the situation and may refer to various contextualized meanings. If introduced with additional information, a quadrangle may even represent a small, square-sized instrument or the orientation in a braille cell, and further haptic signals may provide additional information about its format.

Description of Other Persons and Their Actions

In dialogues, our communication is influenced by the responses of others and our interaction with them (Linell, 1998, 2009). This is a natural part of all communication and will here be demonstrated with an example from the material in dataset A. When one of the students steps forward to introduce herself to the deaf-blind instructor, access to this action is provided as illustrated in Figure 4.2. When the student steps forward, the interpreter describes this action with a haptic signal. The signal has the meaning person walking, and it traces a path on the deaf-blind instructor's back. The signal provides additional information concerning the direction in which the student is walking—from a space in the back of the room coming toward you at your left side. The haptic signal traces a path from high up on the shoulder and back toward the set position of the instructor's location (you—located on the back's lower/middle part). When the student arrives in front of the instructor, an additional haptic signal is given—in this direction [to your left]. The instructor, knowing that the student is addressing her, orients her body in the direction of the student and raises her hand, ready to reach the student's outstretched hand. Although the instructor is blind, there is no delay or hesitation in this start of their interaction and in their greeting. The sequence of interaction is performed in a moment-by-moment awareness of each other in this interpreter-mediated sequence. We see that the interpreter herself is also smiling, keeping her eyes on what is performed in front of her (Figure 4.2).
The haptic signal for walking is indicated by the interpreter's two fingers walking step by step, outlining a path along the deaf-blind person's shoulder and back. The deaf-blind person's back serves as the articulation space for this signal: the interpreter's two fingers "walking" are performed as the articulation of an iconic sign for walk, a familiar movement of hands and fingers in various signs and gestures (Taub, 2001, p. 22). Several of the haptic signals for action resemble signs found in the Deaf communities. The signals are adjusted to the context, as done here, demonstrating the direction the student is walking toward the instructor. Building on elements from a similar use of sign or gesture showing direction, the haptic signal direction (Figure 4.2b) is indicated on the deaf-blind person's back. The interpreter uses her hand in a specific articulation, and some elements from the sign/gesture meaning pointing out the direction may be seen in the haptic signal's orientation, hand form, and movement.

Fig. 4.2a. h*walking: The signal traces a path referring to another's movement: "From the back of the room, someone is coming toward you on your left side." A student steps forward to greet Lea, who is made aware of her approach.

Fig. 4.2b. h*direction: The signal refers to the direction of the entering student, viz. the left side/in front of you. Lea orients her body slightly forward/to the left. She raises her hand toward the student and greets her by shaking hands.

Figure 4.2. Someone is approaching from the left. Note: h* = haptic signal.

Strategies for Establishing Attention and a Common Arena

Deaf-blind persons may have difficulties in taking turns in interaction due to reduced hearing and sight. A critical moment in the interaction concerns how to get access to the conversation when you want to comment in the ongoing talk. One way of giving notice that you want a turn in a formal meeting is by raising your hand. Giving this signal does not mean that the floor is yours immediately; rather, it is when your turn is recognized by the others that you may be given the floor. In both datasets, we have examples of how this process is supported in a formal discussion by the interpreters using haptic signals. This is a process of interaction that involves several steps of negotiation in which haptic signals are brought in. In dataset A, we see that the first step in this process starts when Lea (the deaf-blind woman) raises her hand to express that she wants to contribute to the discussion. Her task here is to get the moderator's attention with her request for a turn. Her interpreter has previously presented the placement of the chairperson in the meeting, so Lea orients her face and body slightly in the direction of the moderator. A reaction from that direction comes almost immediately, and the interpreter gives a haptic signal for minimal response (a light tap on Lea's right shoulder), before the next step in this interaction follows immediately.

In Figure 4.3, we see the haptic signals in the first two pictures (Figures 4.3a and 4.3b). The signals convey the information that the moderator is looking in Lea's direction and thus recognizes her turn-taking request. The haptic signals may be translated as looking toward you—made with two fingers (indicating the gaze in your direction) drawn toward the position marked on Lea's back representing your position. Figure 4.3 shows the starting and end points of the haptic signal gazing in your direction/looking toward you (Figures 4.3a and 4.3b). The haptic signal is produced almost simultaneously with the moderator's indication that she has recognized Lea, so that there is almost no time delay in this interpretation process. Knowing that the moderator has seen her request, Lea has no need to keep her hand up, so she lowers it. The person who currently has the turn continues and ends his turn. Lea keeps following the ongoing discussion until the moderator turns her attention toward her and announces that it is Lea's turn. The interpreter immediately makes haptic signals to indicate this, with her right hand producing a haptic signal for response (distinct taps slightly to the right on Lea's shoulder, indicating the placement and the reactions of the moderator). The meaning of this response in this context is that "Yes, the turn is yours" (Figure 4.3c). Lea knows

Fig. 4.3a.
Interpreter's action: Right hand: h*looking in your direction. Left hand: h*your position.
Lea: Holds her hand up.
Meaning/Interaction: "The moderator is looking in my direction" [start of the signal].

Fig. 4.3b.
Interpreter's action: Right hand: h*looking in your direction [the movement of the right hand is drawn in a trajectory toward the left hand]. Left hand: h*you.
Lea: Starts lowering her hand.
Meaning/Interaction: "The moderator is gazing in my direction" [end of signal]. "I was recognized by the moderator." (The other person ends their turn.)

Fig. 4.3c.
Interpreter's action: Right hand: h*you/minimal response [the signal consists of three rapid taps on the right side of Lea's back/shoulder].
Lea: Lea starts her turn.
Meaning/Interaction: Response from the direction of the moderator.

Figure 4.3. Moderator sees you.

she has the moderator's and the participants' attention, so she orients her body slightly in the direction of the moderator and starts her turn.

The haptic signals are closely linked to the ongoing interaction. In this sequence, we see that the use of haptic signals is contingent upon and modified by the interaction, in order to present who is addressing whom and how they are addressing them. The interpreter provides a few but important signals referring to the ongoing interaction, with the description taking place nearly concurrently with the unfolding action. The signals provide key information without taking the focus away from (or conflicting with) access to the ongoing discussion.

The collaboration between Lea and her interpreter is firmly established. In her interview, the interpreter said that if she felt Lea hesitate or turn her head slightly toward her, she could repeat the signal or make a clarification. They were both of the opinion that this established collaboration made a difference. The interpreter explained that if, for example, in the context of a request for turn taking, a feedback signal for yes/your turn (as in Figure 4.3c) was not unambiguously understood as meaning your turn, she could add the haptic signal the moderator is looking in your direction (as in Figures 4.3a and 4.3b) and again repeat the feedback signal on the deaf-blind person's shoulder (as in Figure 4.3c), while also adding a description in the spoken/signed interpretation. The haptic signals may thus work by themselves or in connection with other communication techniques.

In her interview, Lea commented on how information about minimal response and environmental description of the activity empowered her to participate and made her feel more strongly that she was an active partner in the discussions.
As someone who was trained to use the signals, she felt they were easy to perceive and helped her participate with less effort. Lea referred to how the consequences of dual sensory loss were demanding and required more energy to take part in discussions. When given information through haptic signals, she felt enabled to focus more on the discussion, knowing she had information and was able to react more effectively in interaction with others. Lea maintained that "haptic communication makes me feel like a fellow human being when communicating with others," because it enabled her to react to other people's actions and responses. Apprehending whether the person in front of her was happy or sad allowed her to be a person who could support and console others. Lea also commented that collaboration with her interpreters was crucial, because sometimes signals had to be repeated or framed with more information to run smoothly. In situations where she was in charge of a formal meeting, haptic signals were instrumental in enabling her to organize the meeting and steer the discussion. These signals helped her ascertain that all the vital information about the others was provided to her regarding the ongoing action: signals for turn taking, minimal response, and emotional expressions.

Strategies for Mediating Feedback Signals

Responses in a meeting may be related to your own utterances or to the input of others in the discussion. In dataset A, three different signals refer to a situation where the deaf-blind instructor commented on something to a student in the group. The student was standing right in front of the instructor and responded by nodding and smiling at what the instructor was demonstrating. The interpreter's haptic signals followed immediately, mirroring the student's nodding and response. The first response signals were produced with small taps on the deaf-blind instructor's shoulder (Figure 4.4).

Haptic response signals are by far the most frequent signals in both datasets. Sometimes tapping signals such as those seen in Figure 4.4 express minimal response; other times, they express a clearer answer or yes. The signals vary in intensity (small taps, more intense responses, slow tapping, or rapid tapping) and duration (one signal or a series of several small taps), and response signals given at various positions on the back represent the locations of different participants. Moreover, the response signals are sometimes produced on the deaf-blind person's arm, hand, or knee if this is more convenient (e.g., if the interpreter and the deaf-blind person are sitting beside each other). The response signal changes in relation to what happens in the ongoing interaction between the deaf-blind instructor and the students. The interpreter keeps her focus on the involvement between those interacting and, on a moment-to-moment basis, contributes haptic signals carried out with a minimum of processing time. Signals mediated nearly simultaneously make it easier for the deaf-blind person to take part in the interaction. The next two examples demonstrate how the student's responses of smiling and laughter are interpreted with haptic signals (Figures 4.5 and 4.6).
The response signal changes in relation to what happens in the ongoing interaction between the deaf-blind instructor and the students. The interpreter keeps her focus on this interaction and, on a moment-to-moment basis, contributes haptic signals with a minimum of processing time. Signals mediated nearly simultaneously make it easier for the deaf-blind person to take part in the interaction. The next two examples demonstrate how the student’s responses of smiling and laughter are interpreted with haptic signals (Figures 4.5 and 4.6).

68 : Eli Raanes

Figure 4.4.  Haptic signal yes.

Figure 4.5.  Haptic signal laughter.

While the student is laughing, the interpreter makes a haptic signal in which the fingers of one hand are “playing,” that is, making small, rapid touches on the deaf-blind instructor’s back. If others in the group had joined in and shared the laughter, the interpreter would have used both hands to produce signals of “laughter all over.” Haptic signals provide information about emotional response and about the action as it happens. For the instructor, this is a key response in building her understanding of the meeting’s atmosphere and of the group’s mood. Minimal response signals may change quickly. A third signal illustrates the adjusted response when the student’s laughter ends with a smile. The haptic signal smile is made by drawing a line formed as a smile on the deaf-blind person’s back, as illustrated in Figure 4.6. As seen in the figure, the interpreter provides the haptic signal to “play along” with the situation, and the smiling response from the student is indicated by an environmental description given by a haptic signal, even as the interpreter, facing the student, also smiles. In doing so, she takes the same position as when two parties meet. Wadensjö (1998) and Llewellyn-Jones and Lee (2013, p. 58) both contend that such involvement—that is, being aware of and responding to the emotional expressions of others—is vital and enhances the ongoing interaction process.

In dataset B, a group of deaf-blind persons were in charge of a formal meeting using an interpreter service. Throughout the entire session, we find examples of haptic signals supporting the four basic communicative and interactional functions (i.e., description of the environment, description of other persons and their actions, strategies for establishing attention and a common arena, and strategies for mediating feedback signals). However, the degree to which the individual deaf-blind participants preferred haptic signals varied, from those wanting their interpreters to use them a good deal to those who wanted only minimal environmental information provided by touch. In her interview, the board chair noted how the haptic signals helped her gain a thorough overview and a feeling of having control over what was going on.

Figure 4.6.  Haptic signal smile.

Steering the discussions according to the meeting’s schedule, she actively built on information from haptic signals to include the entire group in the decision-making process. At the beginning of the meeting in dataset B, the chair was given information about the positions of the others around the table (Figure 4.7; similar to the signal in Figure 4.1), with the positions pointed out along the sides of the squared format. Later in the meeting, this information provided a base for further facilitating the interpretation process. Figure 4.7 shows the proximity between the board chair and her interpreter, who here combines clearly spoken interpretation into the microphone with the haptic signal for direction, referring to the participant sitting ahead of her to the right who has a comment (similar to Figure 4.2b). The haptic signal is made in combination with a clearly spoken interpretation directed toward the board chair’s hearing aid and the microphone’s wire loop. This and other haptic signals refer to the location of the activities around the table, giving the chair access to all reactions and actions by the board members during the meeting. Activity at the table is constantly oriented using the initially signaled “map of the persons around the table,” along with further information about what the other board members are doing. All this refers to the established space, where all the actions carried out in front of the board chair are described with signals articulated on her back.

Figure 4.7.  Proximity when doing the haptic signal.

Discussion

This chapter discusses only a few examples of haptic signals. In handbooks produced for deaf-blind communities that focus on environmental information, nearly a hundred signals are introduced as a basic repertoire (Lahtinen, 2008; Næss, 2006; Nielsen, 2012). Our two datasets from formal discussions reveal that the most frequent haptic signals are feedback and response signals closely related to who is expressing something to whom and with what response. The signals support access to the conversation between the deaf-blind person and the others involved in the interaction. The haptic signals may be produced as a single signal or as a series of signals organized to closely follow a sequence of interactional actions. Our data are in accordance with the results of Skåren’s (2011) study on haptic signals. She found that deaf-blind users reported benefitting from the signals’ potential to support their understanding of their environments, helping them to orient themselves and take part on a more equal footing. The signals were effective and gave discreet access to the information needed to maintain social competence in interactional settings. A haptic signal may be produced in its standard version or adjusted to contextual usage. There may be variations and adjustments due to the context and personal preferences. An example of this is seen in Figure 4.1, where the interpreter draws the lines in the signal for room with two fingers: the standard signal is formed with just one finger, but the deaf-blind instructor in this dataset found it easier to perceive signals made with a broader mark. The context and the information provided in the interpreter-mediated situation serve as the basis for meaning construction.
A square shape, as in Figure 4.1, may be used to establish information when referring to several concepts that share a similar shape: the signal refers to a room in dataset A and to a table in dataset B, and it may in other settings be the starting point for describing patterns in a painting, a technical instrument, and so forth. If additional contextual information is provided, the entities the signal refers to may range in size from the gigantic to the minute, such as a football stadium or a part of a detailed instrument being handled. Haptic signals adapt to meaning-making in a contextualized process, functioning in the same way as other, arbitrary communicative signals that refer to entities in our surroundings and everyday lives. The meaning of a signal is often adjusted with additional inputs provided by spoken or signed language. Introducing a haptic signal such as room establishes an understanding of an entity to which further information may then be added, for example, where the others are located. Once the location information has been presented, it is not necessary to repeat the signal room when using the reference. The interpreter may continue using the set format to provide information about the actions of the others simply by using pointing signals (i.e., on the deaf-blind person’s back) that present the location of the others’ actions at different sides of the established referential space of “the room.”

The overview of space built up by haptic signals may be useful and is strongly linked to cognitive orientation and memory. The participants in dataset B were interviewed up to six months after their meeting, which yielded some interesting results. The informants recounted some events from the prior meeting. When describing the meeting, the participants used body movement and finger pointing in various directions to refer to the attending persons and actions. In doing so, they indirectly demonstrated their memory of the space and of the locations of the participants. They were additionally asked about their actual placement during the meeting, and despite their deaf-blindness, they all had a clear concept of the space and of where activities had taken place. References to others’ locations and to what had been going on in the various parts of the communicative event were an integral part of their accounts, related to a memorized and established map of the situation provided by environmental descriptions and haptic signals. This established memory indicates the importance of linking information about space to the interpretation of utterances. Haptic communication may be an effective tool for establishing information about space.
Several examples in the analyses show how closely linked the haptic signals are to the ongoing interaction. In dataset A, the interpreter presented information based on how she understood Lea’s intentions and actions, thus providing access to relevant actions that were important to Lea. During the meeting, the interpreter was attentive and alert to what was going on. Many of the haptic signals were presented with a minimum of processing time, making it possible for Lea to adjust her interaction and response to the group of which she was a part. The signals enabled Lea to focus on her own and others’ contributions to the discussions. In the example of raising a hand to request one’s turn in formal meetings, the overall effect of the haptic signals functions as a vital input that helps the meeting’s communicative flow run smoothly for all. Lea’s request was recognized and nearly simultaneously communicated to her. During Lea’s turn, the interpreter continued to add haptic signals to make Lea aware of the others’ responses and of what was happening in her surroundings. This empowered her to participate in and influence the discussion.

The example where a student stepped forward to introduce herself to Lea offers several aspects for discussion. The haptic signals in Figure 4.2 were given as the interpreter informed the instructor that she and her student were about to establish direct contact. The interpreter herself “played along” with the situation, not only by providing an environmental description of the student approaching, but also by facing the student and smiling. The interpreter’s smile may be seen as simultaneous feedback and part of managing the interaction in a way that does not disturb the process of interaction and communication. This builds on dialogical approaches to interpretation (Wadensjö, 1998) and also on an understanding of the role-space in which interpreters operate (Llewellyn-Jones & Lee, 2013, p. 58). The haptic signal for walking toward you, as depicted in Figure 4.2, conveys several layers of simultaneous information, where the meaning may be summed up as “walking toward you, coming from the back of the room and addressing you from the left.” The form of the haptic signal may be understood as building on the linguistic potential in the nature of human languages. The signal creates a meaning-making process that develops through gestural inputs and builds on their metaphorical use. References to space, distance, and movement and the use of a handshape to illustrate “a person stepping forward” were used metaphorically during the sequence of signals that traced a path on Lea’s back.
The signal provides a complex set of information, presented simultaneously during the interaction. Seen through the lens of Goffman’s (1971) theory of interaction, the haptic signals may be regarded as useful tools for addressing ongoing frontstage activity in formal meetings. In our data, haptic signals provided essential information about the other participants, whose responses were brought to the deaf-blind person’s attention. The two datasets also demonstrate another function of haptic signals, namely as brief sequences of exchange between deaf-blind persons and their interpreter (feedback in a backstage position) when clarification is needed or when the interpreted information needs to be negotiated without involving the other interlocutors. The haptic signals may be seen as discreet and effective. To ask for clarification, the deaf-blind person turns toward the interpreter, hesitates, or uses a sign or his or her voice to express the need for something to be repeated or clarified. Goffman’s terms backstage and frontstage capture our findings and demonstrate how the signals accompanying the ongoing interaction during formal meetings may be understood in different ways (Berge & Raanes, 2013; Goffman, 1971). All the response signals, whether backstage or frontstage, are instrumental in facilitating the ongoing activity, providing participants with access to critical information, and helping the whole group to carry out their tasks and follow the group’s agenda (see Raanes & Berge, 2017). In addition, a previous analysis of dataset B showed how backstage sequences between deaf-blind participants and their interpreters serve to provide clarification when needed (Berge & Raanes, 2013, pp. 361–364).

The haptic signals involve certain elements from iconic gestures and may draw on schemas similar to those involved in gestural communication and signing. Some established structures from signs and gestures appear in several of the haptic signals, as seen in their articulation. This may be why users of haptic signals find them easy to understand and use, even when the signals are produced simultaneously with spoken or signed communication. Although they use an alternative modality (touch and tactile information), the haptic signals are based on schemas of concepts that make the signals useful as a language input. Cognitive grammar offers models for such an understanding of the signals, in accordance with Taylor’s (2002) definition of language:

A language . . . is understood as a set of resources that are available to language users for the symbolization of thought, and for the communication of these symbolizations. Acquiring a language consists in building up the repertoire of resources, through actual encounters with usage events. Using a language consists in selectively activating these resources, in accordance with the task in hand. (p. 30)

Language and cognition are closely connected, and expressing and understanding inputs from multimodal approaches fits well into this broader definition of language use. Even though haptic signals represent a communicative tool that should not be understood as an independent language per se, they help in building up the repertoire of resources activated when using language. They are produced on the deaf-blind person’s back, where the interpreter uses his or her hand to make a specific articulation with a given orientation, hand form, and movement, and where some elements from a sign or gesture for the meaning (as when pointing out a direction) may be included in the signal. Several of the signals presented in this chapter partly use structures from signed language, such as the sign for walking toward you in Figure 4.2 (made with two fingers tracing a path on the deaf-blind person’s back) and look in Figure 4.3. The construction of the signals exploits the potential to create and invoke meaning by using hand forms and movements similar to those of signed language. This close connection to visual signs and gestural, multimodal expressions makes many of the haptic signals easily adoptable and comprehensible in interaction. Nevertheless, the signals do undergo a reshaping and transformation owing to a modality based on touch rather than sight.

Conclusion

Tactile signals have been used as an essential technique for as long as we have known of deaf-blind communication. Today, this technique has developed and been conventionalized in systems of haptic signals that are used in deaf-blind communities all over the world. Through the work of national agencies, such as organizations and associations for the deaf-blind, this way of connecting to environmental activities and responses is taught both to deaf-blind persons and to the interpreters and other professionals who assist them. Tailored to each individual user, the signals have the potential to be useful both to individuals who use a sign language and to those who use a spoken language as their primary communication method. Our study demonstrates how some haptic signals may be used in interpreting, how they may be organized, and how interactional space is reconfigured through embodied haptic signals. The interpreters alternate their actions between mediating spoken utterances, describing the meeting context, and producing haptic signals. Team interpretation makes it possible to provide tactile sign language interpretation (or interpreting in a limited signing space) in tandem with an interpreter providing haptic signals. If the deaf-blind person has some (although severely reduced) hearing, interpreters may add haptic signals to augment the information provided by microphones and sound amplification. Conversely, some deaf-blind persons find the technique disturbing and prefer that haptic signals not be used. With haptic signals, information may be given simultaneously and effectively to ensure communicative flow in a given situation. Although haptic signals are not part of visual sign languages, the examples presented in this chapter show how the concept of this tactile system may build on basic elements influenced by visual sign language and on an awareness of interaction as a bodily presence in relation to other objects in the physical world. Our findings indicate that the interpreter’s actions are based on a situated, moment-by-moment evaluation of the participation framework in which all the participants, both the interpreters and the deaf-blind persons, operate. As a tactile, multimodal tool, haptic signals provide deaf-blind people with access to their environment and offer certain advantages for communication and interpreter services within the deaf-blind community.

References

Berge, S. S., & Raanes, E. (2013). Coordinating the chain of utterances: An analysis of communicative flow and turn taking in an interpreted group dialogue for deaf-blind persons. Sign Language Studies, 13(3), 350–371.

Berge, S., & Thomassen, G. (2016). Visual access in interpreter-mediated learning situations for deaf and hard-of-hearing high-school students where an artifact is in use. Journal of Deaf Studies and Deaf Education, 21(2), 187–199.

Bjørge, H. K., Rehder, K. G., & Øverås, M. (2015). Haptic communication. New York, NY: Helen Keller National Center/Hapti-Co.

Edwards, T. (2014). From compensation to integration: Effects of the pro-tactile movement in the sublexical structure of tactile American Sign Language. Journal of Pragmatics, 69, 22–41.

Erlenkamp, S., Amundsen, G., Berge, S. S., Grande, T., Mjøen, O. M., & Raanes, E. (2011). Becoming the ears, eyes, voice, and hands of someone else: Educating generalist interpreters in a three-year programme. In L. Leeson, S. Wurm, & M. Vermeerbergen (Eds.), Signed language interpreting: Preparation, practice, and performance (pp. 12–36). Manchester, United Kingdom: St. Jerome.

Goffman, E. (1971). The presentation of self in everyday life. Harmondsworth, United Kingdom: Penguin.

Keller, J. (1864). Fortellinger om de blinde døvstumme [Stories about the blind deaf-mute]. Nordiske Blade for Døvstumme, 6(2), 61–62.

Knoblauch, H., Schnettler, B., & Raab, J. (2006). Video-analysis: Methodological aspects of interpretative audiovisual analysis in social research. In H. Knoblauch, B. Schnettler, J. Raab, & H.-G. Soeffner (Eds.), Video analysis: Methodology and methods: Qualitative audiovisual data analysis in sociology (pp. 9–28). Frankfurt, Germany: Peter Lang.

Lahtinen, R. (2008). Haptices and haptemes: A case study of developmental process in social-haptic communication of acquired deafblind people. University of Helsinki. A1 Management, Essex.

Lahtinen, R., Palmer, R., & Lahtinen, M. (2010). Environmental description: For visually and dual sensory impaired people. Frinton-on-Sea, United Kingdom: A1 Management UK.

Lahtinen, R., Palmer, R., & Ojala, S. (2012). Visual art experiences through touch using haptices. Procedia: Social and Behavioral Sciences, 45, 268–276.

Linell, P. (1998). IMPACT: Studies in language and society: Vol. 3. Approaching dialogue: Talk, interaction and contexts in dialogical perspective. Amsterdam, the Netherlands: John Benjamins.

Linell, P. (2009). Rethinking language, mind and world dialogically: Interactional and contextual theories of human sense-making. Charlotte, NC: Information Age.

Llewellyn-Jones, P., & Lee, G. R. (2013). Getting to the core of role: Defining interpreters’ role-space. International Journal of Interpreter Education, 5(2), 54–72.

Mesch, J. (2000). Tactile Swedish Sign Language: Turn taking in conversations of people who are deaf and blind. In M. Metzger (Ed.), Bilingualism and identity in deaf communities (pp. 187–203). Washington, DC: Gallaudet University Press.

Mesch, J., Raanes, E., & Ferrara, L. (2015). Co-forming real space blends in tactile signed language dialogues. Cognitive Linguistics, 26(2), 261–287.

Næss, T. (2006). Å fange omgivelsene: Kontekstuell tilnærming ved ervervet døvblindhet [Capturing the surroundings: Contextual approaches in the event of acquired deafblindness]. Heggedal, Norway: CoCo Haptisk.
Nielsen, G. (2012). 103 haptic signals: A reference book. Taastrup, Denmark: Danish Association of the Deafblind/Graphic Studio.

Raanes, E. (2006). Å gripe inntrykk og uttrykk: Interaksjon og meningsdanning i døvblindes samtaler: En studie av et utvalg dialoger på taktilt norsk tegnspråk [Catching impressions and expressions: Interaction and meaning construction in deafblind people’s conversations: A study on a selection of tactile Norwegian Sign Language dialogues] (Doctoral dissertation). Norwegian University of Science and Technology, Trondheim, Norway.


Raanes, E., & Berge, S. S. (2017). Sign language interpreters’ use of haptic signs in interpreted meetings with deafblind persons. Journal of Pragmatics, 107, 91–104.

Skåren, A.-L. (2011). “Det øynene ikke ser og ørene ikke hører”: En kvalitativ intervjustudie om døvblindes opplevelse av å bruke haptiske signaler i samhandling med andre [“What the eyes don’t see and the ears don’t hear”: A qualitative study interviewing deafblind people about their experiences of haptic signals in interaction with others]. Trondheim, Norway: Norwegian University of Science and Technology.

Taub, S. (2001). Language from the body: Iconicity and metaphor in American Sign Language. Cambridge, United Kingdom: Cambridge University Press.

Taylor, J. R. (2002). Cognitive grammar (Oxford Textbooks in Linguistics). Oxford, United Kingdom: Oxford University Press.

Wadensjö, C. (1998). Interpreting as interaction. London, United Kingdom: Longman.

World Association of Sign Language Interpreters. (2013). Deafblind interpreter education guidelines. Retrieved from http://wasli.org/wp-content/uploads/2013/06/279_wasli-db-interpreter-education-guidelines-1.pdf


Chapter 5

Overlapping Circles or Rather an Onion: The Position of Flemish Sign Language Interpreters Vis-à-Vis the Flemish Deaf Community

Eline Devoldere and Myriam Vermeerbergen

This chapter discusses the position of Flemish sign language interpreters (SLIs) in relation to the Flemish Deaf community. The study relates to (1) a number of recent evolutions within the Deaf community and in the perception of community membership, also with regard to hearing members (see, e.g., Kusters, De Meulder, & O’Brien, 2017; Napier & Leeson, 2016), and (2) the professionalization of signed language interpreting and the changed position of SLIs vis-à-vis Deaf communities (Cokely, 2005; Leeson & Vermeerbergen, 2010). Early descriptions of the Deaf community were often rather narrow and exclusive, and membership was defined based on a set of criteria such as using a signed language, having attended a deaf school (including a deaf boarding school), having deaf rather than hearing parents, and the like. More recently, the focus in the literature has shifted to self-identification, and the diversity within the Deaf “community” is often emphasized. Considerations of Deaf community membership include the question of whether or not hearing status is important. What about hearing people or interpreters who were not born and raised within the Deaf community but who came into contact with it at a later stage? The current study focuses on their position vis-à-vis the Deaf community. We interviewed both Flemish SLIs and deaf people.

The research was conducted by Eline Devoldere within the framework of a master’s degree in interpreting and was supervised by Myriam Vermeerbergen (Devoldere, 2016). Both authors are L2 users of Flemish Sign Language. Eline is currently working as a full-time sign language interpreter (SLI); Myriam is an SLI trainer and a researcher working mainly in signed language linguistics.

In addition, we checked whether the professionalization of SLIs in Flanders has influenced the overall situation of Deaf community membership by involving deaf informants of two different age groups. Before presenting our study, we will provide some necessary background on Flemish Sign Language, the Flemish Deaf community, and signed language interpreting in Flanders.

Background Information

Flemish Sign Language and the Flemish Deaf Community

Flemish Sign Language (Vlaamse Gebarentaal [VGT]) is the signed language used in Flanders, the northern part of Belgium, where Dutch is the dominant spoken language. VGT is clearly related to Langue des Signes de Belgique Francophone, the signed language used in Wallonia, the southern part of the country. In 2006, the Flemish Parliament officially recognized VGT as the language of the Flemish Deaf community. The Flemish Deaf community is estimated to include approximately 6,000 deaf signers (Loots et al., 2003). Following a number of important evolutions, the Deaf community has recently undergone—and is still undergoing—important changes. There are currently significant differences between deaf people of various generations, in Flanders and elsewhere. This is most likely due to cochlear implants and mainstream education, the professionalization of signed language interpreting, and the emancipation process within the Deaf community (De Meulder, 2008). However, apart from these evolutions, which concern only or especially deaf people, there are also broader societal ones: a greater openness to diversity, the democratization of higher education, internationalization and globalization, the growing importance of social media, and so on. It is therefore safe to say that the Flemish Deaf community is a very heterogeneous one, probably more so than ever before.

The Professionalization of Flemish Sign Language Interpreters

The year 1981 marks the official start of signed language interpreting training in Flanders. In that year, Fevlado (the Federation of Flemish Deaf Organizations, currently called Doof Vlaanderen) initiated an “interpreter for the deaf” training program in Ghent. In this part-time program, students were taught Nederlands met Gebaren (Signed Dutch), the signed system developed by Fevlado (Van Herreweghe & Vermeerbergen, 2006). The first qualified interpreters graduated in 1984.1 At that time, there was no legislation regarding the payment of professional SLIs, nor was there much demand. Some of the students, mostly Codas,2 already had good signing skills before starting the training and were acting as ad hoc interpreters.

In the early 1990s, there were some important changes. In 1991, the Flemish interpreting agency funded by the government, the Vlaams Communicatie Assistentie Bureau voor Doven (CAB; Flemish Office for Communication and Assistance for the Deaf), was founded, and in 1994, the very first “Decree related to establishing regulations by which the Flemish Fund for Social Integration for People with a Handicap can cover the cost of assistance from deaf interpreters” was approved. From then on, most students who started the program did so with the specific aim of becoming professional interpreters. They hardly had any prior knowledge of deafness, Deaf culture, or the Deaf community, and they lacked signing skills (Van Herreweghe & Vermeerbergen, 2006). Toward the mid-1990s, both Flemish interpreter training programs moved away from Signed Dutch and started to include (initially mainly theoretical) information on the linguistics of Flemish Sign Language (Van Herreweghe & Vermeerbergen, 2006). This gradual change involved attempts to engage deaf signers as co-trainers. Students were encouraged to engage with the Deaf community and to improve their productive and receptive language skills through interaction with deaf signers. Since the 2008–2009 academic year, there has also been a full-time academic program for Flemish SLIs at KU Leuven (Vermeerbergen & Russell, 2017).
It is clear that the professionalization of signed language interpreting in Flanders is largely in line with Cokely's (2005) account of the way signed language interpreting evolved as a profession in the United States. In Flanders as well, the Deaf community was actively involved in setting up interpreter training. In addition, Fevlado played a role in the establishment of the first professional body of SLIs and in the foundation of CAB, the Flemish interpreting agency. However, the Deaf community has increasingly lost control, especially when it comes to selecting who will become their interpreters. In theory, deaf people decide which SLI they want to work with, but the choice is often limited. The majority of Flemish SLIs work as freelancers, combining interpreting with another job, which results in limited availability. Nevertheless, we should point out that signed language interpreting has recently become available in more settings and domains (e.g., in primary education and on television). Consequently, the number of interpreters working full time as SLIs has been growing. It is also important to note that, although there have always been deaf people who worked as "interpreters" (e.g., negotiating meaning between their peers and hearing teachers in deaf schools) (Leeson & Vermeerbergen, 2010), Flemish deaf interpreters (DIs) have only recently become visible in mainstream society. Unfortunately, there are as yet no training opportunities for DIs in Flanders (Vermeerbergen & De Weerdt, 2018).

Research Topic and Questions

It seems fair to assume that the professionalization of signed language interpreting, the resulting shift in the position of SLIs vis-à-vis the Deaf community, and the changed ideas about membership of the Deaf community discussed in the international literature have shaped the views of Flemish Deaf community members and SLIs on their relationship with each other. In the literature, the position of SLIs is often presented as "in between." Cokely (2005), for example, writes:

Interpreters have always occupied a unique social and cultural position relative to the communities within which they work. It is they who are positioned "between worlds" and who make possible communication with "outsiders." (p. 3)

Mindess (2006) also discussed the relationship between the SLI and the deaf client. Many SLIs learn to sign fluently by being active in and maintaining contact with members of the Deaf community, but deaf people sometimes find it rather unpleasant to think that these interpreters are making money off them. Therefore, they often see the interpreter as neither an insider nor an outsider. The SLI is regularly positioned in the middle of a spectrum: on the one hand, deaf people appreciate the fact that SLIs facilitate communication; on the other hand, they sometimes think that SLIs have not fully internalized Deaf culture.

To our knowledge, there is no literature on the ideas and opinions of Flemish deaf people and Flemish SLIs regarding this situation, a gap that the current study attempts to fill. The overall objective of this research has two parts: (1) to investigate ideas and opinions about membership of the Deaf community in general, and (2) to look at the current relationship between Flemish SLIs and the Flemish Deaf community. Flemish SLIs know the language and culture of the Flemish Deaf community. They facilitate communication between hearing and deaf people on a daily basis and thus, in a certain sense, form a bridge between the two communities. Do they also position themselves between these two communities? Do they see themselves as members of both? Or is their relationship with deaf people purely professional? And what do members of the Flemish Deaf community have to say about this? The main research questions addressed in this chapter are as follows:

1. Who is welcomed by and included in the Deaf community?
2. Are Flemish Sign Language (VGT) interpreters welcomed by and included in the Deaf community? If not, are they perceived as being between the hearing and Deaf "worlds"?

The professionalization of signed language interpreting in Flanders occurred relatively recently: whereas deaf people younger than 25 to 30 years old have always known professional SLIs, this is not the case for older signers, who have themselves experienced the change in the relationship between the Deaf community and SLIs. That is why both younger and older deaf informants were included in the current study.

Methodology

Survey and Interviews

The methodology of this study consisted of two parts: a short survey and face-to-face interviews. All informants participated in both parts. The survey was composed of closed-ended questions with three possible answers ("yes," "no," or "sometimes") and addressed the two general research topics (see previous section). The questions included were the following:

Topic 1. Who belongs to the Flemish Deaf community?³

1. Are all deaf and hard of hearing persons in Flanders members of the Deaf community?
2. Are deaf and hard of hearing persons with VGT as their first language members of the Deaf community?
3. Are deaf and hard of hearing persons who only learned VGT later in life members of the Deaf community?
4. Are deaf and hard of hearing persons with cochlear implants members of the Deaf community?
5. Are Codas members of the Deaf community?
6. Are Flemish hearing signed language researchers members of the Deaf community?
7. Are hearing teachers in Flemish deaf education members of the Deaf community?

Topic 2. The position of VGT interpreters vis-à-vis the Deaf community

8. Are deaf VGT interpreters members of the Deaf community?
9. Are hearing VGT interpreters who are also Codas members of the Deaf community?
10. Are hearing VGT interpreters with VGT as a second language members of the Deaf community?

These questions were presented to both the deaf informants and the interpreters. The following question was answered only by the deaf informants:

11. Can VGT interpreters become members of the Deaf community if they are actively involved in it?

The interpreters, in turn, received an additional question:

12. Do you think that, as a VGT interpreter, you are a member of the Deaf community?

The aim of the questionnaire was to collect quantitative data, but also to act as the starting point for the interview.⁴ During this interview, or conversation to be exact, we went through the survey responses, and participants were asked to explain their answers. Informants could also elaborate on and clarify their answers by means of examples and comments. If there was any time left at the end of the interview and the informant was willing to continue, some additional open-ended questions were asked, namely:

• According to you, what is meant by "Deaf community"?
• What are the major factors that determine membership of the Deaf community?

• How can a hearing person/interpreter become a member of the Deaf community?
• How does signed language interpreting today differ from how it was in the past?
• What is the position of SLIs vis-à-vis the Deaf community?

Interviews with hearing participants were conducted in Dutch, whereas VGT was used when interviewing the deaf informants. Interviews were audio- or videotaped. All data were collected between November 2015 and April 2016.

Informants

Twenty-one informants participated in the study. This group was divided into two subgroups:

1. Twelve deaf members of the Flemish Deaf community: Six informants were younger than 30 years, and six were older than 45; the oldest informant was 86 years old. Each group consisted of three men and three women. Eleven of these 12 informants considered VGT to be their first language. We also asked informants whether they regularly use an SLI in their private lives and/or in education- or work-related situations. Two of the older informants responded "no," and one of the informants in the younger group indicated she "rarely" uses one; the other participants gave a positive response. Tables 5.1 and 5.2 present an overview of the main characteristics of the deaf informants.
2. Nine Flemish SLIs: One SLI was deaf (a DI), and eight were hearing, of whom one was a Coda. All informants in this group were female. Apart from the Coda and the DI, they all learned VGT later in life. One hearing interpreter graduated from the full-time academic interpreting program at KU Leuven, whereas the other seven were trained in the part-time programs in Ghent and Mechelen. The DI was enrolled in the international European Master in Sign Language Interpreting program. The informants' working experience ranged from 2 years to almost 30 years. Three SLIs were full-time interpreters; for the other six, interpreting was a secondary occupation. Table 5.3 provides an overview of this group.

Table 5.1. Deaf Informants Younger Than 30 Years.

Informant | Gender | Deaf Relatives | Deaf School | VGT as . . .* | Use of SLIs | Extra Information
D1 | Man | Yes | Yes | First language | Yes | —
D2 | Man | Yes | Yes | Mother tongue | Yes | —
D3 | Woman | No | Yes | First language | Yes | —
D4 | Man | Yes | Yes | Mother tongue | Yes | —
D5 | Woman | No | Yes | First language | Yes + notetaker | CI
D6 | Woman | Yes | No | Mother tongue | Rarely + notetaker | CI

*From De Weerdt and De Meulder (2007):
• Mother tongue is used when a deaf signer learned sign language from his or her deaf parents.
• First language is used when a deaf person did not acquire sign language from his or her parents but (self-)identifies with a sign language as his or her first language.

Table 5.2. Deaf Informants Older Than 45 Years.

Informant | Gender | Age Group (years) | Deaf Relatives | Deaf School | VGT as . . . | Use of SLIs
D7 | Woman | > 60 | Yes | Yes | First language | No
D8 | Woman | 45–50 | Yes | Yes | First language | Yes
D9 | Man | 45–50 | Yes | Yes | First language | Yes + remote SLI
D10 | Man | 45–50 | Yes | Yes | First language | Yes
D11 | Man | > 60 | Yes | Yes | First language | Yes
D12 | Woman | > 60 | No | Yes | Second language | No

Table 5.3. SLI Informants.

Informant | Hearing/Deaf/Coda | SLI as Main or Secondary Occupation | Deaf Relatives | Experience (years)
SLI1 | Hearing | Main occupation | No | 10–15
SLI2 | Hearing | Secondary occupation | Yes | 0–5
SLI3 | Hearing | Secondary occupation | No | 25–30
SLI4 | Hearing | Main occupation | No | 5–10
SLI5 | Hearing | Main occupation | No | 0–5
SLI6 | Hearing | Secondary occupation | No | 0–5
SLI7 | Hearing | Secondary occupation | No | 5–10
SLI8 | Deaf | Ad hoc; secondary occupation | No | —*
SLI9 | Hearing; Coda | Secondary occupation | Yes | 10–15

*No experience as a certified SLI yet, but a great deal of experience as a deaf SLI within the Deaf community, at conferences, and so on.

Results

The study focused on two topics: (1) membership of the Deaf community and (2) the position of SLIs in relation to the Deaf community. In this section, we first present an overview of the responses to the questionnaire, followed by a discussion of the ideas and opinions expressed during the interviews.

Membership of the Deaf Community

The general question "Do all deaf and hard of hearing people belong to the Deaf community?" was intended as an introductory question to which we expected nuanced responses (Table 5.4). However, 10 of the 12 deaf informants responded "yes," and two said "no." Deaf informants clarified their response in the interviews by saying that all deaf and hard of hearing persons are welcome in the Deaf community. Informant D4 explained it as follows:

All deaf people can automatically communicate with each other, while communication with hearing people is much harder. They do not feel 100% at home in the hearing community. (D4)

The two deaf informants who said "no" explained that not all deaf and hard of hearing people are involved in the Deaf community, for instance, because they do not know Flemish Sign Language. The vast majority of the SLIs responded "no," saying that not all deaf people grow up with Flemish Sign Language and Deaf culture. Another argument for this response was that not all deaf and hard of hearing people choose to self-identify as members of the Deaf community:

The Deaf community is a part of the hard of hearing or deaf person's identity. And that identity is a personal choice. (SLI6)

The only SLI who responded "yes" to this question was the DI.

Less surprisingly, the deaf informants unanimously agreed that all deaf and hard of hearing L1 signers (i.e., deaf and hard of hearing people with VGT as their first language) belong to the Deaf community. One of the younger informants stated:

Those who can sign automatically belong to the Deaf community, because we all use a common language. (D2)

Table 5.4. Overview of Responses to Part 1 of Questionnaire (Yes/No/Sometimes).

Who Is a Member of the Deaf Community? | Deaf Respondents < 30 Years Old | Deaf Respondents > 45 Years Old | VGT Interpreters
All deaf and hard of hearing people? | 5/1/0 | 5/1/0 | 1/7/1
Deaf/hard of hearing and VGT as first language? | 6/0/0 | 6/0/0 | 4/0/5
Deaf/hard of hearing and VGT acquired later in life? | 5/0/1 | 3/0/3 | 3/0/6
Deaf/hard of hearing with CI? | 3/0/3 | 3/0/3 | 2/0/7
Codas? | 3/0/3 | 1/0/5 | 3/0/6
Hearing sign language researchers? | 1/1/4 | 3/1/2 | 0/1/8
Hearing teachers of the deaf? | 1/3/2 | 1/2/3 | 0/2/7

Five of the nine interpreters answered this question with "sometimes." Again, the major reason for this response can be found in the importance of self-identification; these SLI informants claimed that some deaf people may decide not, or no longer, to be part of the Deaf community:

Flemish Sign Language is a very important factor when it comes to being a member of the Deaf community. However, there are deaf and hard of hearing people who sign but who do not identify with the Deaf community. This is each person's own choice. (SLI2)

When asked whether deaf or hard of hearing people who learned VGT later in life and deaf or hard of hearing people with cochlear implants are part of the Deaf community, most of the SLI informants also opted for "sometimes," whereas the deaf informants were more positive. The deaf participants who responded "sometimes" for the first group said that it depends on the individual's personality and/or attitude, for example:

If that specific person picks up VGT easily and has an open and active attitude, he/she will quickly be considered a member of the Deaf community. (D3)

However, the importance of the Deaf community's attitude was also discussed:

Deaf people also need to learn to welcome these people. Some do not and I think that is a pity. (D12)

The DI felt that all deaf people belong to the Deaf community, because they all encounter the same barriers in their daily lives. However, she claimed that active involvement is best categorized on a continuum:

There are deaf and hard of hearing persons who are actively involved and who use signed language as their first language. They will tend to be active members of the community. Others, e.g., deaf persons who learn signed language later in life, may be situated more on the other side of the scale or somewhere in the middle. But they are also members of the Deaf community. (SLI8)

With regard to membership of deaf people with a cochlear implant, informants commented that a deaf person with a cochlear implant remains a deaf person. Furthermore, they talked about the importance of signed language knowledge and proficiency, as well as attitude; it depends

on whether or not the deaf person with a cochlear implant considers signed language and Deaf community membership to be an added value in his or her life. A third factor that was discussed was the perspective and attitude of people within the Deaf community:

I do not think that deaf people with a cochlear implant should be seen differently. A lot of deaf people have a problem with individuals with an implant, but more and more deaf and hard of hearing persons receive them. If they do not accept these people, the Deaf community will decrease and maybe even disappear. (D5)

Indeed, most informants also commented on the fact that, currently in Flanders, most deaf babies and/or very young children receive a cochlear implant and that this is the parents' decision, not the deaf child's.

The opinions about whether Codas can be members of the Deaf community were very diverse in both groups. Those who said "yes" referred to the fact that Codas often have VGT as their mother tongue. Other informants discussed the role of the deaf parents and whether or not they bring their child into contact with the Deaf community.

I never took my own children to see the Deaf community. That is partly due to the upbringing I received myself. I always had to speak and was dissuaded from using signed language. I realize this now and I think it is a pity. But times have changed significantly. The world is more open to deaf people and signed language now, compared to before. (D12)

The Coda's own attitude and actions were also considered to be important and were discussed by both the deaf and the SLI informants:

Only Codas who are really connected to deaf people are members of the Deaf community. However, there are Codas who are ashamed of their parents being deaf. They can only belong to the Deaf community when they accept and are fine with the fact that their parents are deaf. (D4)

Most Codas will have belonged to the Deaf community as a child, I suspect. But it depends on what path they choose later on. Do they decide to live in two worlds or do they choose the hearing world? (SLI4)

The Coda SLI informant had the following to say:

My hearing brother and I grew up in a deaf family. I used to go to deaf activities more often when I was younger, though. My brother is less fluent in VGT than I am, but he will still be more easily allowed into the Deaf community than interpreting students, I think. That is because he really grew up with Deaf culture. (SLI9)

The informants also shared their opinions on hearing people who are sometimes said to belong to the "Third Culture" (Bienvenu, 1987), which includes hearing signed language researchers and hearing teachers of the deaf. Here, there was a difference between the deaf informants and the SLI informants, especially concerning hearing signed language researchers. Some of the older deaf informants were very positive about hearing researchers and saw them as part of the Deaf community, emphasizing the importance of their work:

Yes, of course! They conduct research on signed language or Deaf culture. The work they do makes us stronger. The more exposure signed language receives, the better. It is true that they do not often go to a lot of activities, but they are still members of the Deaf community. (D11)

Five of the 12 deaf informants said "no" when asked whether hearing teachers of the deaf are part of the Deaf community. We should point out here that relatively few hearing teachers in deaf education are proficient in Flemish Sign Language. The younger deaf informants in particular commented on this and said that hearing teachers often do not appreciate the value of signed language and Deaf culture. One of the older deaf informants indicated:

When I was in the deaf school, the sisters did not know signed language. We always had to speak. I do not know what it is like at the deaf schools nowadays, but for me personally, hearing people cannot belong to the Deaf community. (D7)

Most of the interpreters said that hearing teachers are "sometimes" members of the Deaf community, depending on whether or not they have contact with deaf people outside their job or whether they have a deaf spouse or children. However, they also stated that they did not understand why teachers generally do not learn to sign:

Personally, I do not know a single teacher who is integrated in the Deaf community. Maybe it is different now, but to me, membership mostly means that you actively participate in the activities of the community and that you know Flemish Sign Language. (SLI3)

To summarize, SLIs mostly responded "sometimes" for all groups of potential members of the Deaf community. Deaf informants, on the other hand, mostly said "yes" when asked about membership of deaf and hard of hearing people and "sometimes" when talking about hearing researchers and teachers. When informants elaborated on their responses, it became clear that knowledge and use of Flemish Sign Language, self-identification, and active engagement within the Deaf community are important factors for membership. Informants also talked about "being accepted," but they were not always clear about what this means exactly. Especially in the case of deaf (potential) members of the Deaf community, factors such as having deaf parents and having received deaf education have clearly become less important than they were in the past.

Toward the end of the interviews, the definition of Deaf community was discussed. When asked what is meant by Deaf community, most informants—both deaf and SLIs—defined the Deaf community as a place where only deaf people meet. This seems to contradict the results from the questionnaire and earlier comments on the conditions for hearing people to become members of the Deaf community. All informants emphasized the importance of the Deaf community for deaf people as a space to meet with peers, communicate in a signed language, and develop their identity. Informants also talked about the evolution of the notion Deaf community, and there were some suggestions for alternatives. It may no longer be accurate to explain the Deaf community in the same way as it was understood in the past; as the DI stated:

For me, the notion of "Deaf community" may continue to exist, but we need to discuss it. Everyone needs to know what it used to mean, but that it has changed a lot over time. Additionally, we need to think about what the future of the Deaf community may hold. (SLI8)

The Position of SLIs in Relation to the Deaf Community

In the second part of this study, we asked all informants whether they think SLIs belong to the Deaf community (Table 5.5). We made a distinction between deaf, Coda, and hearing SLIs.

Eleven of the 12 deaf informants responded that deaf SLIs are evidently part of the Deaf community because they are deaf, are fluent

Table 5.5. Overview of Responses to Part 2 of Questionnaire (Yes/No/Sometimes).

Do the following SLIs belong to the Deaf community? | Deaf Informants < 30 Years Old | Deaf Informants > 45 Years Old | VGT Interpreters
Deaf interpreters | 6/0/0 | 5/0/1 | 6/0/3
Coda interpreters | 3/0/3 | 1/1/4 | 6/0/3
Hearing interpreters | 2/1/3 | 4/1/1 | 0/1/8
Can SLIs become members if they are actively involved? | 2/0/4 | 3/1/2 | —
Are you yourself (as an SLI) a member of the Deaf community? | — | — | 4/2/3

signers, know about Deaf culture, and already have a connection with the community:

Of course! They are also deaf so this implies that they already belong to the Deaf community. They decide to become interpreters, but that does not make a difference. They are still part of it. (D2)

Some older deaf informants, though, were not familiar with the term deaf SLI. They assumed that by dove tolk (deaf interpreter) we meant doventolk (interpreter for the deaf), which is the old term used in Flanders to refer to SLIs.

Some SLI informants stressed that it remains the deaf SLIs' choice whether or not they want to be members of the Deaf community. Moreover, they indicated that it is very important for the DI to remain active within the community.

Absolutely! A hearing SLI can still decide whether or not he/she wants to be involved in the Deaf community. But if you are deaf and decide to become an interpreter, I think you do it out of a sense of duty. However, these deaf interpreters also have another role: they are active in Deaf organizations, . . . Maybe they are also a kind of deaf "leader," someone that others look up to. (SLI7)

The DI said it is not always easy to be a DI in the Deaf community, especially when it comes to maintaining a professional boundary:

I tend to distinguish three groups. There is a group of deaf people I have never seen before and that I do not know. If I interpret for them, I have no problems maintaining a professional boundary. The second group consists of people I know from the Deaf organization. If I interpret for them, I can uphold that boundary too. The third group is the most difficult one. These are good friends that I am in regular contact with. If I interpret for them, we have to trust each other. If we meet up afterwards, we have to act as if nothing happened. . . . I really need to be aware of where my professional boundary is and safeguard it. (SLI8)

Most of the deaf informants stated that Coda SLIs are a part of the community, because their deaf parents already introduced them to it at a young age. Moreover, they are fluent signers. Three of the six younger deaf informants said that it differs from person to person; some Codas do not want to have contact with deaf people outside the scope of their job.

In general, though, the deaf informants in this study believed that Codas are able to build a connection with the Deaf community more easily because of their deaf parents, even if they are not very active in the community themselves.

I feel Coda interpreters are always very good interpreters, because they grew up with signed language. I can see a difference between them and other trained hearing interpreters. Nonetheless, there are Coda interpreters who dissociate themselves from the Deaf community and therefore do not belong to it. (D8)

The SLI informants had a similar view. They indicated that Codas are usually "connected" to the community—and have been since they were young—and often have VGT as their first language. Furthermore, they stated that when a Coda decides to become an interpreter, this is out of a sense of duty toward the community. The interpreters, however, also said it remains the choice of the SLI whether he or she wants to be a member of the community.

Coda interpreters still have to decide for themselves: is the Deaf community really something for me or not? I feel that it is not always easy for a Coda interpreter to maintain a professional boundary, especially if he/she interprets for deaf people he/she knows really well. (SLI6)

In the words of the Coda SLI who participated in our study:

Interpreters who show commitment towards the Deaf community will most certainly be accepted. But there is a deeper level within the community, for example things that deaf people have experienced that we have not. This still makes us outsiders. I know the Deaf community from within, but I do not know what it is like to be deaf. I have never experienced it firsthand, so I cannot possibly belong to the core of that community. (SLI9)

Views on the position of hearing SLIs in relation to the Deaf community differed within our group of deaf informants. Some informants, more in the older group than in the younger group, said that hearing SLIs do belong to the Deaf community because they know signed language and Deaf culture:

Hearing SLIs are part of the Deaf community because they know [about] Deaf culture, which is different from the hearing culture.

If they know the culture and if they can sign well, they belong to the Deaf community. (D10)

They also said that interpreters who frequently attend activities in the Deaf organizations and who often socialize with deaf people will more easily be accepted and seen as members of the community. However, personal choice was again mentioned as being very important. Two deaf informants, one in each age group, claimed that hearing SLIs cannot and do not belong to the Deaf community: they are only there to interpret between a signed and a spoken language, and they fully belong to the hearing community.

In my opinion, you can see the difference between interpreters and deaf people. The signed language you learn when training to become an interpreter is totally different from the language used in everyday life. That is why I think they do not belong to the Deaf community. I know I am being harsh, but that is how the Deaf world works. (D3)

When asked whether hearing SLIs can become members of the Deaf community if they are actively involved, five deaf informants said "yes," and six indicated "sometimes." One of the deaf informants stated that "acquiring membership status within the Deaf community" is often a process.

Eight of the nine SLI informants said that hearing SLIs are "sometimes" members of the Deaf community. They stated that it depends on different factors, such as: (1) the personal choice of the SLI—some interpreters make a conscious decision to maintain a clear boundary between their personal and professional lives; (2) acceptance by the community, which may be obtained by being very involved in it and having a lot of contact with deaf people; and (3) whether the SLI has a deaf partner or a deaf child, in which case the hearing SLI will probably become a member more easily.

Answers among the SLIs differed when asked whether they see themselves as members of the Deaf community. Four of the nine SLI informants, including the Coda and the DI, stated that they see themselves as members of the community. The DI indicated she is part of the Deaf community because she is deaf and encounters the same problems and barriers as other deaf people. The Coda interpreter said that being born and raised within the Deaf community has ensured that she is a part of it. The two hearing SLIs who said "yes" feel that they belong to the Deaf community

because they have a lot of contact with deaf people outside the scope of their professional lives. They explained that deaf people often turn to them for help or advice and/or that they are asked to give their point of view on deaf-related subjects. To them, this shows that they are trusted and that their opinion is highly valued.

Two interpreters stated that they do not belong to the Deaf community. They have contact with deaf people, even outside their job as interpreters, but they do not consider themselves to be true members of the community.

As an interpreter you can have strong ties with deaf people, you can be aware of a lot of things and know many members of the Deaf community. But being a part of it is something else. You first have to feel like a member of a certain community before you can be considered one by others. (SLI5)

The three other interpreters felt that they sometimes do and sometimes do not belong to the Deaf community and that this is, in part, their own conscious decision. This is further discussed in the concluding section.

Illustrated Viewpoints

At the end of the interview, informants were asked for a final comment on the position of SLIs in relation to the Deaf community. In many cases, informants clarified their view using illustrations, some of which are presented here. Most informants, both SLIs and deaf people, said that there is no fixed position for the SLI in relation to the Deaf community; they see it as more of a continuum, with the Deaf community on one side and the hearing community on the other (Figure 5.1). An SLI can be part of the Deaf community or part of the hearing community or be somewhere in the middle. The exact position is the choice of the interpreter; every SLI takes his or her own position on this continuum, which may also vary over time. One young deaf informant explained his view by means of overlapping circles (Figure 5.2). The bold circle on the left and the thinner circle on the right represent the Deaf and the hearing communities, respectively. In the middle, there is a dotted circle for the SLIs, which overlaps with the Overlapping Circles or Rather an Onion  :  99


Figure 5.1.  A continuum.


Figure 5.2.  Overlapping circles.


Figure 5.3.  One large circle containing two smaller circles.

two other circles. This illustrates that an SLI may sometimes be part of the Deaf community and sometimes not. This depends, for instance, on whether the SLI has a deaf partner and/or has regular contact with deaf people. The position of the interpreter may also vary over time.

Figure 5.3 illustrates the view of one SLI who felt that SLIs will never truly belong to the Deaf community because, as hearing people, they do not share the experience of growing up deaf. However, SLIs are closely connected to the Deaf community and also belong to a larger community, namely the sign language community.

Another SLI compared the Deaf community to an onion with different layers (Figure 5.4). The Deaf community is the heart of the onion, of


Figure 5.4.  Position of SLIs in relation to the Deaf community.

which deaf SLIs are part. Coda SLIs (thin solid line circle) are outside of this community but are closely connected to it. A large group of SLIs (dashed circle) are hearing interpreters, with a signed language as a second, third, or fourth language, and they are situated in a higher layer. These SLIs are also often in contact with people from the Deaf community or have worked for the Deaf federation or a Deaf organization. The outer layer of the onion represents what this informant called the huistuin-en-keukentolken (house-garden-and-kitchen interpreters). With this term, the informant is referring to SLIs who only interpret occasionally and who do not have a lot of contact with deaf people outside their job. Nevertheless, this illustration implies that all hearing interpreters are merely “visitors in the Deaf community.”

The Different Groups Compared

Interpreters versus Members of the Deaf Community

Looking at the overview of responses (Tables 5.4 and 5.5), it appears that there is a clear difference between the SLIs and the members of the Deaf community in that the first group mainly responded “sometimes,” especially to questions related to membership of the Deaf community. (For question 1,5 however, seven of the nine SLIs said “no,” whereas 10 of the 12 deaf informants answered “yes.”) Nevertheless, it was clear from the interviews that SLIs and members of the Deaf community often have similar views. It seems that SLIs were simply a bit more cautious and

opted for the safer option “sometimes,” rather than saying “yes.” Overall, the instances when the answer “no” was given are rather rare for both the SLI and the deaf respondents. We should point out here that the responses of the only DI in this study align with those of the (other) deaf informants.

Older and Younger Members of the Deaf Community

As explained earlier, we had anticipated a possible divergence between the two groups of deaf informants (< 30 and > 45 years old) due to the professionalization of signed language interpreting in Flanders. However, such a difference was not confirmed by our study. Interestingly enough, the answers of the two older deaf informants did differ from the opinions of the rest of the group. We also found that these informants were not familiar with notions such as Coda, Deaf community, or deaf interpreter and that they did not use, or very rarely used, professional SLIs. Therefore, it seems that there may indeed be a discrepancy within the Deaf community, albeit not (only) as a result of the professionalization of signed language interpreting, but rather as a result of the emancipation process of the Flemish Deaf community (De Meulder, 2008, pp. 55–56). Furthermore, older deaf and hard of hearing people are often computer illiterate, which makes it hard for them to arrange professional SLIs. It should be noted, however, that it is also possible that these responses simply reflect the specific situation of these particular informants.

The Position of Flemish Sign Language Interpreters Vis-à-Vis the Flemish Deaf Community: A Constant State of Change

Flanders is a relatively small region with a relatively small Deaf community and a modest number of active interpreters. Flemish Sign Language is not yet very visible in mainstream society, which does not make it easy for those who want to learn the language, for professional purposes for instance. This is one of the reasons why the three programs for SLIs in Flanders strongly encourage their students to establish contacts with and within the Flemish Deaf community. Once they start working professionally, interpreters sometimes struggle with the boundary between their

professional and private lives and with the different types of relationships they have with the same deaf person.6 This could explain why some interpreters reduce their participation in deaf social life; some even prefer a strict separation. This decision, however, is often met with disapproval. One of our informants stated:

I know interpreters who simply do their interpreting jobs and who do not keep up with what happens in the Deaf community. You cannot regard this group as members of the community. How can an interpreter stay connected with the language and culture without having any actual contact with deaf people? Interpreting alone is not enough. (SLI2)

The deaf SLI explained:

What does “private” mean? That they want nothing to do with deaf people and signed language anymore? That they want to be completely outside of the community? I do not really understand that. Interpreters can go for a drink with deaf people after their assignment; that is not an issue for me. Of course, this would not be in the capacity of interpreter, but just as a person who likes to stay in touch with deaf people.

Yet, there are also deaf people who are reluctant to build a real friendship with an interpreter with whom they often and gladly work (Soetkin Bral, personal communication, November 2017). They fear the interpreter’s neutrality may be compromised if he or she becomes a friend. An interpreter who does not belong to our respondent group, but with whom we discussed the results, made an important comment in this regard:

Interpreters know a lot. For some people, I know how much money they have in their bank account, what silly things their children do at school, and whether or not their mother-in-law is a nice lady. This makes interpreters somewhat dangerous. We know so much confidential information, and despite the importance of the duty of professional confidentiality, everyone knows that it is not infallible.
You have to have a massive amount of trust in interpreters to be able to use them with confidence. Especially with clients I do not know very well, I sometimes feel distrust.

She also states that, since becoming a professional interpreter, it is no longer that easy for her to make deaf friends. Here, we may recall the

remarks of the DI in our respondent group who also raised the issue of trust:

I tend to distinguish three groups. There is a group of deaf people I have never seen before and that I do not know. If I interpret for them, I have no problems maintaining a professional boundary. The second group consists of people I know from the Deaf organization. If I interpret for them, I can uphold that boundary too. The third group is the most difficult one. These are good friends that I am in regular contact with. If I interpret for them, we have to trust each other. If we meet up afterwards, we have to act as if nothing happened. . . . I really need to be aware of where my professional boundary is and safeguard it. (SLI8)

In general, however, Flemish professional interpreters as well as members of the Deaf community find it important that interpreters continue to be in contact with deaf people and that they actively engage in deaf social life. This, besides knowledge of Flemish Sign Language, is of the utmost importance for being accepted in the Flemish Deaf community. Whether this also means that hearing interpreters who have learned VGT later in life are really seen as “belonging” to the Deaf community is a topic on which opinions differ. What is striking is that, although the attitude among members of the actual community itself is rather positive, the hearing interpreters have greater doubts about whether they “belong” to the Deaf community.7 This, of course, depends on how one defines “Deaf community.” Often, there is still a reference to deafness (i.e., the biological condition), although a few younger informants propose to talk about the “sign language community”:

In the past, it was clear: the deaf organizations were the places where all deaf and hard of hearing people got together. But now, everything has evolved. What does “Deaf community” mean nowadays? For me, it is rather a sign language community.
(D2)

A second important outcome of this study is that the Flemish SLI does not hold one single position with regard to the Flemish Deaf community. This position is, in fact, different for every interpreter, which is not very surprising because it also depends on the personality of the interpreter. More interesting is that an interpreter’s position vis-à-vis the Deaf community is not static, but changeable. Asked whether she considers

herself a member of the Deaf community, one of the respondents gave a rather hesitant answer. Clarifying the reason for her hesitation, she said:

I indicated “sometimes” since I have almost no contact with other deaf people outside my work lately. I think it is an honor to belong to the Deaf community, but since I spend so little time in it nowadays, I chose “sometimes,” although I actually mean “yes.” Do you see what I mean? (SLI3)

Thus, the same interpreter can position him- or herself closer to—or even within—the Deaf community at a certain point in time and a bit further away from it at other times. This is related to the attitude of the individual interpreter during certain periods in his or her life. And then, of course, there is also the matter of being accepted by the community. We would like to conclude with the feedback we received from an interpreter who did not participate in this research but who started thinking about her own position after having read our study:

Whether I feel like a member of the Deaf community or not is often based on why I am there and who I am there with. People’s attitudes towards me can even shift in a second. When I introduce myself to people I have not met and say that I am not deaf but that I am an interpreter, they are friendly but will not always be so eager to engage in further conversation. However, this changes when I add that my husband is deaf; there is a more open attitude towards me then. When he is actually present, it is even more so. Mentioning that my children are Codas and that they can sign also often has the same effect. It seems that my marriage and motherhood almost “legitimize” my membership and presence there. Also, I have several deaf female friends with whom I have a really good relationship, but once we are in a group with other deaf people, they tend to socialize a lot more with their deaf peers. The bond and history they have together is one I will simply never share.
In conclusion, it seems fair to say that the issue of the position of Flemish SLIs vis-à-vis the Flemish Deaf community is a complex one and one that is in a constant state of change.

Notes

1. Actually, the first training program was established as early as 1979 in Mechelen, but it took a while before this program was officially recognized.

2. As is probably the case in many countries, Codas in Flanders are a very diverse group when it comes to signed language proficiency and contact with Deaf community members. In the past, most “interpreting” was done by Codas, but clearly not all Codas were capable of interpreting.

3. The wording used in the Dutch questions is “behoren tot,” which may be translated in English as “belong to” or “be a member of.” In Flemish Sign Language, this can be translated as member of or deaf^community who in. Some deaf informants also used the sign welcome (e.g., welcome in deaf^community).

4. Most informants first filled out the questionnaire and were interviewed afterwards. However, some of the older deaf informants had difficulties completing the Dutch questionnaire. In these cases, the questionnaire and the interview were combined.

5. Are all deaf and hard of hearing persons in Flanders members of the Deaf community?

6. In this respect, we also refer to the study by Lena Vaes (2016) about the “additional relationship” that can exist between an interpreter and his or her client.

7. Of course, we have to remember that this is an explorative study with a limited number of informants.

References

Bienvenu, MJ. (1987). The third culture: Working together. Journal of Interpreting, 4, 1–12.

Cokely, D. (2005). Shifting positionality: A critical examination of the turning point in the relationship of interpreters and the Deaf community. In M. Marschark, R. Peterson, & E. A. Winston (Eds.), Sign language interpreting and interpreter education (pp. 3–28). Oxford, United Kingdom: Oxford University Press.

De Meulder, M. (2008). De Vlaamse dovengemeenschap [The Flemish Deaf community]. In M. Vermeerbergen & M. Van Herreweghe (Eds.), Wat (geweest/gewenst) is. Organisaties van en voor doven in Vlaanderen bevraagd over 10 thema’s (pp. 41–71). Ghent, Belgium: Academia Press/Fevlado-Diversus.

Devoldere, E. (2016). De plaats van tolken Vlaamse Gebarentaal t.o.v. de Vlaamse Dovengemeenschap [The position of Flemish Sign Language interpreters vis-à-vis the Flemish Deaf community] (Master’s dissertation). KU Leuven, Campus Antwerpen, Belgium.

De Weerdt, D., & De Meulder, M. (2007). Gebarentaligen—de discussie geopend [Signers: The discussion opened]. Dovennieuws, 82(4), 13–15.

Kusters, A., De Meulder, M., & O’Brien, D. (2017). Innovations in Deaf studies: Critically mapping the field. In A. Kusters, M. De Meulder, & D. O’Brien

(Eds.), Innovations in deaf studies: The role of deaf scholars (pp. 1–53). Oxford, United Kingdom: Oxford University Press.

Leeson, L., & Vermeerbergen, M. (2010). Sign language interpreting and translating. In Y. Gambier & L. van Doorslaer (Eds.), Handbook of translation studies: Volume 1 (pp. 324–329). Amsterdam, the Netherlands: John Benjamins Publishing Company.

Loots, G., Devisé, I., Lichtert, G., Hoebrechts, N., Van De Ginste, C., & De Bruyne, I. (2003). De gemeenschap van doven en slechthorenden in Vlaanderen. Communicatie, taal en verwachtingen omtrent maatschappelijke toegankelijkheid [The community of deaf and hard of hearing people in Flanders: Communication, language, and expectations concerning societal accessibility]. Ghent, Belgium: Cultuur voor Doven.

Mindess, A. (2006). Reading between the signs: Intercultural communication for sign language interpreters (2nd ed.). Yarmouth, ME: Intercultural Press.

Napier, J., & Leeson, L. (2016). Sign language in action. London, United Kingdom: Palgrave Macmillan.

Vaes, L. (2016). Tolken met gevoel. Een bijkomende relatie tussen een tolk en zijn/haar Dove client [Interpreting with feeling: An additional relationship between an interpreter and his/her Deaf client] (Master’s dissertation). KU Leuven, Campus Antwerpen, Belgium.

Van Herreweghe, M., & Vermeerbergen, M. (2006). Deaf signers in Flanders and 25 years of community interpreting. In E. Hertog & B. van der Veer (Eds.), Taking stock: Research and methodology in community interpreting (pp. 293–308). Antwerp, Belgium: Linguistica Antverpiensia, New Series.

Vermeerbergen, M., & De Weerdt, K. (2018, May). Deaf interpreters in Flanders, Belgium: A long past, but a short history. Paper presented at the International Conference on Non-Professional Interpreting and Translation, Stellenbosch, South Africa.

Vermeerbergen, M., & Russell, D. (2017). Interview with Myriam Vermeerbergen: Flemish Sign Language. International Journal of Interpreter Education, 9(1), 61–65.


Chapter 6

Striking a Cognitive Balance: Processing Time in Auslan-to-English Simultaneous Interpreting

Jihong Wang

Simultaneous interpreting is a highly complex cognitive activity (Timarová et al., 2014; Wang, 2016) in which the interpreter is engaged in concurrent tasks such as comprehending the source language input, transferring messages across languages, storing and processing information, producing target language information, and monitoring the target language output. The interpreter’s processing time (also called time lag, lag time, décalage, or ear-voice span) in simultaneous interpreting refers to the temporal delay between the source language input and the corresponding target language output (Cokely, 1986; Timarová, Dragsted, & Hansen, 2011). Interpreters’ onset time lag, a widely used metric in simultaneous interpreting research, refers to the interval between the beginning of a source language sentence and the start of its target language interpretation. Time lag reflects the temporal characteristics of processing (Pöchhacker, 2004), the segmentation of the source language input into chunks (Barik, 1975/2002; Pöchhacker, 2004; Podhajská, 2008), cognitive load (Treisman, 1965), cognitive processing (Timarová, 2015), processing speed (Timarová et al., 2011), and the minimum time that the interpreter needs for processing the source language information so as to produce a meaningful interpretation (Lee, 2006). An interpreter’s ability to work with time lag partly determines the quality of simultaneous interpreting performance (Podhajská, 2008). Despite a long tradition of measuring time lag in spoken language simultaneous interpreting, there has been little empirical research (e.g., Cokely, 1986) on time lag in signed language simultaneous interpreting. To narrow the gap, this study aimed to explore professional Auslan (Australian Sign Language)–English interpreters’ processing time in Auslan-to-English simultaneous interpreting, focusing on their time lag in

interpreting two kinds of Auslan sentences: those with numbers near or at the end and those ending with negation.

Overview

This section provides an overview of the relevant literature on time lag in spoken and signed language simultaneous interpreting, concentrating on investigations into the length of time lag, factors influencing time lag, and an optimal time lag.

Many studies have measured interpreters’ time lag to establish its average length and temporal range. There is a broad consensus that interpreters’ average time lag is approximately 1 to 5 seconds (see Barik, 1973; Cokely, 1986; Defrancq, 2015; Díaz-Galaz, Padilla, & Bajo, 2015; Gerver, 1969/2002; Kim, 2005; Lee, 2002, 2006; Podhajská, 2008; Timarová et al., 2011; Treisman, 1965). This range seems to apply to simultaneous interpreting involving different language pairs, interpreting modalities, interpreters with varying levels of professional experience, various text types, and varied conditions under which simultaneous interpreting was performed (Díaz-Galaz et al., 2015; Lee, 2006; Pöchhacker, 2004; Timarová et al., 2011). There has been solid evidence that time lag varies considerably between interpreters (Defrancq, 2015; Lamberger-Felber, 2001; Timarová et al., 2011, 2014). Moreover, each interpreter’s time lag fluctuates throughout the same source language speech (Timarová et al., 2011). These findings indicate that interpreters’ time lag is determined by a mix of external and internal factors.

Some external factors (e.g., task type, source text presentation rate, text type, language combination) have been found to influence interpreters’ time lag in simultaneous interpreting, as discussed later in this chapter. Time lag in simultaneous interpreting is significantly longer than that in shadowing (Gerver, 1969/2002; Timarová et al., 2011; Treisman, 1965). As source text presentation rate increases, the interpreter lags further and further behind the speaker (Gerver, 1969/2002).
Speaker variables, such as the length of source language sentences, words per minute, and pauses, affected professional interpreters’ time lag in English-to-Korean simultaneous interpreting (Lee, 2002). In addition, interpreting students’ time lag in simultaneous interpreting of a speech written beforehand was significantly shorter than that for a spontaneous speech (Podhajská, 2008). Although difficult source language segments

significantly prolonged interpreting students’ time lag, they did not significantly affect professional interpreters’ time lag (Díaz-Galaz et al., 2015). Furthermore, time lag is shorter when the source language and the target language are syntactically similar than when they are markedly different (Goldman-Eisler, 1972/2002; Kim, 2005). Moreover, numbers, dates, and acronyms require a significantly shorter time lag than some other source language elements such as verbs and redundant information (Díaz-Galaz et al., 2015; Timarová et al., 2011).

Internal factors (e.g., interpreters’ prior preparation, interpreting experience, interpreting strategies) also influence interpreters’ time lag in simultaneous interpreting. Advance preparation for simultaneous interpreting tasks significantly reduced professional interpreters’ and interpreting students’ time lag (Díaz-Galaz et al., 2015; Lee, 2006). Díaz-Galaz et al. (2015) found no significant differences in time lag between professional interpreters and interpreting students. However, Timarová et al. (2014) found a significant, negative, and moderate correlation between professional interpreters’ interpreting experience (in days) and median time lag, indicating that more experienced interpreters tend to use a shorter time lag possibly due to their faster processing of source language information. Moreover, professional interpreters who are better able to shift attention between different tasks tend to keep a shorter time lag in simultaneous interpreting (Timarová et al., 2014). Interpreters use specific time lags in combination with certain interpreting strategies to perform well on simultaneous interpreting of particular language pairs (Kim, 2005; Lee, 2002). Adjusting one’s time lag is a paramount interpreting strategy in simultaneous interpreting (Best, Napier, Carmichael, & Pouliot, 2016; Gile, 2009; McKee & Napier, 2002; Podhajská, 2008).
An interpreter needs to use an appropriate time lag at a specific point in simultaneous interpreting (Barik, 1973; Cokely, 1986; Lee, 2002).

What is an ideal time lag in a specific situation? How does one strike the right balance between interpreting as soon as possible to avoid cognitive overload and waiting long enough to understand and process sufficient source language messages so as to produce meaningful interpretations? An onset time lag longer than 4 seconds and a tail-to-tail span (the interval between the end of a source language sentence and the end of its interpretation) longer than 5 seconds affected professional interpreters’ English-to-Korean (second language-to-first language) simultaneous interpreting performance (Lee, 2002, 2003). When interpreters’ time lag was long, they spoke fast in order to catch up with the speaker

(Lee, 2002). Similarly, Lamberger-Felber (2001) found that experienced interpreters’ time lag that was longer than the average value resulted in long omissions (i.e., omissions of more than 15 words in the source text). In addition, Barik (1973) found a positive correlation between interpreters’ time lag and their total number of omissions. Timarová et al. (2014) found significant, negative correlations between professional interpreters’ time lag and their success in interpreting complex source language sentences, numbers, and double negation. Taken together, these results indicate that a shorter time lag is associated with higher accuracy in simultaneous interpretation.

However, in his analyses of four certified interpreters’ English-to-American Sign Language simultaneous interpretation, Cokely (1986) found that two interpreters who had an average onset time lag of 2 seconds made more omissions, additions, substitutions, intrusions, and anomalies than the other two interpreters who had an average onset time lag of 4 seconds, indicating an inverse relationship between the length of time lag and the number of interpretation miscues. Cokely explained that those interpreters with a longer time lag had more time to analyze source language information and formulate acceptable target language renditions. It is worth noting that Lee’s (2002, 2003) and Cokely’s (1986) selection of 2 seconds and 4 seconds of time lag to group participants was arbitrary. Similarly, McKee and Napier (2002) examined interpreters’ strategies in English-to-International Sign simultaneous interpreting and found that participants typically worked with an extended time lag (usually between 10 and 16 seconds and, at times, even more) in order to maximize the effective analysis of source language messages, the reformulation of conceptually equivalent target language messages, and the use of contextual information.
These interpreters not only stretched the upper limits of their storage and processing capacity (working memory capacity), but also appeared to be consciously aware of how much processing time they needed to be able to produce a meaningful interpretation. These interpreters also adjusted their time lag to cope with syntactical differences between the source language and the target language.

In addition, in a corpus-based study of French-to-Dutch simultaneous interpreting, Defrancq (2015) found that short (2 seconds) and very short (1 second) time lags of professional interpreters were significantly more frequent in contexts where cognate translations (target language items that were phonetically similar to source language items) occurred than elsewhere, indicating that interpreters’ overly short time lags seem to

result in surface-level processing of the source text. Taken together, these findings highlight the importance of using an appropriate time lag at a particular moment during simultaneous interpreting and indicate an intricate relationship between time lag, language combination, and accuracy of simultaneous interpretation. In summary, previous findings indicate that time lag is an interesting and promising metric that offers insights into the cognitive processing in simultaneous interpreting (Timarová et al., 2011). It should be noted that methodological differences and weaknesses in the previous studies may constrain the generalizability of their results. Most of the previous studies focused on quantitative analyses of time lag, rather than qualitative analyses of time lag in relation to accuracy and interpreting strategies. There is a critical research gap regarding time lag in signed language simultaneous interpreting, and the study presented here aimed to bridge this gap by exploring professional interpreters’ time lag in Auslan-to-English simultaneous interpreting.
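The two temporal measures that run through this literature, onset time lag (a start-to-start interval) and tail-to-tail span (an end-to-end interval), reduce to simple timestamp arithmetic. The sketch below illustrates the calculation in Python; the timestamps and the handling of an omitted sentence are invented for illustration and are not data from any of the studies cited.

```python
# Illustrative computation of onset time lag and tail-to-tail span.
# All timestamps are in seconds; the values are invented for this example.

def onset_time_lag(source_start, target_start):
    """Interval between the start of a source sentence and the start of its interpretation."""
    return target_start - source_start

def tail_to_tail_span(source_end, target_end):
    """Interval between the end of a source sentence and the end of its interpretation."""
    return target_end - source_end

# (source_start, source_end, target_start, target_end) per sentence;
# None marks a sentence whose interpretation was entirely omitted,
# which is treated as a missing value rather than a lag of zero.
sentences = [
    (0.0, 4.2, 3.1, 8.0),
    (4.5, 9.0, 8.6, 13.9),
    (9.3, 12.8, None, None),  # omitted sentence -> missing value
]

onsets = [onset_time_lag(s, ts) for s, _, ts, _ in sentences if ts is not None]
tails = [tail_to_tail_span(e, te) for _, e, _, te in sentences if te is not None]

print(onsets)  # per-sentence onset time lags
print(tails)   # per-sentence tail-to-tail spans
```

A positive onset value means the interpreter starts after the signer; treating omissions as missing values keeps them from artificially shrinking the averages.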

Research Methods

Participants

Participants were 30 professional-level Auslan–English interpreters, including 14 native signers and 16 nonnative signers. Their pseudonyms and educational details can be found in Appendix 6.1 (for further information about their demographic profiles and the research methods, see Wang [2016]).

Materials

A deaf native Auslan signer provided an Auslan source text by giving a signed presentation entitled “Deaf People and Human Rights” for a mock national conference. He started with a 3-minute introduction, paused for a while, and then gave a 17-minute formal presentation. The researcher and a professional Auslan–English interpreter (a hearing native signer) served as the deaf signer’s audience. The researcher filmed the deaf signer’s Auslan presentation using a Flip UltraHD camera, creating an mp4 video. Due to logistical constraints, the video of the Auslan presentation only contained the deaf signer signing, without showing his PowerPoint slides in the background.

Procedure

To encourage prior preparation, the following materials were distributed to participants before data collection: task instructions, a glossary, and PowerPoint slides for the Auslan presentation. Each participant was filmed simultaneously interpreting the Auslan video shown on a computer. The filming focused on the Auslan presentation but still captured each participant’s English interpretation.

Data Analysis

Given the large dataset, two Auslan segments, containing 11 Auslan sentences (Table 6.1), were selected for data analysis. The first segment included four sentences containing numbers (sentences 1–4); the second segment consisted of five sentences that ended with negation (sentences 5, 7, 8, 10, and 11) and two sentences without such features (sentences 6 and 9). Participants’ interpretation videos were imported into ELAN, a computer software program for annotating video data. The beginning of each Auslan sentence and of its English interpretation was identified to measure interpreters’ onset time lag. In addition, the time when an Auslan sentence finished and the time when its English interpretation ended were identified to calculate interpreters’ tail-to-tail span. Time lag was expressed as the number of seconds the interpreter lagged behind the signer. Time lag was measured even if an Auslan sentence was only partially interpreted. Auslan sentences that were entirely omitted were treated as missing values. Simultaneous interpreting performance was assessed by the researcher at the sentence level. An English interpretation of an Auslan sentence was deemed accurate if the essential propositional meaning (i.e., who does what to whom, or subject-verb-object) was faithfully conveyed.

Results and Discussion

This section presents quantitative results of interpreters’ time lag before using representative examples to illustrate how their time lag was closely related to effective interpreting strategies. Findings are discussed in relation to previous literature.

Substantial Variability in Time Lag

Table 6.2 displays the mean, median, and range values of interpreters’ time lag for each of the 11 Auslan sentences. As indicated by the mean

(standard deviation) and range values, there was high variability among interpreters in terms of both onset time lag and tail-to-tail span. There was also substantial variability in terms of both onset time lag and

Table 6.1. Eleven Auslan Sentences Selected for Time Lag Measurement.

1. Number near the end
   bilingual have nod plus have other altogether countries have twenty-three countries have that
   Literal English translation: There were 23 countries that reported having a bilingual education option alongside other approaches.

2. Number near the end
   other group say have total communication plus other methods education thirty-five countries
   Literal English translation: Another group of countries, numbering 35, described having a total communication approach with other methods of education available.

3. Number at the end
   those only total communication speak-sign altogether thirty-one
   Literal English translation: Thirty-one respondents replied that their country had a total communication approach only, which incorporates speaking and signing at the same time.

4. Number at the end
   other group only oral sign ban five
   Literal English translation: A further five countries reported an oral only approach that bans signed language.

5. Negation at the end
   s-o altogether research talk many many block (prevent) deaf people no few countries those put-down for deaf people access to education government or perception equal same same other people
   Literal English translation: So, in summary, the research showed that only in a few countries were deaf people shunned in their societies, and that their situation was not good in terms of access to education and government and also in terms of how well deaf people were seen as equal to other people.

6. (No number or final negation)
   b-u-t have what l-a-c-k o-f recognition approve sign language also l-a-c-k o-f bilingual education
   Literal English translation: But, there was a lack of recognition of signed languages. Also, there was a lack of bilingual education.

7. Negation at the end
   many many many have sign language interpreting services? not really
   Literal English translation: Not many countries reported having signed language interpreting services.

8. Negation at the end
   people community know about deaf people? no
   Literal English translation: The hearing community was not very aware of the situation of deaf people.

9. (No number or final negation)
   means many those deaf people their lives can access t-o services really oppressed
   Literal English translation: This means that many of those deaf people were really oppressed when accessing services.

10. Negation at the end
    so those deaf people true enjoy enjoy enjoy equal human rights? no
    Literal English translation: So, deaf people do not yet enjoy full and equal human rights.

11. Negation at the end
    so summarize that say deaf people equal like other round world have have have? no
    Literal English translation: So, in closing, deaf people the world over are not yet equal to their hearing counterparts.

Note. Sentences 1–4 were adjacent; sentences 5–11 were adjacent.

tail-to-tail span across different measurement points for each interpreter (i.e., fluctuation of time lag throughout the Auslan source text for each interpreter). These findings are consistent with previous research (e.g., Timarová et al., 2011). Onset Time Lag and the Accuracy of Interpretation Table 6.3 shows the results of independent samples t-tests comparing accurate interpretations with inaccurate interpretations in terms of onset

Table 6.2. Descriptive Statistics for Interpreters' Onset Time Lag and Tail-to-Tail Span.

Onset time lag (seconds)

Sentence   N    Mean (SD)     Median   Range
1          30   4.04 (1.97)   3.83     0.67–8.17
2          30   3.71 (2.28)   3.00     0.66–9.10
3          30   4.27 (1.60)   3.90     2.06–7.76
4          30   3.43 (1.02)   3.36     1.56–5.30
5          30   6.68 (3.98)   5.92     0.95–15.40
6          29   4.63 (2.74)   4.48     0.03–13.50
7          29   4.07 (2.35)   3.88     1.35–11.98
8          28   3.85 (1.91)   4.41     0.80–7.42
9          26   4.75 (2.21)   4.65     1.73–9.00
10         30   4.54 (1.83)   4.71     1.50–9.95
11         29   4.43 (2.55)   3.63     1.08–10.08

Tail-to-tail span (seconds)

Sentence   N    Mean (SD)     Median   Range
1          30   2.44 (2.13)   1.99     −1.43 to 8.60
2          30   2.10 (1.45)   2.17     −0.41 to 5.93
3          30   1.54 (1.42)   1.54     −0.62 to 5.07
4          30   2.97 (1.61)   3.18     −0.31 to 8.56
5          29   3.25 (3.12)   3.04     −0.60 to 13.59
6          29   2.21 (2.56)   1.43     0.34 to 11.00
7          28   2.16 (2.18)   1.72     −0.42 to 8.65
8          28   3.99 (2.01)   3.47     1.04 to 10.19
9          26   3.04 (2.20)   2.73     −0.37 to 10.27
10         30   3.42 (2.92)   2.81     −1.24 to 10.43
11         29   2.41 (1.74)   1.94     −0.14 to 5.91

Note. N refers to the number of interpreters who interpreted a particular Auslan sentence.

Table 6.3. Mean Onset Time Lag (Seconds) for Accurate Versus Inaccurate Interpretations.

           Accurate             Inaccurate
Sentence   N    Mean (SD)       N    Mean (SD)      t       p       η²
1          4    3.68 (2.35)     26   4.09 (1.96)    −0.38   0.71    —
2          14   3.93 (2.20)     16   3.52 (2.40)    0.48    0.63    —
3          17   3.78 (1.42)     13   4.92 (1.64)    −2.05   0.05    0.13
4          24   3.47 (0.97)     6    3.28 (1.27)    0.40    0.69    —
5          17   7.62 (3.97)     13   5.46 (3.81)    1.50    0.14    —
6          18   3.83 (1.90)     11   5.95 (3.44)    −2.14   0.041   0.15
7          4    5.04 (0.66)     25   3.91 (2.50)    0.89    0.38    —
8          20   4.21 (2.01)     8    2.97 (1.38)    1.60    0.12    —
9          17   4.53 (2.24)     9    5.15 (2.23)    −0.67   0.51    —
10         14   5.28 (1.77)     16   3.90 (1.68)    2.20    0.036   0.15
11         10   4.18 (2.06)     19   4.55 (2.82)    −0.37   0.72    —

Note. N in the "Accurate" columns refers to the number of accurate interpretations. All p values are two-tailed; p ≤ 0.05 indicates a significant difference (sentences 3, 6, and 10). The effect size η² (0.01 = small effect, 0.06 = moderate effect, 0.14 = large effect) was calculated only when there was a significant difference.

time lag. For all but three Auslan sentences (sentences 3, 6, and 10), interpreters' onset time lag had no significant impact on the accuracy of their English interpretations. Note that the small sample size and the unbalanced numbers of accurate and inaccurate interpretations might have contributed to these results. Overall, the findings did not show a clear relationship between interpreters' onset time lag and their accuracy of interpretation. They contradict previous findings of clear-cut relations between interpreters' time lag and their simultaneous interpreting performance (e.g., Barik, 1973; Cokely, 1986; Lee, 2002; Timarová et al., 2014). This discrepancy is not surprising because, apart from onset time lag, many other factors may affect the accuracy of simultaneous interpretation (Best et al., 2016; Gile, 2009; Kim, 2005): interpreters' proficiency in the source and target languages; simultaneous interpreting experience; prior preparation; comprehension of source language messages; familiarity with the topic, the signer, and the context; use of effective interpreting strategies; and cooperation between the signer and the interpreter. Further research is required in this area.
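As a check on the figures in Table 6.3, the reported t and η² values can be recomputed from the published summary statistics alone. The sketch below assumes a standard pooled-variance independent samples t-test and the usual η² = t²/(t² + df) formula (the chapter does not state which t-test variant was run, so this is an illustrative reconstruction, not the authors' analysis script), using the sentence 6 row:

```python
import math

# Summary statistics for sentence 6, taken from Table 6.3; small
# last-decimal discrepancies arise from rounding in the published table.
m_acc, sd_acc, n_acc = 3.83, 1.90, 18        # accurate interpretations
m_inacc, sd_inacc, n_inacc = 5.95, 3.44, 11  # inaccurate interpretations

df = n_acc + n_inacc - 2  # degrees of freedom for a pooled-variance t-test

# Pooled variance, then the Student's t statistic
sp2 = ((n_acc - 1) * sd_acc**2 + (n_inacc - 1) * sd_inacc**2) / df
t = (m_acc - m_inacc) / math.sqrt(sp2 * (1 / n_acc + 1 / n_inacc))

# Eta-squared effect size for an independent samples t-test
eta_sq = t**2 / (t**2 + df)

print(f"t({df}) = {t:.2f}, eta^2 = {eta_sq:.2f}")
```

This reproduces the tabled t = −2.14 and η² = 0.15 to within the rounding of the published means and standard deviations, and the same formulas recover the η² values of 0.13 and 0.15 reported for sentences 3 and 10.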

Relation of Time Lag to Effective Interpreting Strategies

Auslan Sentences with Numbers

Only three interpreters (Liz, Amber, and Jane) correctly interpreted all four Auslan sentences containing numbers (sentences 1–4). Liz's time lag pattern and interpreting strategies differed from Amber's and Jane's, who were similar in both respects, as illustrated by Example 6.1.

Example 6.1: Time Lag and Interpreting Strategies in Relation to Numbers

Auslan Sentences:
Sentence 1: bilingual have nod plus have other altogether countries have twenty-three countries have that
Sentence 2: other group say have total communication plus other methods education thirty-five countries
Sentence 3: those only total communication speak-sign altogether thirty-one
Sentence 4: other group only oral sign ban five

Literal English Translation:
1. There were 23 countries that reported having a bilingual education option alongside other approaches.
2. Another group of countries, numbering 35, described having a total communication approach with other methods of education available.
3. Thirty-one respondents replied that their country had a total communication approach only, which incorporates speaking and signing at the same time.
4. Five countries reported an oral only approach, banning signed language.

Liz's Interpretation:
1. [onset time lag: 0.67 second; this interpretation started when he was signing bilingual in sentence 1] We wanted to look at countries that have bilingual education as well as other education. So (pause: 2.1 seconds), 23 countries have bilingual education as one of, the other methods of, several methods of education [tail-to-tail span: 8.6 seconds; this interpretation ended when he was signing countries in sentence 2].
2. [9.1 seconds; this interpretation started when he was signing those in sentence 3] 35 have total communication as one of several approaches [3.96 seconds; this interpretation ended when he was signing speak in sentence 3].
3. [5.16 seconds; this interpretation started when he was signing thirty-one in sentence 3] 31 countries have total communication as the only method of communi education [2.84 seconds; this interpretation ended when he was signing sign in sentence 4].
4. [3.76 seconds; this interpretation started when he was signing five in sentence 4] And 5 countries have the, have oral education as the only method of communication in education [3.89 seconds].

Amber's Interpretation:
1. [4.02 seconds; this interpretation started when he was signing altogether in sentence 1] Some countries have a bilingual education among other options for educations, that was 23 countries [2.53 seconds; this interpretation ended when he was signing have in sentence 2].
2. [4.19 seconds; this interpretation started when he was signing plus in sentence 2] Other countries say total communication as well as other methods of education, that was 35 countries [2.92 seconds; this interpretation ended when he was signing total communication in sentence 3].
3. [3.33 seconds; this interpretation started when he was signing speak in sentence 3] And the countries that responded saying that "only total communication was used" was 31 countries [2.51 seconds; this interpretation ended when he was signing oral in sentence 4].
4. [4.2 seconds; this interpretation started when he was signing five in sentence 4] And those using an oral method only without sign language at all was 5 countries [3.47 seconds].
Starting with a short time lag (0.67 second), Liz (a nonnative signer with 9 years of interpreting experience) introduced the topic of Auslan sentence 1 with a "pat phrase" ("talking without actually saying much," Nilsson, 2016, p. 34): "We wanted to look at countries that . . . education." She then uttered the number of these countries ("23") and repeated the topic, with the redundancy resulting in her long tail-to-tail span (8.6 seconds, almost the duration of the entire sentence 2). Consequently, when she started to interpret sentence 2, the signer had already begun producing the next Auslan sentence. Aware that she was lagging quite far behind the deaf signer (her onset time lag for sentence 2 was 9.1 seconds), she switched to a different interpreting strategy, conveying the meaning of sentence 2 concisely and idiomatically by saying the number first ("35") and then elaborating on the concept ("have total communication as one of several approaches"). This succinct English rendition bought her time and enabled her to catch up with the deaf signer. As soon as she saw the number thirty-one (i.e., the final sign of sentence 3), she began her interpretation by saying the number ("31") and then the concept ("countries have total communication as the only method of communication education"). Probably realizing that these Auslan sentences shared the same structure (i.e., concept first and then number), as soon as she saw the number five, the final sign of sentence 4, she again rendered the number first and then the concept.

Note that at the beginning of three Auslan sentences (2, 3, and 4), Liz lagged behind the signer by almost a whole Auslan sentence. She was observed to check the printed PowerPoint slides on the table constantly while interpreting these sentences to ensure high accuracy. Her long time lag enabled her to accurately reformulate the meaning of the Auslan source text into idiomatic English (i.e., specifying the number before introducing the particular countries).
Liz's extended time lag and effective interpreting strategy corroborate McKee and Napier's (2002) observation that some interpreters with highly developed storage and processing skills work with a very long time lag in order to maximize effective message analysis and the reformulation of a conceptually equivalent target language message. Variations in Liz's time lag throughout these four Auslan sentences support McKee and Napier's (2002) view that interpreters not only consciously monitor their time lag but also adjust it to cope with syntactical differences between the source language and the target language.

Amber's time lag was generally shorter than Liz's. Unlike Liz, Amber consistently established the concept first by using indefinite phrases (e.g., "Some countries . . .") and then specified the number (e.g., "23 countries") at the end of her English interpretations. By keeping a moderate onset time lag of 3 to 4.2 seconds (about half of an Auslan sentence), Amber followed the deaf signer's train of thought, introducing the topic first—countries having what type(s) of education—and then finishing off the English interpretations with a specific number. This effective interpreting strategy prevented her from lagging exceptionally far behind the deaf signer when starting and finishing her interpretations, thus avoiding cognitive overload. Amber's time lag pattern and effective interpreting strategy lend credence to Nilsson's (2016) and Taylor's (2002) recommendation that interpreters can sometimes begin an interpretation by slowly saying an introductory sentence or a neutral phrase while still waiting for the speaker's or signer's essential idea, in order to avoid both unbearably long pauses in the interpretation and cognitive overload.

The qualitative analyses just described demonstrate that interpreters' time lag is closely linked to their choice of interpreting strategies (see Kim, 2005; Lee, 2002), that diverse patterns of time lag in association with appropriate interpreting strategies can still result in diverse but accurate interpretations, and that individual interpreters' time lag is dynamic across time.

Auslan Sentences with End Negation

An Auslan sentence or phrase may be negated by a headshake (sometimes accompanied by nonmanual features such as frowning, squinting, or pouting) or by manual signs such as not, nothing, not-yet, and never (Johnston & Schembri, 2007). Detailed analyses of accurate interpretations of the five Auslan sentences with end negation (sentences 5, 7, 8, 10, and 11) revealed four types of effective interpreting strategies used by interpreters:

1. Wait and start the interpretation only when or after seeing the end negation.
2. Begin the interpretation before seeing the end negation so as to convey some source language information, and then pause to receive and process further information (e.g., the end negation) before resuming the interpretation to convey the end negation appropriately.
3. Start the interpretation before seeing the end negation in order to introduce the topic of the Auslan sentence, and then use appropriate syntactical structures to express the end negation near the end of the target language sentence.
4. Start the interpretation before seeing the end negation, successfully anticipate the end negation, and then render it correctly into English before seeing it.

These effective interpreting strategies are illustrated by Examples 6.2 and 6.3.

Example 6.2: Effective Interpreting Strategies 1 and 2 in Relation to End Negation

Auslan sentence 5: s-o altogether research talk many many block (prevent) deaf people no few countries those put-down for deaf people access to education government or perception equal same same other people

Literal English translation: So, in summary, the research showed that only in a few countries were Deaf people shunned in their societies, and that their situation was not good in terms of access to education and government and also in terms of how well Deaf people were seen as equal to other people.

Sophia's interpretation: [9.5 seconds; this interpretation started when he was signing those in this Auslan sentence] So, there were only a few countries that said that their countries uh very much oppressed deaf people where there was no education, and where they weren't seen as equal people. Most countries, deaf people felt that they were equal [4.84 seconds].

Shannon's interpretation: [3.93 seconds; this interpretation started when he was pausing between talk and many] So, throughout the research (pause: 3.37 seconds), [resuming after seeing no few in this Auslan sentence] we must say that there were very few countries that were reluctant to provide services for deaf people (pause: 1.73 seconds), for example, denying access to government or education, or, ah, not being seen as an equal citizen. There were small numbers in that respect [4.47 seconds].

The Auslan sentence in Example 6.2 is unique: the first part (s-o . . . no) can be considered a clause with end negation, and the second part (few countries . . . other people) can be regarded as another clause that reiterates the previous essential idea (not many countries prevent deaf people) and then expands on it in terms of three aspects (education, government services, and perceiving deaf people as equal to hearing people). Given the reiteration of the essential meaning, this source text was deemed one complex sentence rather than two simple sentences.

Of the 17 interpreters who correctly interpreted this Auslan sentence, 10 (59%; e.g., Sophia) started their interpretation when or after seeing the sign no, and the other 7 (41%; e.g., Shannon) began before seeing it. Although both Sophia and Shannon provided highly accurate interpretations of this Auslan sentence, their time lags and effective interpreting strategies were different.

Sophia had an extended onset time lag of 9.5 seconds. This is in line with McKee and Napier's (2002) and Cokely's (1986) argument that an interpreter's extended time lag can allow both an in-depth analysis of sufficient source language messages and the creation of equivalent, meaningful, and coherent target language renditions. To minimize cognitive overload, Sophia began her interpretation with the crucial topic that she had been waiting for ("only a few countries"), expressing the essential idea only once so as to save time, save cognitive resources, and keep up with the deaf signer. Given that extended time lag can lead to significant omissions (Lamberger-Felber, 2001; Lee, 2002), Sophia's unjustifiable omission of government in this Auslan sentence may be attributable to her lagging quite far behind the deaf signer (i.e., her long onset time lag). Aware of the deaf signer's intent to emphasize the previously mentioned key point, she reiterated it in a short summary at the end of her English interpretation. Overall, Sophia employed interpreting strategy 1 (from the list given earlier).

Compared with Sophia, Shannon used a moderate onset time lag of 3.93 seconds, making use of the deaf signer's 1.8-second pause between talk and many to quickly finish processing the initial Auslan message (s-o altogether research talk).
He then paused to wait for more Auslan meaning units and resumed his interpretation after seeing the negation. These actions indicate that he was consciously segmenting the Auslan source text in terms of meaning and constantly monitoring the length of his time lag. Interestingly, he paused again after conveying the essential idea, before expanding on its three relevant aspects ("for example, denying . . . citizen"). Shannon used interpreting strategy 2. He not only took advantage of the deaf signer's pauses to promptly finish processing chunks of source language information (see Barik, 1973), but also strategically used his own pauses to pace his English interpretation and wait for more source language messages. His moderate time lag and strategic use of pauses indicate that he processed source language meaning units quite fast (see Timarová et al., 2014), quickly releasing information stored in his working memory to make room for upcoming new information. Like Sophia, Shannon finished off his interpretation by reiterating the key point.

Example 6.3: Effective Interpreting Strategies 3 and 4 in Relation to End Negation

Auslan sentence 11: so summarize say deaf people equal like other round world have have have? no

Literal English translation: So, in closing, deaf people the world over are not yet equal to their hearing counterparts.

Molly's interpretation: [3.16 seconds; this interpretation started when he was signing people in this Auslan sentence] So, I guess what we saw was that (micro-pause), equality for deaf people (pause: 1.19 seconds), is not, has not been achieved yet [1.45 seconds].

Zoe's interpretation: [2.1 seconds; this interpretation started when he was signing say in this Auslan sentence] So, in summary, what it states is that, deaf people have not become as equal as others in the wider community [2.3 seconds].

Regarding Example 6.3, of the 10 participants who produced accurate interpretations of this Auslan sentence, nine (90%; e.g., Molly, Zoe) started interpreting before seeing the end negation, and only one began after seeing it. Starting with a moderate time lag, Molly used a "pat phrase" ("I guess what we saw was that") and then paused briefly to buy time and receive more Auslan information. After seeing deaf people equal like other round world, she introduced the topic ("equality for deaf people") and paused again to wait for more Auslan messages. During this pause, she saw the end negation and then used an appropriate syntactic structure ("has not been achieved yet") to convey it at the end of her English interpretation. Molly adopted interpreting strategy 3.

Zoe started her interpretation slightly earlier than Molly. Zoe's first four phrases ("So, in summary, what it states is that, deaf people") closely match the deaf signer's first five signs (so summarize say deaf people). Interestingly, she was uttering "have not" while the deaf signer was signing round world, indicating that she successfully predicted the signer's end negation; she was saying "become" while he was signing no. In other words, Zoe used interpreting strategy 4. Her successful anticipation of the end negation might be attributable to her thorough advance preparation, her good grasp of the deaf signer's key messages across the entire Auslan presentation, her attention to the signer's nonmanual features that signaled the upcoming end negation, and her good knowledge of the context (i.e., four other Auslan sentences with end negation preceded Auslan sentence 11). Interestingly, Zoe also successfully anticipated the end negation of Auslan sentence 10.

A Long Time Lag Is Necessary When the End Negation Negates the First Sign

Example 6.4 illustrates that a long time lag is required when interpreters effectively process an Auslan sentence whose end negation negates its first sign.

Example 6.4: A Long Time Lag Sometimes Is a Prerequisite for an Accurate Interpretation

Auslan sentence 7: many many many have sign language interpreting services? not really

Literal English translation: Not many countries reported having signed language interpreting services.

Amber's interpretation: [5.65 seconds; this interpretation started when he was signing the second sign of the following Auslan sentence] a lack (micro-pause) of interpreting services [3.4 seconds]

Emily's interpretation: [5.39 seconds; this interpretation started when he was pausing after shaking his head to express not really] and a lack of sufficient sign language interpreting services [4.06 seconds]

Molly's interpretation: [4.14 seconds; this interpretation started when he was shaking his head to mean not really] not having many sign language interpreters [0.97 seconds]

Sam's interpretation: [4.99 seconds; this interpretation started when he was shaking his head to indicate not really] So, sign language interpreter services are limited [2.49 seconds].

Auslan sentence 7 is particularly challenging for two reasons: (1) the negation takes the form of a headshake accompanied by squinting and might not be recognized by some interpreters; and (2) the headshake negates the first sign of the sentence (many), requiring interpreters to wait for the end of the Auslan sentence in order to produce an accurate English interpretation. Only 4 (13%) of the 30 participants (Amber, Emily, Molly, and Sam) correctly rendered this Auslan sentence into English. What they had in common is that they waited and started their English interpretations only after seeing and understanding the deaf signer's headshake at the end of the Auslan sentence. The other 26 participants either began their English interpretations too early (i.e., before seeing the signer's headshake) or started their target language rendition after seeing the headshake but did not realize that it negated the first sign many. Hence, both a sufficiently long time lag and a correct understanding of subtle negation (e.g., a headshake) are vital to an accurate interpretation of this type of signed language sentence.

Consequences of Exceptionally Short or Long Time Lags

Excessively short time lags were observed to cause problems such as misunderstanding of the deaf signer's messages, surface-level processing of the source text, source language intrusion in target language renditions, frequent false starts, overly literal interpretations, and ungrammatical or unidiomatic use of the target language. This finding is consistent with previous research (e.g., Cokely, 1986; Defrancq, 2015). Likewise, excessively long time lags were also problematic, causing significant omissions and inaccurate interpretations of the subsequent source language sentences. These findings corroborate Lee's (2002, 2003) results.

This study is limited by the fact that participants completed the Auslan-to-English simultaneous interpreting task in an artificial setting: they interpreted a two-dimensional video of the deaf signer's Auslan presentation, without PowerPoint slides projected in the background, without a team interpreter, and without a group of audience members.

Conclusion

This study has investigated professional interpreters' time lag in Auslan-to-English simultaneous interpreting, focusing on two types of Auslan sentences: those containing numbers near or at the end, and those ending with negation. There was high variability among participants in terms of both onset time lag and tail-to-tail span. For individual interpreters, time lag also varied considerably across measurement points within the Auslan presentation. Results did not show a clear-cut relationship between interpreters' onset time lag and the accuracy of Auslan-to-English simultaneous interpretation. Qualitative results revealed that interpreters' time lag was closely related to their effective interpreting strategies, indicating that diverse time lags in combination with appropriate interpreting strategies can result in diverse but accurate interpretations. A long time lag is necessary for interpreting signed language sentences whose end negation negates their first sign. Excessively long or short time lags proved to be problematic.

These findings have implications for interpreter training and interpreting practice. Interpreting students and practitioners need to realize that different time lag patterns can be used in combination with various effective interpreting strategies to create diverse but conceptually equivalent interpretations. In view of their own working memory capacity, they also need to consciously monitor and adjust their time lag to cope with specific challenges in simultaneous interpreting and to strike a cognitive balance so as to produce coherent and faithful interpretations.

Further research could compare interpreters' time lag in signed-to-spoken language simultaneous interpreting with that in the other language direction. It would be interesting to further explore the notion of an optimum time lag in relation to various language pairs and text types. Interpreters' introspection on their management of time lag in simultaneous interpreting also merits further exploration.

References

Barik, H. C. (1973). Simultaneous interpretation: Temporal and quantitative data. Language and Speech, 16(3), 237–270.

Barik, H. C. (1975/2002). Simultaneous interpretation: Qualitative and linguistic data. In F. Pöchhacker & M. Shlesinger (Eds.), The interpreting studies reader (pp. 78–91). London, United Kingdom: Routledge.

Best, B., Napier, J., Carmichael, A., & Pouliot, O. (2016). From a Koine to Gestalt: Critical points and interpreter strategies in interpretation from International Sign into spoken English. In R. Rosenstock & J. Napier (Eds.), International sign: Linguistic, usage and status issues (pp. 136–166). Washington, DC: Gallaudet University Press.

Cokely, D. (1986). The effects of lag time on interpreter errors. Sign Language Studies, 53(winter), 341–375.

Defrancq, B. (2015). Corpus-based research into the presumed effects of short EVS. Interpreting, 17(1), 26–45.

Díaz-Galaz, S., Padilla, P., & Bajo, M. T. (2015). The role of advance preparation in simultaneous interpreting: A comparison of professional interpreters and interpreting students. Interpreting, 17(1), 1–25.

Gerver, D. (1969/2002). The effects of source language presentation rate on the performance of simultaneous conference interpreters. In F. Pöchhacker & M. Shlesinger (Eds.), The interpreting studies reader (pp. 52–66). London, United Kingdom: Routledge.

Gile, D. (2009). Basic concepts and models for interpreter and translator training (revised ed.). Philadelphia, PA: John Benjamins.

Goldman-Eisler, F. (1972/2002). Segmentation of input in simultaneous translation. In F. Pöchhacker & M. Shlesinger (Eds.), The interpreting studies reader (pp. 69–76). London, United Kingdom: Routledge.

Johnston, T., & Schembri, A. (2007). Australian Sign Language (Auslan): An introduction to sign language linguistics. Cambridge, United Kingdom: Cambridge University Press.

Kim, H.-R. (2005). Linguistic characteristics and interpretation strategy based on EVS analysis of Korean-Chinese, Korean-Japanese interpretation. Meta, 50(4). Retrieved from http://s3.amazonaws.com/zanran_storage/www.erudit.org/ContentPages/27853488.pdf

Lamberger-Felber, H. (2001). Text-oriented research into interpreting: Examples from a case study. Hermes, Journal of Linguistics, 26, 39–64.

Lee, T. (2002). Ear voice span in English into Korean simultaneous interpretation. Meta, 47(4), 596–606.

Lee, T. (2003). Tail-to-tail span: A new variable in conference interpreting research. Forum, 1(1), 41–62.

Lee, T. (2006). A comparison of simultaneous interpretation and delayed simultaneous interpretation from English into Korean. Meta, 51(2), 202–214.

McKee, R., & Napier, J. (2002). Interpreting into International Sign Pidgin: An analysis. Journal of Sign Language and Linguistics, 5(1), 27–54.

Nilsson, A.-L. (2016). Interpreting from signed language into spoken language: The skills and knowledge needed to succeed. In A. Kalata-Zawłocka & B. van den Bogaerde (Eds.), To say or not to say—challenges of interpreting from sign language to spoken language: Proceedings of the 23rd efsli Conference in Warsaw, Poland, 11th–13th September 2015 (pp. 15–48). Brussels, Belgium: European Forum of Sign Language Interpreters.

Pöchhacker, F. (2004). Introducing interpreting studies. London, United Kingdom: Routledge.

Podhajská, K. (2008). Time lag in simultaneous interpretation from English into Czech and its dependence on text type. Folia Translatologica, 10, 87–110.

Taylor, M. M. (2002). Interpretation skills: American Sign Language to English. Edmonton, Alberta, Canada: Interpreting Consolidated.

Timarová, Š. (2015). Time lag. In F. Pöchhacker (Ed.), Routledge encyclopedia of interpreting studies (pp. 418–420). London, United Kingdom: Routledge.

Timarová, Š., Cenková, I., Meylaerts, R., Hertog, E., Szmalec, A., & Duyck, W. (2014). Simultaneous interpreting and working memory executive control. Interpreting, 16(2), 139–168.

Timarová, Š., Dragsted, B., & Hansen, I. G. (2011). Time lag in translation and interpreting: A methodological exploration. In C. Alvstad, A. Hild, & E. Tiselius (Eds.), Methods and strategies of process research: Integrative approaches in translation studies (pp. 121–146). Philadelphia, PA: John Benjamins.

Treisman, A. M. (1965). The effects of redundancy and familiarity on translating and repeating back a foreign and a native language. British Journal of Psychology, 56(4), 369–379.

Wang, J. (2016). The relationship between working memory capacity and simultaneous interpreting performance: A mixed methods study on professional Auslan/English interpreters. Interpreting, 18(1), 1–33.


Appendix 6.1
Pseudonyms and Educational/Professional Details of Participants

Native Signers (N = 14)

Pseudonym   University Qualification   Interpreter Education   Years of Interpreting Experience
Alex        Undergraduate              None                    23
Amber       None                       A                        3
Annie       Postgraduate               A, C                    15
Charlie     None                       A                        3
Cynthia     None                       None                    27
Emily       None                       None                    14
Lauren      None                       A                        3
Kay         Postgraduate               None                    25
Monica      None                       A                       12
Rachael     Postgraduate               None                    23
Sam         None                       A, B                    15
Shannon     Postgraduate               A, C                    14
Tiffany     None                       None                    22
Zoe         Postgraduate               C                       24

Nonnative Signers (N = 16)

Pseudonym   University Qualification   Interpreter Education   Years of Interpreting Experience
Bernie      Undergraduate              A                       20
Claire      Postgraduate               None                    25
Debbie      PhD                        None                    27
Dorothy     Postgraduate               A, C                    15
Helen       Postgraduate               A                       21
Jane        PhD                        A, C                     5
Jennifer    Undergraduate              None                     4
Linda       Postgraduate               A, C                     6
Liz         Undergraduate              A                        9
Mary        Postgraduate               C                       13
Miranda     Postgraduate               A, C                     8
Molly       Postgraduate               A, C                    18
Sabrina     None                       A                        9
Sophia      None                       A                       10
Vicky       None                       A, B                    11
Wendy       Postgraduate               A, C                    18

Note. A: Diploma of Interpreting (Auslan/English); B: Advanced Diploma of Interpreting (Auslan/English); C: Postgraduate Diploma in Auslan–English Interpreting or equivalent.

Chapter 7 Examining the Acoustic Prosodic Features of American Sign Language-to-English Interpreting Sanyukta Jaiswal, Eric Klein, Brenda Nicodemus, and Brenda Seal

The act of interpreting between American Sign Language (ASL) and English occurs every day in a variety of settings across the United States. Signed language interpreters provide services for situations as diverse as mental health counseling, wedding receptions, hospice care, classroom instruction, court proceedings, and business transactions. Whatever the setting, the overarching goal of signed language interpreters is to render messages that are faithful to the expressed meaning across languages. Message fidelity between languages may be expressed at the lexical and syntactic level, but is also rendered through prosodic information (e.g., intonation, stress, pausing). Prosody carries integral information about the semantic, morphological, and syntactic aspects of a language, as well as conveying the emotional intent of a message.

When people engage in direct communication, they have control over their own congruency of prosodic features; however, this control shifts in an interpreted interaction. When communication is interpreted, the interlocutors (or communicating participants) receive prosodic information secondhand, through the linguistic output of the interpreter. Prosodic expression in signed language interpretation requires additional consideration because of the differences in language modalities (sign-speech). Speakers and signers express prosody in distinct phonological cues that may not be understood by the interlocutors. Accurately conveying affective information is especially critical in situations in which communicators also do not have visual access to one another, such as when communicating through video relay services (VRS). In VRS, the interlocutors do not have auditory or visual access to one another, so they must heavily rely on

the accurate expression of visual and vocal prosodics by the interpreters. Both Deaf and hearing interlocutors have no assurance of emotional equivalence in their interpreters’ voice or signs. In this study, we specifically examine interpreters’ expression of prosodic cues that are vocal. For a variety of reasons, interpreters may inadvertently alter the prosody expressed between interlocutors and, as a result, either under- or misrepresent the intentions of the source message. Interpreters may focus primarily on delivering the lexical and syntactic content of the message and fail to give sufficient attention to the prosodic features that affect the communication intent. Interpreters may also attenuate the emotional tone of their interpretations for fear of overdoing or misrepresenting the intent of the speaker or signer. Still other interpreters may underestimate the linguistic import of prosody on the meaning of a message. Whatever the reason, by skewing the prosodics of a source text, interpreters are at risk of not conveying a critical component of a signer’s message when the meaning is dependent upon prosody. Literature focused on interpreter fidelity—equivalence, accuracy, and faithfulness in representing both the prosodics of a message’s intent and its linguistic content––is prominent in the fields of business communication and foreign language interpretation (Gile, 2009, 2017). An examination of prosody is also gaining prominence in litigation and evidentiary translations (Chakhachiro, 2016; Salaets & Balogh, 2015). Attention has been given to the importance of prosodic features when interpreting across spoken languages (Ahrens, 2005; Christodoulides & Lenglet, 2014; Shlesinger, 1994). 
Yet the literature on signed language prosody, including rate, pausing, and signing space (Nespor & Sandler, 1999; Nicodemus, 2009; Sandler, 1999a, 1999b); the historical and ethical tenets of sign language interpreting (Moody, 2011; Stewart & Witter-Merithew, 2006); and the large literature on spoken language prosody (e.g., Raphael, Borden, & Harris, 2007) have not been applied to interpreting from ASL into English. To date, no objective measures have been made to determine whether the meaning encoded in ASL signs and sentences carries a vocally prosodic match when decoded into English words and sentences. In this investigation, we report on a first-of-its-kind experiment to measure the vocal prosody of ASL-to-spoken English interpreting. The experimental procedures, laboratory measures, and comparative samples of eight professional interpreters’ vocal prosodics should open an important line of inquiry in assuring ASL-to-English fidelity.

Method

This descriptive investigation involved recording and analyzing individual interpreters’ acoustic prosodic measures while interpreting video-recorded ASL signers presenting emotionally flat technical content and emotionally rich narrative content into spoken English.

Participants

Eight professional interpreters, four females (mean age, 43.5 years; range, 36–52 years) and four males (mean age, 44.25 years; range, 34–60 years), were scheduled as participants for 1 hour at the Voice and Speech Physiology Lab at Gallaudet University. Criteria for inclusion in the study were as follows: (1) holding national certification1 from the Registry of Interpreters for the Deaf; (2) currently working as a professional interpreter (i.e., interpreting 15 or more hours weekly); and (3) having a minimum of 1 year of interpreting experience in postsecondary education settings. All participants were hearing and had either English or ASL as their first language. Participants were excluded if they had any current or past history of vocal pathology, including any surgical treatments, or if spoken English was not their native language. Participants were recruited personally and through email following institutional review board approval. Participants signed a consent form indicating willingness to participate in the study, and they received a small compensation for their time.

Materials

The stimuli for ASL-to-spoken English translation consisted of four 3-minute ASL video clips selected from the ASL video archive library at Gallaudet University and from ASL vlog websites (e.g., www.ASLized.com, www.youtube.com). An initial scanning of the database for these videos involved a broad search for ASL signers demonstrating technical content (with perceived flat prosody) and ASL signers offering narrative content (with perceived dynamic prosody).
Six videos were preselected and presented to three Deaf native ASL signers to independently view and rate on a 7-point semantic differential scale (Al-Hindawe, 1996). The descriptors on the semantic differential scale were designed to depict the spectrum from “emotionally flat” to “emotionally rich” content

(e.g., indifferent to emotional, calm to animated, reserved to enthusiastic, even-tempered to dynamic, neutral to expressive). Ratings were summed across the three viewers and ranked to yield the final selections: two videos of individual ASL signers representing “technical content and emotionally flat prosody” and two videos of individual ASL signers representing “narrative content and emotionally rich prosody.”

Procedures

Each interpreter was scheduled to participate in the research lab free of acoustic and visual noise. The interpreters were audio-recorded in several activities in English, including a brief interview with standard questions, two oral readings, and interpreting the two videos of emotionally flat ASL signers and the two videos of emotionally rich ASL signers. These different activities yielded the following: (1) a baseline voice sample during conversational speaking, (2) a baseline voice sample during oral readings of emotionally flat (“The Rainbow Passage”) and emotionally rich (“Goldilocks and the Three Bears”) English texts, and (3) an interpreting voice that captured their ASL-to-spoken English interpretation. The baseline recordings (interview and oral reading) served as comparison recordings for each participant’s vocal prosody outside the interpretation task (Zellner-Keller, 2005). Participants were fitted with an omnidirectional condenser micro-lavalier microphone (Shure SM93) placed at a constant distance from the mouth (approximately 5–6 cm) during all recordings (Titze, 1994). The microphone signal was directed into Channel 1 of the Computerized Speech Lab (Pentax Medical, CSL Model 4500), a specialized acoustic recording hardware system that was connected to a desktop computer on a cart. The audio signals were recorded at a sampling rate of 44,100 Hz using the Multispeech module of the CSL software.
An additional portable recorder with dual microphones (TASCAM DR-40 Linear PCM Recorder) was also set up approximately 1 meter from the speaker for reliability and backup. Recording equipment was calibrated for frequency and intensity prior to each recording session. In addition, the gain on the CSL was adjusted for each speaker to prevent any clipping or overamplification of the signal. Before watching the ASL videos, a baseline acoustic recording was made during responses to two questions (“In 45 to 60 seconds, please describe your professional interpreting experience” and “Tell me about the most memorable or funniest thing that ever

happened to you while interpreting”). The participants also completed two read-aloud tasks to serve as a baseline for nonspontaneous vocal prosody. For the interpreting task, four 3-minute ASL video clips were presented via a laptop computer (Apple MacBook Pro), with the monitor at eye level and a distance of about 1 meter from the participant. The videos were each presented twice, first for familiarity with the content and second for the interpretation into spoken English. The four videos were presented in random order to each participant. The investigator provided a brief description about each video prior to its presentation but did not cue the participants regarding the use of their prosody in the study. At the completion of the interpreting task, participants were asked some wrap-up questions to assess their familiarity with the videos or the signers, their self-evaluation of performance, and their choice of vocal prosody.

Data Analysis

The recorded spoken English interpreting samples, speaking samples, and reading samples were captured, time stamped, and annotated for content, including word, phrase, and sentence segmentations. Acoustic analysis was done using Praat, a tool for large-scale automated systematic prosody analysis, including fundamental frequency (F0) and semitones (ST), or the distance between two adjacent frequencies or tones; variability from average fundamental frequency (standard deviations and range from minimum to maximum frequency); and contour data for graphical displays.
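The three summary measures reported in this study (mean F0, standard deviation of F0, and minimum-to-maximum pitch range in semitones) can be computed directly from an F0 contour once it has been extracted in Praat. The sketch below is ours, not part of the study's tooling: the function name and the sample contours are illustrative, and the semitone range uses the standard conversion 12 · log2(f_max / f_min).

```python
import math
import statistics

def prosody_measures(f0_hz):
    """Summarize a voiced F0 contour (Hz samples) with three measures:
    mean F0 (Hz), SD of F0 (Hz), and the minimum-to-maximum pitch
    range in semitones, 12 * log2(f_max / f_min)."""
    mean_f0 = statistics.mean(f0_hz)   # perceived overall pitch
    sd_f0 = statistics.stdev(f0_hz)    # vocal inflection / variability
    st_range = 12 * math.log2(max(f0_hz) / min(f0_hz))
    return round(mean_f0), round(sd_f0, 1), round(st_range)

# Illustrative contours (not study data): a "flat" rendition hovers
# near 110 Hz, while a "rich" one sweeps a much wider pitch range.
flat = [108, 110, 112, 111, 109, 110]
rich = [95, 120, 180, 150, 100, 210]
print(prosody_measures(flat))  # small SD, narrow semitone range
print(prosody_measures(rich))  # larger SD, wider semitone range
```

The semitone scale is used because equal frequency differences in Hz are not perceived as equal pitch steps; a log-ratio measure makes male and female ranges comparable, as Table 7.1's ST column does.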

Results

The analyses presented here involve only the sign-to-voice interpreting samples as quantitative data and representative qualitative data from the interpreters’ comments following their interpretations. Table 7.1 presents results of the three measures of vocal dynamics of the prosodic output from the participants’ four sign-to-voice interpreted English videos: (1) mean fundamental frequency over the length of the video (F0 contributes to the perception of a speaker’s pitch); (2) standard deviations from the mean fundamental frequency, representing variability in the interpreter’s overall range (e.g., smaller standard deviations

Table 7.1. Acoustic Measures of Vocal Prosody While Interpreting American Sign Language to English.

          Flat Video A          Flat Video B          Rich Video C          Rich Video D
ID No.    Mean   Variability    Mean   Variability    Mean   Variability    Mean   Variability
          Hz     SD      ST     Hz     SD      ST     Hz     SD      ST     Hz     SD      ST
M1        110    22.2     7     114    25.4     8     135    46.1    12     123    30.5     9
M2         89    10.9     4      92    16.9     6     106    25.1     8     153   115.6    34
M3        122    18.2     6     121    18.5     5     132    27.7     7     131    25.4     7
M4        113    12.9     4     117    16.9     5     117    21.3     6     134    33.0     9
F1        189    27.1     5     191    26.0     5     201    41.8     7     190    29.9     6
F2        196    28.2     5     205    33.5     6     208    36.5     6     210    61.8    11
F3        225    39.7     6     211    41.7     7     229    39.9     6     245    52.2     8
F4        172    36.2     7     170    29.3     6     184    47.0     9     197    53.6    10

Note. F, female; M, male; SD, standard deviation; ST, semitones represent the difference between two adjacent frequencies or tones, as on a piano keyboard.

reflect less frequency change or less vocal inflection); and (3) range of fundamental frequency (reflecting the extremes of the interpreter’s low to high pitch) during the interpreting tasks, or minimum to maximum differential (measured in semitones, representing perceived variability). The larger the interpreter’s standard deviations from mean fundamental frequency and the larger the range from minimum to maximum fundamental frequency in the interpreter’s acoustic sample, the greater was the interpreter’s vocal prosodic variability.

As shown in Table 7.1, the fundamental frequencies of the four male interpreters (top half) are relatively similar and low, as are the higher fundamental frequencies (bottom half) for the female interpreters. The fundamental frequencies, standard deviations, and ranges for each interpreter’s “flat” videos are comparably different from those of the emotionally “rich” videos for most interpreters, as represented in Table 7.1. Exceptions are bolded, as shown for male 4, female 2, and female 3, with less distinction across their three measures in interpreting one flat and one rich tape than represented in the other 26 interpretations. Figure 7.1 represents the acoustic contour of the frequency signal in an interpreter’s comparative voice, graphically representing the contrast between vocal prosody for the “flat” ASL signers and vocal prosody for the “rich” ASL signers.

[Figure 7.1. Vocal contour during American Sign Language-to-English interpretations. Two panels plot an interpreter's F0 (Hz) contour across 6 seconds of interpreting a prosodically flat signer and across 6 seconds of interpreting a prosodically rich signer.]

Figure 7.2 represents the standard deviations analyzed collectively for the males and females. As shown here, the female interpreters offered
more variability in their prosodic match of the emotionally rich ASL signers than the male interpreters (when measured in SD but not ST). At the end of each interpreter’s recorded samples, we asked them to comment on their “satisfaction,” particularly on their sense of vocal change while interpreting the contrasting “flat” and “rich” ASL signers. Table 7.2 offers a sample of comments from their recorded reviews. The eight comments are mixed in their sequenced order such that comment 1 does not represent male 1, comment 5 does not represent female 1, and so forth.
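The group comparison plotted in Figure 7.2 is a simple average of each interpreter's per-video standard deviations within gender and video category. As a sketch of that collapse (SD values transcribed from Table 7.1; the dictionary layout and rounding are our own choices, not the study's code):

```python
import statistics

# Per-interpreter F0 standard deviations (Hz) from Table 7.1,
# grouped by gender (M/F) and video category (flat A+B, rich C+D).
sd = {
    "M": {"flat": [22.2, 10.9, 18.2, 12.9, 25.4, 16.9, 18.5, 16.9],
          "rich": [46.1, 25.1, 27.7, 21.3, 30.5, 115.6, 25.4, 33.0]},
    "F": {"flat": [27.1, 28.2, 39.7, 36.2, 26.0, 33.5, 41.7, 29.3],
          "rich": [41.8, 36.5, 39.9, 47.0, 29.9, 61.8, 52.2, 53.6]},
}

# Average within each gender/category cell, as in Figure 7.2.
averaged = {g: {cat: round(statistics.mean(vals), 1)
                for cat, vals in cats.items()}
            for g, cats in sd.items()}
print(averaged)
# Both groups show larger averaged SDs for the rich videos, and the
# females' rich-video average exceeds the males', matching the text.
```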

Discussion

The importance of accurately interpreting from a signed language into a spoken language has been shown to be crucial to the confidence and trust that Deaf leaders have in interpreters (Haug et al., 2017). Discussions frequently surround the value of accurately expressing content and intent equivalence, but to date, research has not explored prosodic equivalence beyond personal or subjective judgments. The relevance of the research presented in this work is that we offer, for the first time, evidence of how

[Figure 7.2. Standard deviations (SDs; variability) averaged across males and females. Bar chart of averaged SD (Hz) between categories: pitch variability in Hz for flat videos versus rich videos, plotted for all males and all females.]

Table 7.2. Interpreters’ Postinterpreting Comments, Randomly Sequenced.

1. [I] didn’t confidently portray ___ as flat. It [the signer] could be considered monotone, but it was just ASL to me.
2. ____ is very animated, very visual—lack of concrete nouns, so that adds to the processing; I probably didn’t think so much about prosody with this one.
3. ____ had a huge prosody—I know I didn’t portray that: I chose to portray it more as a flat recount of a story, rather than “this is happening in the now and it’s so crazy that it’s happening.”
4. I adjust more for the audience, the environment, not necessarily per signer’s personality because I don’t know them.
5. All totally fine… ___ was super exaggerated.
6. I know ___. She is very animated and excited about what she talks about, so that probably affected me. This is only signer [I] was familiar with, explaining [my] highest variability with her.
7. I didn’t like interpreting ___ and ___ very much—these two were flattest, [but I] bombed ___ ... I would turn down … university-related jobs… knowing this.
8. Good!

objective measures of interpreter fidelity or vocal equivalence of affect and emotion represented visually in ASL can be operationalized, measured, and analyzed. We tested eight professional ASL–English interpreters, including four males and four females. We analyzed their recorded spoken language interpretations for average fundamental frequency that represents pitch and for standard deviations from the mean and minimum-to-maximum range to represent variability in pitch needed for perceptual vocal prosodic variability. Analyses of the data revealed expected variability in the mean, standard deviations, and ranges of fundamental frequency/STs required for emotionally flat and emotionally rich ASL in most, but not all, of the 32 interpreting samples. We offer several main findings from our research:

1. As expected, the vocal expressions of male interpreters and female interpreters are different, as noted in lower fundamental frequencies/STs for the males and higher fundamental frequencies/STs for the females. These differences are anatomical, with vocal fold vibrations of men, on average, about 100 cycles per second or Hertz (Hz) lower than female vocal fold vibrations for the production of speech. Because of these differences, male interpreters are often preferentially paired with Deaf male

ASL signers, so as to offer a voice that is perceptually different from a female interpreter’s voice for hearing listeners; likewise, female interpreters are often a preferred match for Deaf female signers. We do not presume to argue for or against gender-matching of interpreters in this discussion, beyond stating that in remote interpreting assignments, interpreters may be perceived by unaware listeners to represent a male or female signer simply because of their different fundamental frequencies/semitones.

2. In contrast to expectations, the four male interpreters in our pilot sample offered larger pitch ranges in their spoken language interpretations than the four female interpreters. Traditionally, women speakers are perceived as being more vocally expressive than men, a bias that may result in scheduling interpreters by gender for assignments in which affect is critical to interpreting fidelity. This assumption was supported when measuring standard deviation, though the males in our study collectively demonstrated a larger range (ST) in their vocal prosodics than the females, whose vocal range tended to be smaller. However, outliers appeared across several of the data, as shown in Table 7.1, suggesting a third main finding.

3. Professional interpreters appear to vary in their individual skills when vocally representing ASL prosody in spoken English. Analysis of 32 interpretations revealed six samples across the male and female interpreters in which the ASL prosody and spoken English prosodic congruence could be questioned. We pass no comparative judgment on the talented interpreters who served as participants, other than to suggest that attenuating a dynamic signer’s emotional richness with a flat spoken English interpretation is equally as incongruent as accentuating a flat ASL signer’s monotone sign communication with an enriched spoken English interpretation.
In both cases, hearing and Deaf consumers may rightfully question, if aware, the lack of fidelity in the interpreted message.

4. Comments offered in Table 7.2 suggest that the ASL–English interpreters were highly aware of their vocal prosodics. When asked if their output “matched” the ASL, the interpreters addressed two levels of match—one representing content and the other representing intent. As implied in these qualitative

statements, interpreter fidelity or equivalence requires a duality of “listening”—listening with the eyes (and brain) for content of the source language and listening with the eyes (and brain) for the “felt” intent of the communicator. In addition, as one interpreter pointed out, content may override intent when interpreters are new to an assignment, suggesting that interpreters work for a match of content before they achieve a match of intent. Although this might be questioned in interpreter training, we would suggest that intent is bound to content, so prioritizing content is logical. At the same time, achieving interpreter fidelity requires attention to both.

Limitations

Several limitations became apparent during this investigation and during the analysis process that are important to disclose for replication and future investigations. One involves the interpreting task itself. The video recordings used in the research were cautiously chosen, following a process of reviewing multiple tapes with native ASL viewers who acted as raters using a semantic-differential scale of contrasts from “indifferent to emotional, calm to animated, reserved to enthusiastic,” and so on. The video clips were brief, about 3 minutes long, and we offered each interpreter a preview of each tape to become accustomed to the signer’s style. We make no claim that 3 minutes of reviewing a signer’s 3 minutes of recorded message enable sufficient readiness to “register” a prosodic tone. It is possible that different interpreters would have been more prepared with two or three replays of a signer’s tape. We might reconsider the review time to allow interpreters to determine their own readiness to provide a spoken interpretation for a signer. Another nuance attached to the selected video clips and our interpreter participants involved familiarity, commonly treated as a “testing” effect.
When asked if they had seen any of the videos before or knew the ASL signers, the interpreters responded with a mixture of yes and no. As offered in comment 6 (see Table 7.2), one interpreter indicated knowing one of the signers: “I know [signer] . . . is very animated and excited . . . so that probably affected me.” This is not an uncommon experience in the interpreting world. Regardless, the degree of familiarity or unfamiliarity of the ASL signers also impacts the need for review


time. It can also skew expectations of a dynamic emotionally rich ASL signer when the platform calls for a technically flat presentation or a flat ASL presentation when a signer is actually more emotionally dynamic. Another testing effect was also possible, as suggested by one of the interpreters who stated she or he typically adjusts affect for the audience and the environment (see comment 4 in Table 7.2). Our laboratory environment offered no audience beyond the second author, the research student who met with, interacted with, instructed, and recorded the interpreters’ voices in the different conditions. A lab environment, by its very design, is a sterile environment for interpreting research, but it is also a controlled environment important for recording vocal samples needed for this type of research. Future researchers may want to test interpreters with a contrived or simulated audience of hearing individuals in an envisioned simulated setting (e.g., “You’re interpreting to an audience of newly enrolled undergraduates during orientation week”). Expanding the research to compare interpreters’ “lab” voices with their “field” voices recorded during real assignments could also inform future ASL-to-English prosodic research. Another limitation in our procedures involves a possible order effect. Table 7.1 does not represent the order in which the interpreters viewed the four ASL signing tapes. The chance of having a “rich” tape first, then another “rich” tape, followed by two “flat” tapes, was simply random. Randomizing order was intentional in reducing chances of an order effect. 
One interpreter, however, a female who offered more of an emotionally rich interpretation for a flat ASL signer (see comment 1 in Table 7.2), indicated that her interpreting was not as flat (as it should have been) when the ASL signer was actually more “monotone.” If the random selection impacted the other seven interpreters’ performances, then questions about the order in which tapes are played should be asked. Eliminating an order effect is indeed important for research like this when individual statistics are analyzed for group trends, but in at least one interpreter’s case, the earlier emotionally rich tape(s), like another interpreter’s familiarity with the signer, could have influenced an animated register that continued to skew the prosody of the interpreter’s voice away from the desired ASL prosodic match. As with any new research endeavor, many of the methodological limitations described here are not anticipated until they are experienced.


For that reason, we refer to these research findings as “pilot” findings. We anticipate that they can guide or direct new research endeavors. We close with the following encouragement. This is an important line of research that could and should offer many opportunities for future graduate students and a line of inquiry that faculty in training programs at the associate’s, bachelor’s, and master’s levels should find helpful. In today’s climate of encouraged interprofessional collaboration across university departments (e.g., interpreting, counseling, Deaf studies, ASL, speech-language pathology, psychology, linguistics), we suggest that this line of research is one that university administrators and research funding agencies are likely to welcome. Our collaborative work across the Speech-Language Pathology Program (Department of Hearing, Speech, and Language Sciences) and the Department of Interpretation and Translation at Gallaudet University was exciting. We offer it as a model for future endeavors that combine professions, departments, professors, and students, not only because it is important, but also because it is professionally very satisfying.

Acknowledgments

Eric Klein, a May 2017 graduate of the Speech-Language Pathology Program at Gallaudet University, completed this research as part of his master’s thesis under the chair of Dr. Jaiswal. Dr. Seal and Dr. Nicodemus served as committee members for his research. Eric had also previously worked with Dr. Nicodemus when she was a research scientist at San Diego State University. Dr. Seal and Dr. Jaiswal are especially appreciative of the enthusiasm both Eric and Dr. Nicodemus demonstrated for this research project, an idea that had been germinating between Dr. Seal and Dr. Jaiswal for several years prior to its happening. We also extend thanks to the professional interpreters who gracefully agreed to participate in Eric’s thesis research, and to the reviewers of the ASL tapes for their important contributions to the project.

Note

1. Registry of Interpreters for the Deaf certification included the National Interpreter Certification (NIC), NIC Advanced, or NIC Master.

References

Ahrens, B. (2005). Prosodic phenomena in simultaneous interpreting: A conceptual approach and its practical application. Interpreting, 7(1), 51–76.
Al-Hindawe, J. (1996). Considerations when constructing a semantic differential scale. La Trobe Working Papers in Linguistics, 9, 41–58.
Chakhachiro, R. (2016). Contribution of prosodic and paralinguistic cues to the translation of evidentiary audio recordings. International Journal of Translation and Interpreting, 8(2), 46–63.
Christodoulides, G., & Lenglet, C. (2014). Prosodic correlates of perceived quality and fluency in simultaneous interpreting. In N. Campbell, D. Gibbon, & D. Hirst (Eds.), Social and linguistic speech prosody: Proceedings of the 7th International Conference on Speech Prosody (pp. 1002–1006). Dublin, Ireland. Retrieved from https://www.academia.edu/7130429/Prosodic_correlates_of_perceived_quality_and_fluency_in_simultaneous_interpreting
Gile, D. (2009). Fidelity in interpreting and translation. In D. Gile (Ed.), Basic concepts and models for interpreter and translator training (pp. 52–74). Amsterdam, the Netherlands: John Benjamins.
Gile, D. (2017). Variability in the perception of fidelity in simultaneous interpretation. Hermes, 22, 51–79.
Haug, T., Bontempo, K., Leeson, L., Napier, J., Nicodemus, B., van den Bogaerde, B., & Vermeerbergen, M. (2017). Deaf leaders’ strategies for working with sign language interpreters: An examination across seven countries. Across Languages and Cultures, 18(1), 107–131.
Moody, B. (2011). What is a faithful interpretation? Journal of Interpretation, 21, 1–51.
Nespor, M., & Sandler, W. (1999). Prosody in Israeli Sign Language. Language and Speech, 42(2–3), 143–176.
Nicodemus, B. (2009). Prosodic markers and utterance boundaries in American Sign Language interpretation. Washington, DC: Gallaudet University Press.
Raphael, L. J., Borden, G. J., & Harris, K. S. (2007). Speech science primer: Physiology, acoustics, and perception of speech. Philadelphia, PA: Lippincott Williams & Wilkins.
Salaets, H., & Balogh, K. (2015). Development of reliable evaluation tools in legal interpreting: A test case. International Journal of Translation and Interpretation, 7(3), 103–120. DOI: 10.12807/ti.107203.2015.a08
Sandler, W. (1999a). The medium is the message: Prosodic interpretation of linguistic content in sign language. Sign Language and Linguistics, 2(2), 187–215.
Sandler, W. (1999b). Prosody in two natural language modalities. Language and Speech, 42(2–3), 127–142.
Shlesinger, M. (1994). Intonation in the production and perception of simultaneous interpretation. In S. Lambert & B. Moser-Mercer (Eds.), Bridging the gap: Empirical research in simultaneous interpretation (pp. 225–236). Amsterdam, the Netherlands: John Benjamins.
Stewart, K. M., & Witter-Merithew, A. (2006). Dimensions of ethical decision-making: A guided exploration for interpreters. Burtonsville, MD: Sign Media, Inc.
Titze, I. (1994). Toward standards in acoustic analysis of voice. Journal of Voice, 8(1), 1–7.
Zellner-Keller, B. (2005). Speech prosody, voice quality and personality. Logopedics Phoniatrics Vocology, 30, 72–78.

146 : Jaiswal, Klein, Nicodemus, and Seal

Chapter 8

Reframing the Role of the Interpreter in a Technological Environment

Erica Alley

The emergence of video communication technology, specifically video relay service (VRS), has altered how deaf people access interpreters as well as how American Sign Language (ASL)–English interpreters in the United States understand and approach their work. Communication between deaf people and interpreters, which had once taken place in shared space, is now available at a distance using two-dimensional video streaming. This is a dramatic change from the face-to-face communicative interactions of only 25 years ago.

While the VRS industry in the United States is rapidly growing, little is known about the daily work of VRS interpreters due to the tightly regulated environment in which VRS exists. Specifically, in accordance with rules set forth by the Americans With Disabilities Act (ADA; 1990), interpreters in a telecommunication environment are prohibited from keeping records of call content (i.e., recorded video and written notes) beyond the duration of a call. Similarly, under regulations enacted by the Federal Communications Commission (FCC), the observation of VRS calls is forbidden in order to protect the confidentiality of conversational participants. Although the rules set forth by both the ADA and the FCC effectively achieve their intended goal of securing and maintaining the confidentiality of the callers, they are a barrier to the collection of data for research purposes; as a result, most research is based on qualitative data (e.g., interviews, surveys) or simulated call content.

Restrictions are placed on many elements of the interpreter's decision-making process, including decisions made within a call as well as those made outside of a call, when the interpreter is not yet connected to a hearing or deaf caller. Rules govern the speed at which interpreters take calls, the type of calls that they accept, how (and when) they team with other interpreters, and the way that they interact with both deaf and hearing callers. There are three reasons for this limited authority: (1) the nature of VRS technology, (2) federal regulation, and (3) corporate policies that emphasize profit. As a result, VRS interpreters do not engage in their work in the same way as ASL–English interpreters do in other contexts.

Presently, the lack of decision-making power associated with the work of VRS interpreters in video-mediated settings closely aligns with the restricted and often scripted actions of call center employees (e.g., technology support, product sales, billing). This is drastically different from the autonomy demonstrated by interpreters when working in face-to-face settings within their local Deaf community. Brophy (2011) argued that within a call center environment, employees are largely interchangeable because management controls the planning, organization, review, and record keeping associated with labor. The removal of decision-making ability leads employees to blindly perform routine, deskilled tasks (Brophy, 2011). Call centers thus exemplify a workplace that combines customer service with capitalism, creating an environment in which the employee is restricted by corporate policy, resulting in decreased decision latitude.

Applying a customer service framework, this study investigates the decision making of interpreters in a corporate VRS environment. Data were obtained via in-depth interviews conducted with 20 ASL–English interpreters experienced in working in the VRS setting. The interview data were analyzed for patterns (e.g., topic, vocabulary, interpreter actions) using the constant comparative method (Charmaz, 2001). The findings highlight a call center customer service framework influencing interpreters' decision-making practices.
Specifically, the decisions interpreters make in a VRS environment often exhibit customer orientation behaviors similar to those described by Rafaeli, Ziklik, and Doucet (2008), including: (1) anticipating customer requests, (2) offering explanations or justifications, (3) educating customers, (4) providing emotional support, and (5) offering personalized information. Interpreters' implementation of customer orientation behaviors suggests a shift in the way ASL–English interpreters frame their role in this new technological environment.

VRS and Call Center Work

Nearly four decades after the establishment of the Registry of Interpreters for the Deaf (RID), streamed video communication technology became available, resulting in a transformation in the practices of ASL–English interpreters. Early experiments in VRS were conducted on a small scale in Texas in 1995 and in Arizona in the late 1990s and early 2000s; however, it was not until one corporation, Sorenson Communications, began distributing free videophones to deaf people in 2003 that VRS became widespread. Access to videophone technology led increasing numbers of deaf people to use VRS for everyday telecommunication with family members, for work, and for personal business. The explosion of video technology instantaneously changed the framework in which interpreting services were provided to deaf people, including the organization and oversight of interpreting service provision.

VRS is a federally funded, for-profit industry with rules and regulations that constrain the autonomy of interpreters who work within the VRS environment. Notably, interpreters working within a VRS environment are referred to as communications assistants (CAs) in federal documents, a change in title that coincides with a different set of expectations for the work these individuals perform. Traditionally, signed language interpreters in the United States have operated in a highly independent manner and, for better or worse, there was little oversight of interpreters' decisions and actions. In this way, the degree of professional autonomy for interpreters was high, especially in legally protected situations, such as the judiciary, where civil rights legislation offered protections for minority language users and placed interpreters in a defined role in the court (Witter-Merithew, Johnson, & Nicodemus, 2010). With the formation of VRS, questions have arisen regarding the degree of autonomy that interpreters have in this highly automated and bureaucratic environment.
Sandstrom (2007) suggested that professional autonomy is "threatened by the rise of rationalization and bureaucracies, which supplants individual decision making" (p. 99); however, the relatively new VRS environment has only been minimally examined for its impact on the decision-making powers and autonomous actions of interpreters (Alley, 2019). The constraints on the actions of CAs suggest that their work may resemble that of employees in commercial call centers, an environment highly controlled by scripts and productivity measures. Brophy (2011) referred to call center work as highly automated "cognitive capitalism" (p. 410).

The reports of CAs working in VRS call centers suggest an experience similar to that of other call center employees. For example, CAs are connected to callers randomly through an automated system that shares minimal information about the callers (e.g., chosen username, phone number). The CA is then tasked with predicting potential dialogue based on context in order to construct meaning. For example, a 1-800 phone number for an outbound call often indicates a call to a place of business. With this in mind, the interpreter adjusts their predictions to include the potential goals of the caller (e.g., billing, tech support, customer service). A call placed to a doctor's office most likely will lead to scheduling an appointment, getting test results, or discussing symptoms. Given these constraints, Peterson (2011) argued that the work that occurs in the VRS environment does not qualify as interpreting at all, but rather amounts to the "educated guessing" of call center employees (p. 203). In addition, the success of the interpreter's work is not measured by the quality of the interpretation; rather, the employer uses quantitative measures of efficiency (e.g., speed of answer, percentage of time working with a team) to determine effectiveness. This distinction further removes VRS work from our historical schema of interpreting that occurs in the community.

Video streaming technology has dramatically influenced the manner in which deaf people access telecommunication and has shaped how interpreting services are provided; in doing so, VRS has altered the relationship between deaf people and interpreters. One of the changes that has occurred since the advent of VRS is the establishment of a for-profit model for the provision of interpreting services. This model strays from current frameworks used to describe the relational work of interpreters, which argue for increased autonomous decision making by signed language interpreters. Llewellyn-Jones and Lee (2013) stated that the work of interpreters should be perceived through a three-dimensional model that is governed not by rules but by "role-space" (p. 56), composed of three axes: Participant/Conversational Alignment, Interaction Management, and Presentation of Self. In their model, Llewellyn-Jones and Lee state that social interactions are fluid and interpreters need to respond to interactions as "human beings with well-honed social skills, sensitivities, and awareness" (p. 70).

Although the perception of the role of the interpreter is moving away from the machine model, current policies associated with work in VRS do not reflect this evolution. For example, in the community, interpreters may decide not to accept a job interpreting for a family member due to a personal conflict. Similarly, they may not accept a job pertaining to a topic for which they do not feel qualified (e.g., legal) or about which they hold a strong contrasting opinion (e.g., politics, religion); however, at the largest VRS provider, the CA does not have the ability to reject a VRS call but, rather, is encouraged to call for a team for support. Anecdotally, interpreters report being advised to answer all calls and let the deaf person decide if they would like a different interpreter.

Limited professional autonomy is not unique to the work of interpreters in VRS. Sznelwar, Mascia, Zilbovicious, and Arbix (1999) described the nature of call center work:

The task assigned to operators is designed based on the principle that answering calls is a simple activity that can always be standardized and strictly controlled. Worker control and worker-initiated direction, as instruments that could improve the quality of the work, including better customer satisfaction, are not part of the organizational design concerns. Work is fragmented and segmented, limiting the possibility for operators to act and solve problems. (pp. 293–294)

In low-wage, high-turnover environments that are typical of call centers, human decision making is perceived as a threat to the standardization of work and, ultimately, to the opportunity for profit. In this approach, rationalization and bureaucracies supersede individual decision making (Sandstrom, 2007). Further, in an effort to minimize the threat of individual thought, the autonomy of employees working in various fields is restricted. These constraints are often framed by management as "necessary and inevitable" for the success of the company (Braverman, 1998, p. 97).

Customer Service

Although customer service is often viewed from a business standpoint, it should also be examined as a social interaction (Bolton & Houlihan, 2005). As customers, we interact with service providers on a daily basis both in person (e.g., servers in restaurants, bank tellers) and at a distance via telephone (e.g., insurance agents, store representatives). Bolton and Houlihan (2005) investigated interactions between customers and those who work in a call center environment, describing the stance of customers in three different ways. In their interactions with call center representatives, customers may be viewed as mythical sovereigns who want a service as quickly as possible, functional transactants who want to carry out a transaction simply and efficiently, or moral agents who engage in communication and recognize the interaction as socially relevant (Bolton & Houlihan, 2005). Bolton and Houlihan argued that people cannot be categorized definitively, since their stance may change depending on the particular situation.

Similar to customers, employees who work in a call center are actors in these social interactions. Rafaeli et al. (2008) investigated customer orientation behaviors, defined as instances in which employees offered customers assistance that was not explicitly requested but that might serve to resolve the problem for which the customer had called. Specifically, Rafaeli et al. (2008) identified five customer orientation behaviors used by employees in a bank call center: (1) anticipating customer requests (i.e., offering information before the customer explicitly asks for it), (2) offering explanations or justifications (i.e., explaining the procedure that will take place to fulfill the customer's needs), (3) educating the customer (i.e., teaching the customer company terminology or procedures so that the customer can handle future calls), (4) providing emotional support (i.e., giving positive or supportive emotional statements to build rapport or show empathy), and (5) offering personalized information (i.e., providing customer-specific information to the caller). Rafaeli et al. (2008) suggested that the presence of these customer orientation behaviors led to a greater customer perception of service quality.

The five customer orientation behaviors identified by Rafaeli et al. (2008) can be applied to the decision-making process and actions of CAs in the VRS environment. Participants in this study repeatedly stated that their decisions were influenced by a desire to provide quality customer service to callers.
In this section, I highlight ways in which the decisions made by CAs were influenced by the goal of customer service, as well as the ways that their actions align with the customer orientation behaviors identified by Rafaeli et al. (2008). The term customer service is typically used in businesses that operate in mainstream society, and whether it is the best term to describe the actions of CAs in VRS settings is arguable. Interpreters bring a much different connection to deaf people than workers in other businesses; indeed, interpreters generally are highly aware of the social inequities that deaf people experience on a daily basis. Thus, although the participants used the term customer service to justify or describe their behaviors, their actions may be driven by deeper motivations based on knowledge of the social conditions of deaf people who exist within a larger society of hearing people.

Anticipating Customer Needs

At the core of customer service is the ability to identify customer needs and resolve concerns with expediency. For example, suppose a customer recently noticed on their credit card statement that their favorite coffee shop charged them twice for their morning cup of joe. They remember the card being swiped twice due to problems with the shop's machine but did not think anything of it at the time. They call their credit card company and explain the problem. The representative assures them that this is a somewhat routine matter and that if they simply file a dispute it can all be taken care of easily. Without prompting by the customer, the representative explains where to find the form and how to proceed with filling it out. The representative ends with a short explanation of the next steps that will be taken on their end and when the customer should hear back from the bank regarding the solution to the problem.

Anticipating customer needs, according to Rafaeli et al. (2008), means that the representative responds to questions that have not yet been asked, anticipating that these questions will arise given the current situation. In the situation described above, the caller had not yet asked where to find the form or how they would be contacted when a resolution had been reached. The representative anticipated the customer's needs and responded without being prompted. As demonstrated through interviews, similar behaviors can be seen in the work of CAs in VRS. For example, Emily,1 who has been working in VRS for 5 years, shares how she often responds when deaf callers seem distraught upon reaching an automated system used by businesses to route calls to the appropriate representatives (i.e., phone trees).
She says:

What I have done is I usually say, "do you want me to say representative?" or if there's options and none of the options fit and I usually say, "want zero? operator, zero?" That's what my approach has been, and it seems to work pretty well.

Emily knows that a common work-around for a phone tree is to push zero or to say "representative" to a machine that responds to vocal prompts. Often, but not always, this will result in being routed to a representative. Emily perceives that the customer may benefit from this information and shares it without being prompted.

Similarly, Bella, who has been working in a VRS setting for 10 years, explains how she may add information within her interpretation that did not originate directly from the deaf participant in an effort to provide customer service. These additions may be considered an anticipation of a customer need, as defined by Rafaeli et al. (2008). For example, when customers call cable providers, they will often be asked for information that is located on the television, such as an error message displayed on the screen. Similarly, if they are calling their Internet provider, they may be asked for an IP address located on the router. A hearing person may bring their cordless phone with them as they search for this information; a deaf person, however, may need to leave the room where their videophone is located in order to obtain it. Bella shares that, in these circumstances, she may add information to the conversation in order to keep the hearing person on the line. She says:

I will explain. "You know this is gonna take a little bit longer and thank you so much for your patience." I thank that hearing person a lot, where the deaf person's not, but I am, but it sounds like the deaf person is.

Bella uses an interpreter-generated utterance (one that did not originate from the conversational participants) to inform the hearing caller that it may take a moment to retrieve this information. This utterance signals to the hearing participant that the call is on hold and that they might hear a period of silence during this process. In doing so, it creates a footing shift. Footing was originally defined by Goffman (1981) as "the alignment we take up to ourselves and the others present as expressed in the way we manage the production or reception of an utterance" (p. 128).
The notion of footing shifts was later applied to interpreted interactions that take place during face-to-face encounters (see Metzger, 1999; Roy, 2000), as well as those that take place through VRS (see Marks, 2015). In an exploration of footing shifts of interpreters within VRS, Marks (2015) notes that interpreters are aware of the relationship between conversational participants and that this awareness generates a framework with which to understand the role of each person within a communicative interaction. A footing shift, leading to an interpreter-generated utterance, supports the notion that an interpreter is an active participant within an interaction—one who makes decisions and directly engages during the conversation.

Anticipating customer needs is just one way in which CAs actively pursue customer service. As can be seen in the examples provided earlier, customer service may include anticipating the needs of the deaf or the hearing caller. Other ways in which CAs anticipate the needs of callers include offering technology support, offering to redial a call that did not successfully connect, and offering to place additional calls, among others.

Offering Explanations and Justifications

As with other providers of customer service, CAs may reach a point where they cannot resolve a problem. It is at this point, within the customer service industry, that the customer service agent explains why they cannot resolve the problem and what other actions, if any, may be taken. Rafaeli et al. (2008) refer to this as an instance where procedure or protocol prohibits the agent from fulfilling a customer's request. In these instances, the customer service provider volunteers an explanation or justification of what is happening at that moment to serve the caller's needs.

For example, consider a call in which a customer is attempting to gain access to information on their phone bill. The customer calls their phone provider and asks, "This month's bill is $10 more than previous bills. What happened?" The customer service representative says, "I understand how frustrating that can be. Let me look at that for you. Before we get started, for security reasons, may I have your phone number, address, and the last four digits of your Social Security number?" This is a common initial interaction between a customer service agent and a customer: the agent offers an explanation for why they need particular sensitive information or why they can or cannot do something.

Similar interactions may occur between a CA and a caller communicating through VRS. For example, Holly, who has been working in VRS for 4 years, shares a strategy that she uses when she needs to switch with another interpreter during a call. She says:

When I'm ready to switch with a team on a call, or if I have to leave, or I need a break, I'll let [the callers] know by saying, "Just to let you know my time is up and I need to leave. I've already called for another interpreter. They're on their way. When they sit down I'm just going to explain what's been going on and then we're going to go ahead and switch."

Holly explains what she is doing and why she is doing it. Similarly, Riley, who has been working in VRS for 12 years, uses the same customer orientation behavior when she asks a deaf caller for clarification. She informs the hearing caller that she needs them to hold for one moment so that she can pursue clarification from the deaf caller. In the absence of this information, Riley knows, the hearing caller may wonder why there is a long period of silence. By offering an explanation for what she is doing, she indicates to the hearing caller that the hold is based on her need for clarification as an interpreter. Riley goes on to describe a specific time that she used this strategy. She says:

I misunderstood something. It was a health intake type of thing and I misunderstood. I thought the woman was talking about chewing tobacco, but she was actually talking about gum. So, of course, I needed to correct it. So now I have to say to the deaf person that I was wrong and "do you mind if I let them know?" And she didn't care, I mean thank god that was easy. The deaf person hadn't told me not to announce or anything, but it hadn't been announced. And so, I just asked her for permission first. I asked, "is it ok if I tell her [that there is an interpreter on this call]? Because I misunderstood you and I want to fix it"—and she had no problem with that.

Similar actions were described throughout the interviews in regard to explaining limitations of technology (e.g., the inability to place an outbound call for a caller who recently received a call through VRS) as well as environmental struggles when interpreting a call (e.g., poor lighting, a blurry screen). Offering explanations throughout a call aligns with the customer orientation behaviors identified by Rafaeli et al. (2008). Such behavior offers transparency to conversational participants and may foster the customer's perception of customer service.

Educating the Customer

The third customer orientation behavior identified by Rafaeli et al. (2008) is educating the customer. Education is often used in an effort to foster the customer's ability to resolve an issue should it come up again in the future. An example of this may be seen in calls placed to customer support representatives who focus on technology troubleshooting. These calls may begin with "Did you try rebooting the device?" or "Have you tried unplugging the router and then reconnecting?" The call often ends

with "I am so glad that we resolved your concern. I recommend that if this happens in the future, you follow similar steps to resolve the problem. If that doesn't work, you can always call us back at . . . ." Although this is a simplified example of a technology concern, the effort to educate the caller to resolve their problem in the future is representative of the behaviors Rafaeli et al. (2008) describe.

CAs reported having taken similar actions in response to perceived concerns in the VRS setting. For example, Cailin, who has been working in VRS for 5 years, shares that she has educated callers about how to effectively use the various products or options offered by her VRS provider. She says:

The company I work with has a lot of features on the phone that empowers the deaf person. For example, when you are going through a phone tree the deaf person can push the buttons. A lot of people don't know that so any time that happens I make sure that they know because most people have said "oh my gosh, I didn't know I could do that!" I explain that they can do their own credit card, their own social security number, whatever. And they get really excited.

The feature Cailin is referring to may bring peace of mind to callers who are concerned about sharing their personal information with a CA. Explaining that callers have the ability to protect this information is an example of education, as Rafaeli et al. (2008) describe it, and would likely result in greater customer satisfaction. Similarly, Red, who has been working in VRS for 7 years, shares that he works to educate callers about the various meanings behind what he hears on the phone. He says:

I'll do stuff too where you can educate them on the phone culture of things. For example, if I'm hearing music, it means the hearing person can't hear me. Or sometimes the deaf person calls a phone and it doesn't ring. You just get the machine and they'll say, "oh just try again" and I explain, "just to let you know, if it doesn't ring it means the phone is off." We can try again all day but it's gonna be off—I want you to know that the phone is off because as a hearing person, culturally, I know that the phone is now off.

Actions aimed at educating callers were also described by participants in response to the perceived confusion of hearing callers when using VRS. For example, Rose educated a hearing caller by assuring the caller that simply accepting a VRS call would not lead to a requirement to pay for using the service. Similarly, Riley educated a hearing caller that the caller could speak directly to the deaf person without the use of "tell him . . ." and that she would interpret what the caller was saying. The information used to educate callers is beneficial in the moment, but it can also be applied to future calls and may increase customers' satisfaction with VRS services.

Providing Emotional Support

In an effort to provide quality customer service, call center representatives often use positive or supportive emotional statements (Rafaeli et al., 2008). This behavior commonly appears in statements such as, "Have a great weekend!" However, it may also be present in more complex interactions. For example, a customer on the phone with an insurance company to address the medical bills of a recently deceased family member may be told, "I'm so sorry for your loss." Similarly, a caller on the phone with a travel agency to plan their honeymoon may be told, "Congratulations on your wedding!"

In a VRS environment, CAs shared that they respond in a similar manner. By way of example, a deaf caller who expresses having been on hold for a long time waiting for an interpreter may be told, "So sorry that took a while. I'm sure that's frustrating. I'm here now. Are you ready to call?" By openly sympathizing with callers who express frustration, the CA builds rapport that may increase customers' feelings of satisfaction with the service. Columbine, who has been working in VRS for 6 months, has a technique that she commonly uses with callers who are frustrated with phone trees. She shares, "I will let the deaf person know that I hate these systems too, but we got to get through this." She goes on to explain that commiserating with the caller is a method of showing support and alignment with them.
This commiseration may ease the tension in a stressful situation where a caller simply wants to complete a task as quickly and efficiently as possible.

Although many calls made using VRS are everyday business interactions, VRS is also used for emergency situations. Cailin shares a story about a call placed to 911 in which her actions align with the customer orientation behavior of providing emotional support, as described by Rafaeli et al. (2008). She shares:

He's on the floor yelling help, so I called 911 and I gave him all the information and I stayed on the line until they got there, and it was terrible, because he was yelling help and he was getting quieter and quieter and quieter. And I was on the screen and I couldn't see him, I could only hear him. And one feature that we have that I like is that we can type on our screen a message. So, I just typed on the screen that the paramedics are on their way and I just stayed there because there was nothing else to do.

Cailin goes on to explain that she stayed on the line for 18 minutes waiting for the paramedics to arrive. There was nothing to interpret, yet she provided emotional support to the caller in a time of need. Red uses a similar technique when interpreting an emotional call between family members. In the situation he shared, the deaf caller was crying and, due to the use of VRS, the CA could see the deaf caller on the monitor, but the hearing caller was unaware of the visible emotional state of the deaf caller. Red shares:

I typically become more of a narrator than the embodiment of the person because you don't want it to look like you're being pejorative by taking on their affect with too much extremity. So, you know, you have a somber tone and you're like, "they're really crying" or "they're having trouble pulling themself together."

Providing emotional support to the caller is an example of an action CAs use in an effort to provide quality customer service.

Offering Personalized Information

As suggested in the Rafaeli et al. (2008) exploration of customer orientation behaviors, call center employees can be seen to offer information to callers that does not directly respond to their reason for calling but may be beneficial to them in the long run.
For example, a customer calling a phone company to change their address may appreciate the call center employee informing them that they qualify for a new promotion offering a cheaper price for their service. Offering personalized information is not the same as reading a script to provide all customers with the same information. Rather, this customer orientation behavior involves tailoring the information provided to a particular customer's unique needs. In reference to the statistically significant findings associated with this customer orientation behavior, Rafaeli et al. (2008) stated, "The finding highlights the merit of a customer orientation that looks beyond the resolution of specific problems and that considers the customer a holistic entity having a long-term relationship with the service organization" (p. 250).

In this same vein, CAs in a VRS call center can be seen to take similar actions. For example, Red shares a strategy that he uses when the hearing caller seems hesitant to accept a call through VRS. He says:

I will tell them, if they're a female, "get a female and just go through a 'do not announce.'" Or if I'm a male and they're a male we can just be like, "I can call again and if we don't explain [that there is an interpreter] it should be smooth."

Red is referring to a common technique used by callers who do not want the hearing participant to know that there is an interpreter on the line. They will tell the CA "do not announce," and the CA will process the call without mentioning that it is being placed through VRS. As an experienced CA, Red has found that this strategy is effective when the hearing caller does not accept a call placed through VRS. Similarly, Levi, who has been working in VRS for over 12 years, shares a story about a caller working with their insurance company in regard to a car accident. He says:

There was a vehicle accident. So, we're talking to the insurance [company], then we also have to talk to I think a lawyer—I think it was a lawyer, it was some other entity. The insurance company didn't have information so we hung up, called this other entity, talked to them. Then they didn't know the information, so we hung up with them and called back the insurance [company].
It was at that point when the insurance [company] asked more questions that I suggested “you know there’s the capability of a three-way call. We can all get on the same call” and just that little piece of information that yes this technology is available to you [was helpful]. Now I know this person uses three-way calling often.

Offering personalized information is a customer orientation behavior that may improve the customer experience and encourage customers to continue to use a particular service. Throughout interviews, this tactic was described in reference to emergency 911 situations (e.g., sharing visual information), calls that experience technological difficulty (e.g., blurry picture), and calls that have characteristics that the CA thinks are relevant to either the call or the interpreting process (e.g., strong accents).

Discussion

As the delivery of interpreting service shifts from a strictly on-site, face-to-face interaction to a computer-mediated technological environment, interpreters must reframe the lens we use in consideration of our work. The overarching federal rules that dictate the VRS environment, as well as the individual corporate policies that govern the work of CAs, influence the decision making of interpreters. Professional autonomy can exist only in environments that recognize workers as professionals and allow space for independent action. Evaluating the effectiveness of a CA’s work based on quantitative measures of efficiency encourages them to avoid independent decision making.

Regardless of the restrictions to professional autonomy in a VRS environment, CAs can be seen to prioritize customer service. This is evident in the decisions made by CAs during a call, such as the decision to stay past the end of their shift in order to complete a call despite the preference of the VRS provider that they call a team and switch before the end of their shift. Similarly, it is seen in the decisions made by CAs outside of a call, such as the decision to advise a caller not to announce the presence of the interpreter so that the hearing caller does not refuse service. Interpreters discover strategies on the job, often by trial and error. Each time a strategy is discovered, it is added to their personal list of tools for providing effective customer service.

Of particular interest is the alignment between the customer orientation behaviors exhibited by call center employees (Rafaeli et al., 2008) and the actions of CAs in a VRS environment. CAs use all five of the customer orientation behaviors evident in the work of call center employees: (1) anticipating customer requests, (2) educating customers, (3) offering personalized information, (4) providing emotional support, and (5) offering explanations or justification.
This chapter does not intend to pass judgment on this shift but rather to identify similarities between the work that is done in VRS and that which is done in other call center environments. Exploring VRS through this lens allows space to further investigate the work of interpreters in a VRS environment.

Note

1. All references to participants in this study use pseudonyms chosen by the participants themselves; these pseudonyms refer to the participants’ comments throughout publications and presentations developed from the interviews and are used to maintain the participants’ confidentiality.

References

Alley, E. (2019). Professional autonomy in video relay service interpreting. Washington, DC: Gallaudet University Press.

Americans With Disabilities Act of 1990, Pub. L. No. 101–336, 104 Stat. 328 (1990).

Bolton, S., & Houlihan, M. (2005). The (mis)representation of customer service. Work, Employment & Society, 19(4), 685–703. Retrieved from http://wes.sagepub.com.pearl.stkate.edu/content/19/4/685.full.pdf+html

Braverman, H. (1998). Labor and monopoly capital: The degradation of work in the twentieth century. New York, NY: Monthly Review Press.

Brophy, E. (2011). Language put to work: Cognitive capitalism, call center labor, and worker inquiry. Journal of Communication Inquiry, 35(4), 410–416.

Charmaz, K. (2001). Grounded theory. In R. Emerson (Ed.), Contemporary field research (2nd ed.). Prospect Heights, IL: Waveland Press.

Goffman, E. (1981). Forms of talk. Philadelphia, PA: University of Pennsylvania Press.

Llewellyn-Jones, P., & Lee, R. (2013). Getting to the core of role: Defining interpreters’ role-space. International Journal of Interpreter Education, 5(2), 54–72.

Marks, A. (2015). Investigating footing shifts in video relay service interpreted interaction. In B. Nicodemus & K. Cagle (Eds.), Selected papers from the International Symposium on Signed Language Interpretation and Translation Research (Vol. 1, pp. 71–96). Washington, DC: Gallaudet University Press.

Metzger, M. (1999). Sign language interpreting: Deconstructing the myth of neutrality. Washington, DC: Gallaudet University Press.

Peterson, R. (2011). Profession in pentimento: A narrative inquiry into interpreting in video settings. In B. Nicodemus & L. Swabey (Eds.), Advances in interpreting research: Inquiry in action (pp. 199–223). Amsterdam, the Netherlands: John Benjamins.

Rafaeli, A., Ziklik, L., & Doucet, L. (2008). The impact of call center employees’ customer orientation behaviors on service quality. Journal of Service Research, 10(3), 239–255.

Roy, C. (2000). Interpreting as a discourse process. New York, NY: Oxford University Press.

Sandstrom, R. (2007). The meanings of autonomy for physical therapy. Physical Therapy, 87(1), 98–106.

Sznelwar, L. I., Mascia, F. L., Zilbovicius, M., & Arbix, G. (1999). Ergonomics and work organization: The relationship between Tayloristic design and workers’ health in banks and credit cards companies. International Journal of Occupational Safety and Ergonomics, 5(2), 291–301.

Witter-Merithew, A., Johnson, L., & Nicodemus, B. (2010). Relational autonomy and decision latitude of ASL-English interpreters: Implications for interpreter education. In L. Roberson & S. Shaw (Eds.), Proceedings of the 18th National Convention of the Conference of Interpreter Trainers (pp. 49–66). Fremont, CA: Conference of Interpreter Trainers.


Chapter 9

Deaf Employees’ Perspectives on Signed Language Interpreting in the Workplace

Paul B. Harrelson

Work brings a host of benefits to the individual. Studies have shown that active participation in the workforce provides increased self-esteem, positive health outcomes, and economic self-sufficiency. Waddell and Burton (2006) conducted a review of studies comparing work with unemployment, the health effects of reemployment, and the effects of work on people with various illnesses and disabilities. The studies support the commonsense assumption that work benefits the health and well-being of individuals, provided the individual has “a good job” (Waddell & Burton, 2006, p. 34). According to Waddell and Burton (2006), the four characteristics of a good job include a workplace that: (1) provides an environment that is accommodating, supportive, and nondiscriminatory; (2) offers control and autonomy; (3) leads to job satisfaction; and (4) fosters good communication (p. 34). Employees who are Deaf may experience these four elements differently than non-Deaf employees in the same workplace do.

In the United States, Deaf people who use a signed language are viewed both as a linguistic and cultural minority and as a protected class of citizens with a disability and rights to workplace accommodation (Lane, Hoffmeister, & Bahan, 1996; Padden & Humphries, 1988). For almost 50 years, federal laws, including the Rehabilitation Act of 1973 and the Americans With Disabilities Act of 1990, have mandated reasonable accommodation for people with disabilities in the workplace. For Deaf workers, this may mean the provision of signed language interpreters to enhance access to communication. Initially, interpreters were only provided in settings that received federal funding, but later, legal protections were expanded to cover a broad range of settings, including places where consumers received products and services, private workplaces, and services and employment by state and local governments.

As recognition of the importance of legal protections grew in the late 1960s and early 1970s, studies conducted at the time focused on rehabilitation and workplace success. In one of the first studies of Deaf professionals in the United States, Crammatte (1968) examined various aspects of their professional life and cited communication as one of the “on-the-job problems” (p. 88). In his analysis, Crammatte attributed success to Deaf employees who speak and lipread, while scant mention is made of the use of interpreters in the workplace. His study further puts the onus of communication on Deaf employees by suggesting that they should take personal responsibility for successful communication in order to perform the functions of their jobs.

More recently, literature about interpreting for Deaf employees in the workplace has expanded. For example, Hauser and Hauser (2008) describe the designated interpreter model used in some workplaces in the United States and highlight interpreter decision making related to language register and variation, filtering environmental information, logistics of interpreter placement, and other factors that make the work of these interpreters unique. They argue that the designated interpreter model provides a level of “seamlessness” that would be unlikely to be achieved with even a highly trained and experienced ad hoc interpreter.

In 2008 and 2009 articles, Dickinson and Turner described the issue of interpreters’ role conflict and role confusion in workplace settings in the United Kingdom by examining data derived from interpreter journals along with other sources. Dickinson and Turner traced the source of this conflict, and the resulting interpreter “guilt, anxiety and frustration” (2008, p. 231), to unresolved contradictions about perspectives on interpreter role and the degree to which the interpreter is an active participating third party in the interaction and ultimately in the workplace.
Dickinson (2010, 2013, 2014, 2017) observed that the frequent presence of the same interpreter in the workplace may parallel the benefits described by Hauser and Hauser (2008); however, she cautions that this familiarity may lead to the crossing of personal and professional boundaries that, ironically, the earlier conduit model—the metaphor of the interpreter as an “interpreting machine”—was intended to correct. Dickinson argued that the interpersonal risks inherent in workplace interpreting require a highly trained, self-aware, and reflective signed language interpreter.

Based on the author’s professional experience and anecdotal evidence from members of the Deaf community, it was expected that this study would identify concerns about the amount and quality of interpreting services available in the workplace. To explore this assumption, this study updates and extends earlier work by investigating perceptions of communication access in the workplace among Deaf people who communicate exclusively or primarily through American Sign Language (ASL). Specifically, I seek to illuminate Deaf employees’ perceptions of the role played by signed language interpreters.

Methods

Participants

Eight Deaf employees participated in the study in two separate focus groups. Two participants identified as female and six as male. Seven of the participants were between 40 and 50 years old, and one participant was between 20 and 30 years old. Seven participants identified as Caucasian or white, and one participant identified as Hispanic/white. The participants held the following educational degrees: associate’s degree (n = 1), bachelor’s degree (n = 1), and master’s degree (n = 6). Six participants had 15 to 20 years of experience working in federal government settings. Their tenure with their current employer varied from 6 months to 6 years. Participants’ job titles indicate professional positions in line with their educational attainment (e.g., specialist, analyst, officer), and they worked in a variety of white-collar professions, including human resources, procurement, graphic design, ethics compliance, workplace health and safety, finance, and information technology. Among the participants, there was one attorney, and one participant identified as having a supervisory role. Participants reported federal General Schedule (GS) pay grades roughly evenly distributed between GS-11 and GS-15, which indicates salaries of approximately $73,270 (GS-11, step 5) to $145,162 (GS-15, step 5) per year (Office of Personnel Management, 2016, n.d.). Participants were recruited through the researcher’s personal contacts in the Washington, DC, metropolitan area and were compensated $20 for their involvement in the study.

Materials

The researcher developed a set of questions for use with the focus groups that covered the following topics: satisfaction with job, workplace, workplace communication, and interpreters, with an emphasis placed on the latter two topics. A set of secondary prompts focused on interpreter performance; relationships and connections between Deaf workers, hearing colleagues, and interpreters; and logistical questions about how Deaf workers secure interpreters.

Procedures

Participants were recruited by e-mail to participate in a focus group. Upon arrival at the testing site, each participant was provided refreshments and completed background information, consent to participate, and video consent forms. The focus group was conducted in a private conference room on the campus of Gallaudet University and videotaped using two cameras, with each camera capturing participants on opposite sides of a conference table. Once participants settled into their chairs, I reviewed the consent form, described the study, and began by asking the first question. I moved to subsequent questions once participants seemed to exhaust responses to the previous one. Focus groups lasted approximately 90 minutes.

Analysis

Videotaped interviews were viewed multiple times to allow preferences and perspectives, themes, and categories to emerge in an iterative process. Portions of the data illuminating preferences and perspectives on workplace communication were translated from ASL into English. I first performed open coding on the translated text, followed in subsequent reviews by iterative focused coding.

Results

Key findings from the study fall into four themes: (1) interpreter boundaries, (2) interpreter monitoring strategies, (3) impromptu interpreting, and (4) engagement with institutional systems.

Interpreter Boundaries

Participants discussed various issues tied to boundaries with interpreters. The topics related to boundaries included: (1) the interpreter conveying information to others, (2) interpreter relationships with hearing colleagues, and (3) small talk between the Deaf consumer and the interpreter.

Participants provided examples of interpreters conveying information to others and “stepping out of role” or being “too comfortable.” For example, one participant remarked about what is most bothersome:

Brian:1 When they act like they are one of the team. I don’t want to say “Know your place” but that is kind of what I mean. For example, if a coworker asks the interpreter “What did Brian say about X?” a good interpreter would say “Ask Brian.” The interpreter should not answer the question themselves. I want to know if this coworker didn’t understand me and he should ask me directly. The interpreter should not assume the role of behaving as if they know what I would say and responding. That isn’t their position.

Similarly, one participant expressed concern that conveying information about the Deaf consumer reinforced the notion that the interpreter was the employee’s personal assistant. He stated:

Joshua: The interpreter needs to reinforce that they are not my personal assistant because it can really cause perception problems. I’d rather them say “Just wait until Joshua gets back.” The challenge is how to approach that the right way without sounding rude. It can be a sticky situation especially if the interpreter is there frequently and the office is comfortable with them. It is human nature.

In addition, participants sometimes find interpreter relationships with hearing colleagues in the office problematic. One participant expressed this concern by stating:

Nathan: Even though the interpreter isn’t staff, she is there four days each week and I see that kind of thing happening. I’m trying to figure out how to fix that. She and another woman in the office are friends. I notice they go out to lunch together. It is fine, but . . . it happens fairly often. Other interpreters go to lunch and eat on their own.
I see her eating with other staff more than I do. I think she is a little too comfortable . . . too much at home.

This participant was clearly concerned about the friendship and compared it unfavorably to his own relationships with hearing people in the office. He further stated that interpreter relationships with hearing coworkers in the office resulted in interpreters chatting with hearing people excessively prior to the start of a meeting, not attending to their interpreting in those moments, and missing information. He also mentioned a lack of interpreter availability during the lunch break, which was exacerbated by the office friendship. In this example, the interpreter crossed an unexplored boundary.

Finally, although some Deaf workers did not support interpreters’ connections to hearing coworkers in the office, they may conversely expect a level of personal conversation and connection between themselves and the interpreter. One participant, Larry, stated that he uses conversations to gauge an interpreter’s connection to the Deaf community, dedication to the profession, and ASL fluency. He said, “Are they just in this for the money? After the assignment are they willing to make that personal connection? It’s important. That is part of our culture.” Variations on this perspective were expressed by another participant:

Mia: I also don’t like it when interpreters aren’t warm and friendly. I have one interpreter who gives me the cold shoulder and I don’t like that. I’m really gregarious and like to chat and connect. I’ve mentioned to this interpreter “It seems like you’re really quiet. Is there something wrong or is that just your personality?” I’m not scared to push that a bit. And I really am curious.

In this case, the probing did not help, and the afternoon progressed with the Deaf employee working and the interpreter looking on awkwardly in silence.

Perspectives on the value and need for small talk with interpreters varied among participants. One participant, Kelly, remarked that she likes interpreters to be “friendly but I don’t want to chat a lot with the interpreter. Mia and I are complete opposites. I have a lot to do. After a brief friendly greeting and a minute of small talk, that’s enough.
That’s just my personality.”

Participants gave several examples of boundaries both in the workplace and in relation to the Deaf community. Several participants mentioned that interpreters in the Washington, DC, area felt less engaged with the Deaf community compared with interpreters in other states. One participant hypothesized about what may drive the perception that interpreters are reserved or aloof:

Joshua: I’ve noticed that interpreters often stay somewhat removed from the community because of concerns about “information leak.” I notice when I’m chatting with interpreters in social situations they may slip and mention something that tells me who they work with. The Deaf community is small, and it doesn’t take much to figure out who they’re talking about. It can be really disconcerting for them when it happens. I suspect some interpreters have those boundaries because they don’t want to slip and divulge information about their consumers.

Interpreter Monitoring Strategies

Participants monitor interpreter performance. Sighted Deaf consumers have visual access to the interpreter’s ASL production and can easily monitor the quality of the target language production, but participants also commented on their strategies for monitoring interpreters’ English production in several ways, including: (1) gauging apparent misunderstandings, (2) speech reading, (3) attempts to trigger specific English lexical production, (4) using trusted interpreters as informants, (5) observing interpreter behavior, and (6) intuition.

First, several participants described drawing on apparent misunderstandings during the conversation to provide clues about interpreter performance. Brian remarked, “I think you have to watch how the communication is going. If it is with my boss or someone and there are a lot of misunderstandings, I know something is wrong here. I’ll say, ‘wait a minute.’” This participant mentioned several ways of handling misunderstandings: attempting to clarify on the spot with the same interpreter, following up later with a different interpreter, clarifying using e-mail, or simply ignoring them.

Second, participants stated that they monitor interpreters’ English production by speech reading. As Nathan said, “Most of the time I just pay attention to what they’re saying by reading their lips. If I catch something wrong, I make a correction. Like Brian said that happens a lot with new interpreters if they’re fresh out of college.
I’ll need to interrupt and make corrections.”

Deaf consumers also monitor specific interpreter lexical choices while the interpreter is working into English. As one participant put it:

Brian: Sometimes I’ll catch interpreters who are really making a poor choice of words . . . words that I would never use. I have to stop, make a correction, and then move on. That happens fairly often. I select the words I use carefully. I make clear to interpreters the vocabulary I expect. Interpreters who work with me long enough are right there with me and do a nice job.

Third, participants reported making decisions about their own ASL production based on their predictions of the English word choices of interpreters. One participant, who has a large extended Deaf family and signs ASL in social situations, signs in a much more linear, “English-like,” manner at work because of his perception that it is easier for the interpreter and, further, that he can exercise more control over the resulting English word choices.

Brian: At work I sign in a way that doesn’t require that much interpreting. They really only need to transliterate. A good interpreter will notice that and follow along. Some new, less experienced, interpreters will do a lot more work than they have to and try to interpret. I tell them “You don’t have to! I’m doing your job for you! Just say the words that I’m saying!”

Fourth, participants reported that they evaluate interpreters by asking other trusted interpreters. The definition of “trusted” was relative. In meetings with two interpreters, one participant explained that he would ask the interpreter he has known the longest about the quality of the other interpreter’s work. Several participants commented that apparent high-level fluency when working into ASL may not correspond to highly effective interpreting into English.

Fifth, participants reported observing interpreters’ behaviors to assess the quality of their work, including asking for or receiving a feed, overusing specific ASL discourse markers, and pausing target language production while listening to the source language. Participants in these focus groups did not mention feeds as a strategy to ensure a high-quality target language product; rather, they commented that a feed was indicative of a problem. Interpreters’ overuse of ASL discourse markers intended to hold the floor was also discussed as problematic.
“If they use a filler sign a lot you know they’re missing a lot too. . . . You know, [averted gaze, nodding, and discourse marker], you can tell the interpreter is buying time and not interpreting things.”

Much like the previous point, pausing while listening, even without holding the floor, was not described in a positive manner.

Finally, several participants mentioned that Deaf workers use the gestalt of the interaction to monitor and evaluate interpreter performance. Said Nathan, “I pretty much follow my instinct. Does it feel right?”

Impromptu Interpreting

Participants expressed varying degrees of interest in impromptu interpreting. This type of interpreting relies on having an ad hoc interpreter available. Deaf consumers use impromptu interpreting for: (1) strategic information gathering, (2) general networking, (3) small talk with hearing colleagues, and (4) brief unscheduled meetings. This chapter explores the first of these functions.

Participants engage in strategic information gathering using an ad hoc interpreter. They identify an informant in the office, cultivate a relationship, and then receive information about office politics. In the following passage, the participant describes building a friendly relationship with the department secretary:

Brian: I’ve started using the interpreter a lot to talk with the secretary and I’m really close to her now. She tells me everything that goes on in the office. Everything. I’ll frequently know things that are going on in the office before my boss does. It’s almost like a backup communication system. If I miss something, she fills me in. She tells me everything. I often know more about what is going on behind the scenes in the office than other hearing colleagues. Of course, occasionally she’ll ask me to do favors for her too and that’s fine. For example, I’ve given a talk about Deaf culture to her son’s Boy Scout troop and I’m happy to do it. Whatever keeps her happy. A little thing like that really pays off. If I’m running out for coffee, I’ll get her a cup. She appreciates that. My $2 investment yields a wealth of information. You just don’t realize what a difference it makes. My boss mistreats that secretary horribly and so she passes along dirt about him. We have a great relationship. I make it a practice to be nice to all the secretaries.
Another participant, Nathan, who had not thought of engaging the interpreter in this way, commented, “I have to figure out who that would be for me in my office.” He mused that he always seems to be the last to know when something major is happening in his office, and this may be one reason why.

Engagement with Institutional Systems

Deaf consumers in this study widely viewed their engagement with institutional systems as crucial for increasing satisfaction with workplace interpreting. This includes how Deaf consumers manage bureaucratic systems at their workplace to engage in (1) strategic interpreter selection and (2) strategic interpreter scheduling. This chapter does not explore the important institutional systems that were also discussed (e.g., the mechanics of interpreter requests and approvals, centralized versus decentralized budgeting, agency and interpreter contracts).

Participants manage strategic interpreter selection by choosing specific interpreters to match the setting, the type and goals of the interaction, the participants involved, and the relative importance of the interaction. Participants reported that ongoing interpreters are more convenient in terms of time saved briefing interpreters. Preferred interpreters are able to provide a higher level of interpreting services and a more seamless experience. Deaf consumers also make careful decisions about interpreters who will be working with them on an ongoing basis. Most participants, with one exception, preferred having a small pool of three to five interpreters with whom they work on a weekly basis. Jason was newly hired and scheduled a different interpreter each day as his on-call interpreter in order to get to know them. “I’ve had a couple of interpreters I’ve ‘interviewed’ over the last two weeks. Of those I’ll pick the interpreter who will become my ongoing interpreter.”

In a relatively unusual arrangement, another participant manages his own interpreting budget and contract because it allows him to carefully select his interpreters. He contracts with one interpreter who manages the contract and subcontracts regular days each week out to a small group of preapproved interpreters. Even with the additional access work required on the part of the Deaf employee, he prefers being able to book the interpreters he wants and make a change if someone is not working out.
Participants provide their contracted interpreting agency with criteria about categories of interpreters, in this case new interpreters, and use specific settings to evaluate their effectiveness. As Nathan described, “I always tell the interpreter coordinator to only send me new interpreters when I go to training. I generally won’t be saying much during a training and can just watch the interpreter. They just have to sit there and sign. You have to train them.” This example indicates that this participant views the interpreter working from English into ASL during a medium to large group interaction as relatively low consequence and uses the situation as an opportunity to evaluate overall interpreter skill. For this participant, satisfactory performance working from English to ASL suggests future satisfactory performance when interpreting in a more interactive setting. The consumer uses information about an interpreter’s work into one language to make decisions about higher-stakes interactive communication in which the interpreter will also be working into their other language. This comment also makes it clear that the Deaf worker feels that he or she has a role in preparing interpreters to become more effective in the workplace.

To make effective use of strategic interpreter selection, participants must have some measure of influence over bureaucratic systems related to interpreting services. Participants in this study all expressed a high level of autonomy related to interpreter selection. As Jason put it, “I have full control. I can select whomever I want.” Brian said, “I have the same person coming on a regular day each week. And I have full control. I can replace someone if they aren’t working out.”

Beyond strategic interpreter selection, several Deaf consumers also engage in strategic interpreter scheduling, making deliberate decisions about when not to have interpreters available in the workplace. Deaf consumers schedule on-call interpreters at specific times for strategic reasons. Brian explained how he manages interpreter schedules during the day in order to limit communication access:

Brian: I don’t schedule interpreters after 3:00. My interpreters could work until 4:00. I usually leave for home between 3:30 and 4:00. My boss generally works late and if he asks for a late meeting I can tell him that I can’t because I don’t have an interpreter available. I don’t want to stay that late. You know how the traffic is on the beltway at 5:00. Please. That’s why I leave at 3:30. I’ll just say, “I’m sorry. I don’t have an interpreter. We can meet tomorrow. See you then!” My boss is a lousy scheduler and always asks to see me at the last minute. This way if he asks to see me at the end of the day I have an excuse. My interpreter is my crutch.
I’ll say, “I’m sorry, I just don’t have an interpreter.” Participants schedule on-call interpreters on certain days and not others. Brian also explained that he manages interpreter schedules in order to be more productive by limiting communication access. He does not schedule an interpreter on Fridays in order to be left alone so that he can catch up on work that requires focus. Discussion

174 : Paul B. Harrelson

Based on my professional experience and anecdotal evidence from the Deaf community, I expected that this study would identify concerns about the amount and quality of interpreting services available. Even in workplaces where some level of signed language interpretation is provided, I predicted that Deaf workers' reports of engagement in the workplace and satisfaction with the level of access would be relatively low. In general, this was not the case. Workplace satisfaction considered broadly was not a major emphasis of this study, but most participants in these focus groups enjoyed and were satisfied with their jobs. The focus groups revealed that these participants are dedicated and committed federal employees. Virtually all indicated that pay, the work itself, and a relatively high level of interpreter availability were important to their current job satisfaction.

The first overarching finding is that Deaf workers in this study expressed satisfaction with interpreter-mediated communication in the workplace. The overall high job satisfaction and the associated levels of satisfaction with interpreting services were not predicted. This result may be explained by the fact that the participants were relatively well-paid, highly educated Deaf professionals who have learned to navigate the byzantine federal hiring system and ultimately succeed in their careers. Further, they are all federal government workers in the Washington, DC, area, where there are many interpreters available, a high level of awareness among federal agencies about legal obligations to provide access, and a critical mass of federal Deaf employees for networking and discussions about workplace access strategy.

The second overarching finding is that Deaf workers spend a great deal of time and energy strategically engaging in order to make interpreting in the workplace more effective. These participants are satisfied because they expend a great deal of effort to achieve satisfying results.
They are actively managing interpreters, interpreted interactions, and the levers of the institutional systems within which they operate in order to create a workplace experience that provides satisfactory communication access. This should not be interpreted to suggest that other Deaf employees who are unsatisfied with their communication access in the workplace are somehow to blame. Receptive and well-trained managers, accessibility services professionals, and human resources staff working in a responsive organization are prerequisites. Satisfactory interpreter-mediated workplace communication access should not require highly sophisticated and time-consuming efforts in order to be effective. However, these focus groups suggest that, for these participants, this level of time and commitment is necessary to maintain the quality of access they experience.

Interpreter Boundaries

Interpreter boundaries is a catch-all phrase for a wide range of ideas, beliefs, behaviors, and personal alignments. The labels professional and unprofessional, polite and impolite, rude and friendly may all be applied to the same set of behaviors, and the only difference may be the particular Deaf worker who experiences the behavior. This study does not suggest a one-size-fits-all approach for how interpreters should relate to Deaf consumers, but it reinforces the notion that this is a complex area rife with opportunities for misunderstandings that cause offense. Boundaries are relationships, and, though it sounds trite, relationships are complicated. Interpreters and Deaf consumers require a common vocabulary, perhaps a menu, for expressing preferences and avoiding misunderstandings.

Interpreter Monitoring Strategies

Deaf people monitoring interpreters is an important addition to the list of access work Deaf people perform related to workplace communication. As mentioned in the second overarching finding presented earlier, Deaf professionals are not only advocating for interpreting services, scheduling services, managing logistics, and, in some cases, overseeing the budget related to interpreters; they also supervise interpreters by monitoring interpreter performance. Deaf employees communicate their thoughts and monitor their audience just as their hearing colleagues do, while at the same time monitoring the interpreter. This may seem obvious, but they monitor the interpreter's English production without having auditory access to the product. This divided attention means that multiple cognitively complex processes are happening concurrently, and the impact of this on all parties and on the message is not yet fully understood.
These participants reinforced that many Deaf people assume interpreters should use a one-sign/one-word correspondence while working, which may indicate a misunderstanding of, or at least a lack of agreement about, the task of interpreting. Several comments indicate that these participants assume transliterating is easier than interpreting, and preferable, and frustration on the part of the Deaf consumer increases when interpreters do not seem willing or able to follow this expressed preference.

The manner in which participants described monitoring made it evident that they view themselves as part of the interpreting team. Monitoring interpreter performance may be the result or the cause of their feelings of interpreting team membership, but these participants are aware of the difficulty and unpredictability of the information encountered in workplace settings and actively engage with the interpreter while seeking clarification. Participants described interpreted interactions in which both consumer and interpreter grappled to help one another understand the information presented.

Impromptu Interpreting

Interpreting services are used to replace other, less effective types of information gathering (e.g., speaking, lipreading, and typical text-based communication in the workplace) as Deaf professionals recognize the importance of relationship building and workplace intelligence gathering to a professional's career. Strategic information gathering requires relatively easy access to an available on-call interpreter. The Deaf employee who described this phenomenon most vividly takes a calculated approach to collecting information about office politics, using an interpreter in a way that demonstrates a sophisticated understanding of both effective networking in the workplace and communicating through an interpreter in a way that puts others at ease. Reactions from other participants in the focus group suggest that not all Deaf workers consider this a priority, or they may not have the convenient access to an available on-call interpreter that it requires. Some participants commented that they either had not thought of it or were concerned that it would be viewed as a frivolous, non-work-related use of an interpreter's time.

Engaging Institutional Systems

Strategic interpreter selection was not surprising.
The anecdotal history of interpreter selection has traditionally been one of Deaf people carefully selecting fluent signers with a "good attitude" or Deaf-parented hearing people as their interpreters. Participants in this study report that, when given the chance, they carefully consider the most appropriate interpreter for an assignment. What was surprising was the level of autonomy most participants expressed regarding interpreter selection within their offices. This finding may not hold for other demographic categories of Deaf workers.

A few participants in this study made comments that reveal an orientation toward engagement in interpreter support and development. Deaf consumer investment in the continued improvement of interpreters might have been presumed to have gone by the wayside with the increasing shift to academic preparation for interpreters, but several comments indicated that this perspective is still evident to some degree in the workplace. Whether this role is collegial, voluntary, and enjoyable, or required and provided grudgingly, is unclear. What is clear is that it is an additional measure of uncompensated labor that Deaf workers provide and their non-Deaf colleagues do not.

It is unsurprising that Deaf people request interpreters when they need them. But the comments about strategically not scheduling interpreters were a surprising finding. This concept reinforces the idea that at least some Deaf people have reached the point where they can reliably count on their institutions to provide interpreters and are now free to decide when it makes sense, and actually benefits them, not to have interpreters scheduled. The participant who discussed this idea most thoroughly manages his interpreter schedule in a strategic yet professional way that aligns with institutional goals.

This is an imperfect snapshot of the participants of these focus groups. It is reasonable to assume that Deaf people who work in for-profit or nonprofit enterprises, who do not have a college education, or who work in blue-collar positions may have different perspectives on these issues. This group of participants should not be assumed to present a representative picture of the current state of affairs across all federal agencies. Participants mentioned that their current level of satisfaction with interpreting services was very different from their experience in previous federal workplaces.
Deaf people who still work in those federal agencies are probably also less satisfied with their jobs and the level of access provided. One participant in this study mentioned that even though he was highly satisfied with his job, workplace, and the level of interpreting services he received, he also felt trapped because of the realities in other federal workplaces.

Conclusion

The field of interpreting continues to grapple with understanding quality as a rich description of the service provided rather than a simple good/bad interpreter dichotomy. Interpreting is a complex consumer service, and in order to fully understand its implications, the consumers of this service must be consulted. This study reinforces and expands on the need for additional research in this area, and it offers validation from Deaf and hard of hearing people themselves that, as consumers of these services, their perspectives are important for formulating federal workplace and interpreter services agency policies, interpreter education curricula, and continuing education opportunities for professional interpreters.

Acknowledgments

This study and the resulting qualifying paper for my doctoral program were conducted with the supportive guidance and supervision of Melanie Metzger. Brenda Nicodemus provided invaluable assistance and encouragement in transforming the academic paper into a conference presentation for the 2017 Symposium on Signed Language Interpretation and Translation Research, as well as extensive feedback on the version of the chapter prepared for this volume. I thank the warm and attentive audience who attended my presentation and, most importantly, the participants in my study, who generously agreed to join my focus groups and share their perspectives.

Note

1. All participant names are pseudonyms.

References

Americans With Disabilities Act of 1990, Pub. L. No. 101–336, 104 Stat. 328 (1991).

Crammatte, A. B. (1968). Deaf persons in professional employment. Springfield, IL: Charles C. Thomas.

Dickinson, J. (2010). Access all areas: Identity issues and researcher responsibilities in workplace settings. Text & Talk: An Interdisciplinary Journal of Language, Discourse & Communication Studies, 30(2), 105–124.

Dickinson, J. (2013). One job too many? The challenges facing the workplace interpreter. In C. Schäffner, K. Kredens, & Y. Fowler (Eds.), Interpreting in a changing landscape: Selected papers from Critical Link 6 (pp. 133–148). Amsterdam, the Netherlands: John Benjamins Publishing Company.

Dickinson, J. (2017). Signed language interpreting in the workplace. Washington, DC: Gallaudet University Press.

Dickinson, J., & Turner, G. (2008). Sign language interpreters and role conflict in the workplace. In A. Martin & C. Valero Garces (Eds.), Crossing borders in community interpreting: Definitions and dilemmas (pp. 231–244). Amsterdam, the Netherlands: John Benjamins.

Dickinson, J., & Turner, G. (2009). Forging alliances: The role of the sign language interpreter in workplace discourse. In R. de Pedro Ricoy, I. A. Perez, & C. W. L. Wilson (Eds.), Interpreting and translating in public service settings (pp. 171–183). Manchester, United Kingdom: St. Jerome Publishing.

Hauser, A., & Hauser, P. (2008). The Deaf professional-designated interpreter model. In P. Hauser, K. Finch, & A. Hauser (Eds.), Deaf professionals and designated interpreters: A new paradigm (pp. 3–21). Washington, DC: Gallaudet University Press.

Lane, H., Hoffmeister, R., & Bahan, B. (1996). Journey into the Deaf-world. San Diego, CA: Dawn Sign Press.

Office of Personnel Management. (n.d.). General schedule classification and pay. Retrieved from https://www.opm.gov/policy-data-oversight/pay-leave/pay-systems/general-schedule/

Office of Personnel Management. (2016). Salary table 2016-DCB. Retrieved from https://www.opm.gov/policy-data-oversight/pay-leave/salaries-wages/salary-tables/pdf/2016/DCB.pdf

Padden, C., & Humphries, T. (1988). Deaf in America: Voices from a culture. Cambridge, MA: Harvard University Press.

Rehabilitation Act of 1973, Pub. L. No. 93–112, 87 Stat. 355 (1973).

Waddell, G., & Burton, A. K. (2006). Is work good for your health and well-being? London, United Kingdom: The Stationery Office. Retrieved from www.gov.uk/government/uploads/system/uploads/attachment_data/file/214326/hwwb-is-work-good-for-you.pdf


Contributors

Beppie van den Bogaerde, Professor Emerita, University of Amsterdam / Utrecht University of Applied Sciences, Amsterdam, The Netherlands

Carolina Mmbro Buadee, Alumna, Western Oregon University, Monmouth, Oregon

Eline Devoldere, Alumna, Faculty of Arts, Campus Antwerpen, KU Leuven, Antwerp, Belgium

Daniel Fobi, PhD Candidate (Deaf Education), School of Education, Hilary Place, University of Leeds, West Yorkshire, England, United Kingdom

Aurélia Nana Gassa Gonga, PhD Candidate, Radboud University, Nijmegen, The Netherlands

Annemiek Hammer, Senior Lecturer, VU University Amsterdam, Amsterdam, The Netherlands

Paul B. Harrelson, Assistant Professor/MAI Program Coordinator, Department of Interpretation and Translation, Gallaudet University, Washington, DC

Sanyukta Jaiswal, Associate Professor, Department of Hearing, Speech and Language Sciences, Gallaudet University, Washington, DC

Eric Klein, Speech-Language Pathologist, St. Mark's Hospital, Salt Lake City, Utah

Elisa Maroney, Professor, Western Oregon University, Monmouth, Oregon

Brenda Nicodemus, Professor/Research Center Director, Department of Interpretation and Translation, Gallaudet University, Washington, DC

Jan Nijen Twilhaar, Associate Professor, Utrecht University of Applied Sciences, Utrecht, The Netherlands

Brenda Puhlman, Adjunct Instructor, Western Oregon University, Monmouth, Oregon

Eli Raanes, Associate Professor, Norwegian University of Science and Technology, ILU, Faculty for Teacher and Interpreter Education, Trondheim, Norway

Brenda Seal, Gallaudet University, Washington, DC

Myriam Vermeerbergen, Professor and Vice Dean of Education, Faculty of Arts, Campus Antwerpen, KU Leuven, Antwerp, Belgium, and Research Associate, Stellenbosch University, Stellenbosch, South Africa

Jihong Wang, Lecturer in Mandarin/English Interpreting and Translation, The University of Queensland, Brisbane, Queensland, Australia

Index

Figures, notes, and tables are indicated by f, n, and t following page numbers. access to environmental description. See haptic signals access to interpreters, 38, 53n8. See also workplace interpreters accommodations, 164 accuracy of interpretation confidence and trust in interpretation and, 139 long time lags and, 120, 125–26 misunderstandings and, 126, 170, 176 monitoring interpreter performance for, 170–71, 176–77 onset time lags and, 115–17, 116–17t prosodic features and, 132–33 short time lags and, 111 Adam, Robert, ix–x Adamorobe Sign Language, 20 ad hoc interpreters, 165, 172 Adu, J., 22 adult skill acquisition model (Dreyfus), 9, 9f, 11 AFILS (French Association of Sign Language Interpreters and Translators), 36, 39–41 Alley, Erica, xii, 147 American Sign Language (ASL) Ghanaian Sign Language and, 20 prosodic features and, 132–46. See also prosodic features of interpreting workplace interpreters, 164–80. See also workplace interpreters Americans With Disabilities Act of 1990 (ADA), 147, 164 Andriessen, D., 7 Arbix, G., 151 ASL. See American Sign Language

Association for the Management of the Fund for the Professional Insertion of Disabled Persons (AGEFIPH), 38, 53n8 associations of sign language interpreters, 36 Association Sourds Interprètes, 40 Australian Sign Language (Auslan). See simultaneous interpreting processing time autonomy of interpreters, 148–51, 161 Bacci, Alain, 42 Barik, H. C., 111 Berthier, Ferdinand, 36 Bologna Declaration (1999), 5–6, 11–12 Bolton, S., 151–52 boundaries. See professional boundaries Brophy, E., 148, 149 Buadee, Carolina Mmbro, x, 20, 24, 28–29 Burton, A. K., 164 call centers. See video relay services CAs (communications assistants), 149. See also video relay services Center for the Advancement of Interpretation and Translation Research (CAITR), ix Center for Translation, Interpretation and Linguistic Mediation (CETIM), 41, 54n19 Churlet, Noémie, 44, 54n23 civic engagement, 45, 52 clarification requests, 67–68, 74–75, 156, 177 Index  :  183

Clerc, Laurent, 37 cochlear implants, 81, 91–92 Codas (children of deaf adults), 37, 92–93, 96–98, 106n2 cognitive balance. See simultaneous interpreting processing time cognitive grammar, 75 Cokely, D., 82, 83, 111, 123 communication in workplace settings, 165. See also workplace interpreters communications assistants (CAs), 149. See also video relay services competency-based learning, 8–11, 9–10f conduit interpreter model, 165 confidentiality, 37, 103, 147 Congress of Milan (1880), 37 consecutive vs. simultaneous interpreting, 23 country names, interpretation of, 50, 50–51f Crammatte, A. B., 165 critical professional attitude, 4, 12–13 critical thinking, 12 cultural brokers, 40–41, 54n17 curriculum, research skills in. See research curriculum in sign language education customer service. See video relay services deaf awakening, 37 deaf-blind persons. See haptic signals deaf community competence in working with, 9 defined, 94 French context for, 36–37 interpreter boundaries and, 96, 169–70 interpreter relations and, 80–107. See also Flemish sign language interpreters news, translation and interpretation of, 44–51, 52 184 : Index

on professionalization of interpreters, 39 recognition of French sign language and, 38 Third Culture and, 93 workplace interpreters and, 169–70 deaf intermediators, 40–41, 54nn17–18 deaf interpreters and translators, 83, 94, 96. See also French deaf interpreters and translators deaf mediators, 40, 54n17 deaf-same principle, 42 Deaf studies, 12–13. See also research curriculum in sign language education Dean, R. K., 26 décalage in simultaneous interpreting. See simultaneous interpreting processing time Defrancq, B., 111 Demand-Control Schema analysis, 26 Denmark, deaf interpreters in, 36 Department of Interpretation and Translation’s (DoIT), ix designated interpreter model, 165 Devoldere, Eline, xi, 80 Díaz-Galaz, S., 110 Dickinson, J., 165 Disability Compensation Benefit (PCH; France), 38, 53n10 discourse markers, 171 Doucet, L., 148 Dreyfus, Stuart E., 9, 9f Dublin Descriptors, 6 ear-voice span in simultaneous interpreting. See simultaneous interpreting processing time EBP (evidence-based practice), 5 ECTS (European Credit Transfer System), 11–12 educational policy, 5–6 education of deaf people, 20–23, 37–38, 81

education of interpreters and translators in Flanders, 81–82 in France, 37, 39–41, 53n6, 54n19 haptic signal instruction, 59 professional development, 26, 31 in research, 3–19. See also research curriculum in sign language education ELAN (video annotation software), 113 emergency VRS calls, 159 emotional expression in haptic signals, 58, 68–71, 69–71f emotional tone and intent of messages, 132–33, 140–41 employment of deaf people as deaf mediators, 54n17 as deaf translators, 51 equal rights laws and, 38 job satisfaction, 175 LSF as official language and, 38 workplace characteristics and well-being, 164 work setting interpretation for, 164–80. See also workplace interpreters employment of interpreters and translators in education settings, 20–35. See also Ghana, interpreters in education settings professionalization of interpreters and, 82–83 volunteering and, 33 in VRS and call centers, 147–63. See also video relay services in work settings, 164–80. See also workplace interpreters end negation interpreting strategies, 121–26, 127 environmental description. See haptic signals ethical code for interpreters, 37, 41 ethnographic videography, 61

European Credit Transfer System (ECTS), 11–12 European Union Bologna Declaration (1999), 5–6, 11–12 evidence-based practice (EBP), 4, 5 Facebook for deaf translation of news, x–xi, 44–51, 45f, 47–51f false starts, 126 Federal Communications Commission (FCC), 147 Federation of Flemish Deaf Organizations (Fevlado), 81–82 feedback signals, 67, 68–72, 69–71f fidelity in interpreting, 132, 133, 140–42 filler signs, 171 fingerspelling, 47–50 Finland, deaf interpreters in, 36 Flemish sign language interpreters, xi, 80–107 background, 81 different groups compared, 101–2 illustrated viewpoints, 99–101, 100–101f methods of research, 84–86, 87–88t position of interpreters, 102–5 professionalization of interpreters and, 81–83 results of research, 89–99, 90t, 95t topic and question for research, 83–84 Fobi, Daniel, x, 20, 22, 24, 30–32 Fobi, J., 22 footing shifts, 154 Foster, Andrew Jackson, 20 French Association of Sign Language Interpreters and Translators (AFILS), 36, 39–41 French deaf interpreters and translators, x–xi, 36–57 deaf intermediators, 40–41 deaf interpreters, 40 deaf interpreters in English-speaking literature, 42–43 Index  :  185

deaf translators, 41–42 French context for, 36–37 future prospects for, 51–52 hearing interpreters, 38–39 Paris Attacks (2015), 43–51, 45f, 47–51f recognition of LSF and, 38 French Sign Language (LSF), 37–38 Gache, Patrick, 42 Gallaudet, Thomas Hopkins, 37 Gallaudet University, ix, 20, 37 gender, vocal expressions and, 140–41 Ghana, interpreters in education settings, x, 20–35 anecdotes of GSL interpreters, 28–32 discussion on, 33 findings of research on, 24–27, 26–27t future directions for, 33–34 limitations of research on, 21 literature review of, 22–23 methods of research, 23–24 Ghanaian Sign Language (GSL), 20 Goffman, E., 74–75, 154 Gonga, Aurélia Nana Gassa, x, 36 Greve, D., 7 Griffioen, D. M. E., 16 Hairston, E., 20 Hammer, Annemiek, x, 3 haptic signals, xi, 58–79 access to environmental description and, 59–60 description of environment, 62–64, 63f description of other persons and actions, 64–65, 65f development of, 60–61 discussion on, 72–76 establishing attention and common arena, 65–68, 66f functions for, 62–71

186 : Index

as interactional approach to communication, 61 interpreting for deaf-blind people and, 59 mediating feedback signals, 68–71, 69–71f methods of research, 61–62 hard of hearing people. See deaf community Harrelson, Paul B., xii, 164 Hauser, A., & Hauser, P., 165 hearing educators and researchers Deaf community and, 93–94 interpreter role, lack of understanding on, 26 research curriculum in sign language education and, 3, 5, 8, 15–16 Houlihan, M., 151–52 Hunt, Danielle I. J., ix image use in deaf translation, 50, 50–51f impromptu interpreting, 172, 177 inaccurate interpretations. See accuracy of interpretation inclusive education, 20–23 innovative professional attitude, 13 inquisitive professional attitude, 12 institutional systems and workplace interpreters, 172–74, 177–78 intent equivalence, 133, 139 internships, 30–33 interpreter educators, 3, 5, 8, 15–16 interpreter-generated utterance, 154 interpreters and translators accuracy and. See accuracy of interpretation associations for, 36 autonomy of, 148–51, 161 confidentiality and, 103, 147 consecutive vs. simultaneous, 23 deaf intermediators for, 40–41, 54nn17–18

deaf interpreters and translators, 83, 94, 96 deaf mediators for, 40, 54n17 education of. See education of interpreters and translators employment of. See employment of interpreters and translators ethical code for, 37, 41 fidelity of, 132, 133, 140–42 Flemish interpreters and Deaf community, 80–107. See also Flemish sign language interpreters in France, 36–57. See also French deaf interpreters and translators gender, vocal expressions and, 140–41 in Ghana, 20–36. See also Ghana, interpreters in education settings haptic signals for, 58–79. See also haptic signals memory of, 73, 127 on-call, 174, 177 pauses in interpreting and, 121, 123–24, 171 professional attitudes of, 11–13 professional boundaries and. See professional boundaries professional competencies for, 8–11, 9–10f professional development of, 26, 31 professionalization of. See professionalization prosodic features and, 132–46. See also prosodic features of interpreting research curriculum in education for, 3–19. See also research curriculum in sign language education simultaneous interpreting processing time, 108–31. See also simultaneous interpreting processing time

video relay services and, 147–63. See also video relay services volunteer work of, 28–29, 33, 52 in workplaces, 164–80. See also workplace interpreters Interprofessional collaboration, 144 Jaiswal, Sanyukta, xii, 132 Jehovah’s Witnesses, 34 job satisfaction of deaf people, 175 Klein, Eric, xii, 132 lag time in simultaneous interpreting. See simultaneous interpreting processing time Lamberger-Felber, H., 111 language, defined, 75 Langue des Signes de Belgique Francophone, 81 Law for Equal Rights and Opportunities, Participation, and Citizenship of Persons with Disabilities (France, 2005), 38 Lee, G. R., 70 Lee, R., 150 Lee, T., 111, 126 lipreading, 165, 170 literal interpretations, 126 literature search skills, 13 Llewellyn-Jones, P., 70, 150 Loctin, Laurène, 44 LSF (French Sign Language), 37–38. See also French deaf interpreters and translators Mantey, K. A., 23 Marks, A., 154 Maroney, Elisa, x, 20 Mascia, F. L., 151 Massieu, Jean, 36 McKee, R., 111, 120, 123 meaning-making, 72, 74 media. See news, translation and interpretation of Index  :  187

meeting with deaf-blind persons, 58–79. See also haptic signals funding for provision of interpreters in, 38 workplace interpreters for, 164–80. See also workplace interpreters memory of interpreters haptic signal use and, 73 time lag in simultaneous interpretation and, 127 message fidelity, 132, 133, 140–42 Mindess, A., 83 minimal response, 58, 65–68, 70 misrepresentations, 133. See also fidelity in interpreting mistakes. See accuracy of interpretation misunderstandings, 126, 170, 176 Monikowksi, C. T., 3 monitoring interpreter performance, 170–71, 176–77 name signs, 47–50, 50–51f Nanabin Sign Language, 20 Napier, J., 111, 120, 123 Nederlandse Gebarentaal (NGT; Sign Language of the Netherlands), 11–14, 12t Netherlands, interpreter education in. See research curriculum in sign language education news, translation and interpretation of future prospects for, 52 social media and, x–xi, 44–51, 45f, 47–51f on television, 38, 44, 54n21 Nicodemus, Brenda, xii, 132 Nijen Twilhaar, Jan, x, 3 Nilsson, A.-L., 121 notetakers, 22 omissions, 111, 123, 126 on-call interpreters, 174, 177 one-sign/one-word correspondence, 176 188 : Index

Oppong, A. M., 20, 22 oral method of deaf education, 37 O’Regan, K., 11 Paris 3 University, 37 Paris Attacks (2015), 43–51, 45f, 47–51f pauses in interpretation, 121, 123–24, 171 people with disabilities, 164 personal information and VRS calls, 157 Peterson, R., 150 phonological cues, 132 Pollard, R. Q, 26 postlingual hearing loss, 23 Praat (prosody analysis tool), 136 processing time. See simultaneous interpreting processing time professional attitudes, 11–13 professional autonomy, 148–51, 161 professional boundaries in Deaf communities, 96, 102–4 deaf intermediators and, 41 for workplace interpreters, 165, 167–70, 176 professional competencies, 8–11, 9–10f professional development, 26, 31 professionalization of deaf intermediators, 40 of deaf interpreters and translators, 36–37, 39–40, 43–44 of Flemish sign language interpreters, 81–83, 102 of interpreter educators, 3, 8, 15–16 prosodic features of interpreting, xi– xii, 132–46 discussion, 139–44, 139f, 140t limitations of research, 142–44 methods of research, 134–36 results of research, 136–39, 137t, 138f Puhlman, Brenda, x, 20 Raanes, Eli, xi, 58 Rafaeli, A., 148, 152–60

reflection and development competencies, 10f, 11 reflective professional attitude, 13 Rehabilitation Act (1973), 164 rehabilitation programs for deaf-blind people, 59 remote interpretation, 38. See also video relay services research, defined, 6–7 research curriculum in sign language education, x, 3–19 current state and future avenues, 14–16 evolving research in professional education, 5–8, 7f professional competencies, 8–11, 9–10f research curriculum, 11–14, 12t research practitioners, 3–4 Research Skill Development Network (Willison & O’Regan), 11 Sandstrom, R., 149 Sangla, Jacques, 42 scheduling of interpreters, 172–74, 177–78 schematization, 46, 55n29 schools for the deaf, 20–21 Seal, Brenda, xii, 132 selection of interpreters, 172–74, 177–78 Shared Dublin Descriptors, 6 Shaw, Emily, ix Signed Language Interpretation and Translation Research Symposium (2017), ix signed language interpreting services. See interpreters and translators Sign Language of the Netherlands (Nederlandse Gebarentaal; NGT), 11–14 simultaneous interpreting processing time, xi, 108–31 literature review, 109–12 long time lags, 125–26

  methods of research, 112–13, 130–31t
  onset time lags and accuracy, 115–17, 116–17t
  results of research, 113–26
  substantial variability in time lags, 113–15, 114–15t
  time lags, consequences of, 126
  time lags and effective interpreting strategies, 118–25
simultaneous vs. consecutive interpreting, 23
Skåren, A.-L., 72
small talk, 169
Smith, L., 20
social awareness, 12
social gatherings, 22
social media for deaf translation of news, x–xi, 44–51, 45f, 47–51f
Sorenson Communications, 149
source language intrusion, 126
speech reading, 165, 170
strategic information gathering, 172, 177
strategic interpreter selection and scheduling, 172–74, 177–78
Stroesser, Pauline, 44
Supreme Sign Institute, 34
Sznelwar, L. I., 151
tactile communication. See haptic signals
Taylor, J. R., 75
Taylor, M. M., 121
technology limitations in VRS, 156
television news interpretation, 38, 44, 54n21
theory of interaction (Goffman), 74–75
thesis writing, 13–15
Third Culture, 93
Timarová, Š., 110–11
time lags in simultaneous interpreting. See simultaneous interpreting processing time
touch communication. See haptic signals

Index : 189

translators. See interpreters and translators
transliterating, 171, 176
Trine, E., 24
trust, 103–4, 139
Turner, G., 165
turn taking, 58, 65–68, 66f, 73–74
ungrammatical or unidiomatic use of target language, 126
United States
  deaf-blind people and the tactile movement in, 60
  deaf interpreters in, 36
  prosodic features and interpreting, 132–46. See also prosodic features of interpreting
  tactile sign language in, 60
  workplace interpreters in, 164–80. See also workplace interpreters
University of Applied Sciences, Netherlands, 4
University of Education, Winneba (UEW), 20. See also Ghana, interpreters in education settings
University of Toulouse–Jean Jaurès, France, 41, 54n19
Vaes, Lena, 106n6
van den Bogaerde, Beppie, ix–x, 3
Vermeerbergen, Myriam, xi, 80
videophones, 149
video relay services (VRS), xii, 147–63
  anticipating customer needs, 153–55
  customer service, 151–61
  discussion, 161
  educating the customer, 156–58
  federal regulations and, 147–48
  offering explanations and justifications, 155–56


  offering personalized information, 159–61
  overview, 151–53
  prosodic features and, 132–33
  providing emotional support, 158–59
  VRS and call center work, 148–51
Vlaamse Gebarentaal [VGT], 81. See also Flemish sign language interpreters
volunteer work, 28–29, 33, 52
Waddell, G., 164
Wadensjö, C., 70
Wang, Jihong, xi, 108
Websourd (bilingual website), 41–42
Western Oregon University (WOU), 31, 34
Winston, E. A., 3
workplace interpreters, xii, 164–80
  discussion, 174–78
  impromptu interpreting, 172, 177
  institutional systems and, 172–74, 177–78
  interpreter boundaries and, 167–70, 176
  interpreter monitoring strategies, 170–71, 176–77
  literature review, 164–65
  methods of research, 166–67
  results of research, 167–74
World Association of Sign Language Interpreters, 59
written text in deaf translation, 47–50, 47–49f
Xiaoyan Xiao, ix–x
Ziklik, L., 148
Zilbovicious, M., 151